Updates from: 08/19/2022 01:13:51
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-domain-services Concepts Migration Benefits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-migration-benefits.md
Previously updated : 05/26/2020 Last updated : 08/17/2022
active-directory-domain-services Concepts Replica Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/concepts-replica-sets.md
Previously updated : 03/30/2021 Last updated : 08/17/2022
active-directory-domain-services Create Gmsa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-gmsa.md
Previously updated : 07/06/2020 Last updated : 08/17/2022
active-directory-domain-services Create Ou https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/create-ou.md
Previously updated : 07/06/2020 Last updated : 08/17/2022
active-directory-domain-services Deploy Azure App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-azure-app-proxy.md
Previously updated : 03/07/2022 Last updated : 08/17/2022
active-directory-domain-services Deploy Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-kcd.md
Previously updated : 07/06/2020 Last updated : 08/17/2022
active-directory-domain-services Deploy Sp Profile Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/deploy-sp-profile-sync.md
Previously updated : 10/05/2021 Last updated : 08/17/2022
active-directory-domain-services Manage Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-dns.md
Previously updated : 09/16/2021 Last updated : 08/17/2022
active-directory-domain-services Manage Group Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/manage-group-policy.md
Previously updated : 07/26/2021 Last updated : 08/17/2022
active-directory-domain-services Mismatched Tenant Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/mismatched-tenant-error.md
Previously updated : 07/09/2020 Last updated : 08/17/2022
active-directory-domain-services Notifications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/notifications.md
Previously updated : 07/06/2020 Last updated : 08/17/2022
active-directory-domain-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/overview.md
Previously updated : 06/15/2022 Last updated : 08/17/2022
active-directory-domain-services Password Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/password-policy.md
Previously updated : 08/11/2021 Last updated : 08/17/2022
active-directory-domain-services Powershell Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-create-instance.md
Previously updated : 06/17/2022 Last updated : 08/17/2022
active-directory-domain-services Powershell Scoped Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/powershell-scoped-synchronization.md
Previously updated : 03/07/2022 Last updated : 08/17/2022
active-directory-domain-services Secure Remote Vm Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-remote-vm-access.md
Previously updated : 07/09/2020 Last updated : 08/17/2022
active-directory-domain-services Secure Your Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/secure-your-domain.md
Previously updated : 07/21/2021 Last updated : 08/17/2022
active-directory-domain-services Security Audit Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/security-audit-events.md
Previously updated : 07/06/2020 Last updated : 08/07/2022
active-directory-domain-services Suspension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/suspension.md
Previously updated : 07/09/2020 Last updated : 08/17/2022
active-directory-domain-services Template Create Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/template-create-instance.md
Previously updated : 03/04/2022 Last updated : 08/17/2022
active-directory-domain-services Troubleshoot Account Lockout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-account-lockout.md
Previously updated : 12/15/2021 Last updated : 08/17/2022 #Customer intent: As a directory administrator, I want to troubleshoot why user accounts are locked out in an Azure Active Directory Domain Services managed domain.
active-directory-domain-services Troubleshoot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-alerts.md
Previously updated : 06/07/2021 Last updated : 08/17/2022
active-directory-domain-services Troubleshoot Domain Join https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-domain-join.md
Previously updated : 07/06/2020 Last updated : 08/07/2022 #Customer intent: As a directory administrator, I want to troubleshoot why VMs can't join an Azure Active Directory Domain Services managed domain.
active-directory-domain-services Troubleshoot Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot-sign-in.md
Previously updated : 07/06/2020 Last updated : 08/07/2022 #Customer intent: As a directory administrator, I want to troubleshoot user account sign in problems in an Azure Active Directory Domain Services managed domain.
active-directory-domain-services Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/troubleshoot.md
Previously updated : 07/06/2020 Last updated : 08/17/2022
active-directory-domain-services Tshoot Ldaps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-domain-services/tshoot-ldaps.md
Previously updated : 07/09/2020 Last updated : 08/17/2022
active-directory Accidental Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/accidental-deletions.md
Title: Enable accidental deletions prevention in Application Provisioning in Azure Active Directory
description: Enable accidental deletions prevention in Application Provisioning in Azure Active Directory. -+
active-directory Application Provisioning Config Problem No Users Provisioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-no-users-provisioned.md
Title: Users are not being provisioned in my application
description: How to troubleshoot common issues faced when you don't see users appearing in an Azure AD Gallery Application you have configured for user provisioning with Azure AD -+
active-directory Application Provisioning Config Problem Scim Compatibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem-scim-compatibility.md
Title: Known issues with System for Cross-Domain Identity Management (SCIM) 2.0
description: How to solve common protocol compatibility issues faced when adding a non-gallery application that supports SCIM 2.0 to Azure AD -+
active-directory Application Provisioning Config Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-config-problem.md
Title: Problem configuring user provisioning to an Azure Active Directory Gallery application
description: How to troubleshoot common issues faced when configuring user provisioning to an application already listed in the Azure Active Directory Application Gallery -+
active-directory Application Provisioning Configuration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-configuration-api.md
Title: Configure provisioning using Microsoft Graph APIs
description: Learn how to save time by using the Microsoft Graph APIs to automate the configuration of automatic provisioning. -+
active-directory Application Provisioning Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-log-analytics.md
Title: Understand how Provisioning integrates with Azure Monitor logs in Azure Active Directory
description: Understand how Provisioning integrates with Azure Monitor logs in Azure Active Directory. -+
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
Title: Quarantine status in Azure Active Directory Application Provisioning
description: When you've configured an application for automatic user provisioning, learn what a provisioning status of Quarantine means and how to clear it. -+
active-directory Application Provisioning When Will Provisioning Finish Specific User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-when-will-provisioning-finish-specific-user.md
Title: Find out when a specific user will be able to access an app in Azure Active Directory
description: How to find out when a critically important user will be able to access an application you have configured for user provisioning with Azure Active Directory -+
active-directory Check Status User Account Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/check-status-user-account-provisioning.md
Title: Report automatic user account provisioning from Azure Active Directory to
description: 'Learn how to check the status of automatic user account provisioning jobs, and how to troubleshoot the provisioning of individual users.' -+
active-directory Configure Automatic User Provisioning Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/configure-automatic-user-provisioning-portal.md
Title: User provisioning management for enterprise apps in Azure Active Directory
description: Learn how to manage user account provisioning for enterprise apps using the Azure Active Directory. -+
active-directory Customize Application Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/customize-application-attributes.md
Title: Tutorial - Customize Azure Active Directory attribute mappings in Application Provisioning
description: Learn what attribute mappings for Software as a Service (SaaS) apps in Azure Active Directory Application Provisioning are and how you can modify them to address your business needs. -+
active-directory Define Conditional Rules For Provisioning User Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/define-conditional-rules-for-provisioning-user-accounts.md
Title: Use scoping filters in Azure Active Directory Application Provisioning
description: Learn how to use scoping filters to prevent objects in apps that support automated user provisioning from being provisioned if an object doesn't satisfy your business requirements in Azure Active Directory Application Provisioning. -+
active-directory Export Import Provisioning Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/export-import-provisioning-configuration.md
Title: Export Application Provisioning configuration and roll back to a known good state
description: Learn how to export your Application Provisioning configuration and roll back to a known good state for disaster recovery in Azure Active Directory. -+
active-directory Expression Builder https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/expression-builder.md
Title: Understand how expression builder works with Application Provisioning in Azure Active Directory
description: Understand how expression builder works with Application Provisioning in Azure Active Directory. -+
active-directory Functions For Customizing Application Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/functions-for-customizing-application-data.md
Title: Reference for writing expressions for attribute mappings in Azure Active Directory Application Provisioning description: Learn how to use expression mappings to transform attribute values into an acceptable format during automated provisioning of SaaS app objects in Azure Active Directory. Includes a reference list of functions. -+
active-directory How Provisioning Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/how-provisioning-works.md
Title: Understand how Application Provisioning works in Azure Active Directory
description: Understand how Application Provisioning works in Azure Active Directory. -+
active-directory Hr Attribute Retrieval Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-attribute-retrieval-issues.md
Title: Troubleshoot attribute retrieval issues with HR provisioning description: Learn how to troubleshoot attribute retrieval issues with HR provisioning -+
active-directory Hr Manager Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-manager-update-issues.md
Title: Troubleshoot manager update issues with HR provisioning description: Learn how to troubleshoot manager update issues with HR provisioning -+
active-directory Hr User Creation Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-creation-issues.md
Title: Troubleshoot user creation issues with HR provisioning description: Learn how to troubleshoot user creation issues with HR provisioning -+
active-directory Hr User Update Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-user-update-issues.md
Title: Troubleshoot user update issues with HR provisioning description: Learn how to troubleshoot user update issues with HR provisioning -+
active-directory Hr Writeback Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/hr-writeback-issues.md
Title: Troubleshoot write back issues with HR provisioning description: Learn how to troubleshoot write back issues with HR provisioning -+
active-directory Isv Automatic Provisioning Multi Tenant Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/isv-automatic-provisioning-multi-tenant-apps.md
Title: Enable automatic user provisioning for multi-tenant applications in Azure Active Directory
description: A guide for independent software vendors for enabling automated provisioning in Azure Active Directory -+
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/known-issues.md
Title: Known issues for application provisioning in Azure Active Directory
description: Learn about known issues when you work with automated application provisioning in Azure Active Directory. -+
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
Title: 'Azure AD on-premises application provisioning architecture | Microsoft Docs'
description: Presents an overview of on-premises application provisioning architecture. -+
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
Title: 'Troubleshooting issues with provisioning to on-premises applications'
description: Describes how to troubleshoot various issues you might encounter when you install and use the ECMA Connector Host. -+
active-directory On Premises Ldap Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-ldap-connector-configure.md
Title: Azure AD Provisioning to LDAP directories (preview)
description: This document describes how to configure Azure AD to provision users into an LDAP directory. -+
active-directory On Premises Migrate Microsoft Identity Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-migrate-microsoft-identity-manager.md
Title: 'Export a Microsoft Identity Manager connector for use with the Azure AD ECMA Connector Host'
description: Describes how to create and export a connector from MIM Sync to be used with the Azure AD ECMA Connector Host. -+
active-directory On Premises Scim Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-scim-provisioning.md
Title: Azure AD on-premises app provisioning to SCIM-enabled apps description: This article describes how to use the Azure AD provisioning service to provision users into an on-premises app that's SCIM enabled. -+
active-directory On Premises Sql Connector Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/on-premises-sql-connector-configure.md
Title: Provisioning users into SQL based applications using the ECMA Connector host
description: Provisioning users into SQL based applications using the ECMA Connector host -+
active-directory Partner Driven Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/partner-driven-integrations.md
Title: 'Use partner driven integrations to provision accounts into all your applications'
description: Use partner driven integrations to provision accounts into all your applications. -+
active-directory Plan Auto User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-auto-user-provisioning.md
Title: Plan an automatic user provisioning deployment for Azure Active Directory
description: Guidance for planning and executing automatic user provisioning in Azure Active Directory -+
active-directory Plan Cloud Hr Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/plan-cloud-hr-provision.md
Title: Plan cloud HR application to Azure Active Directory user provisioning
description: This article describes the deployment process of integrating cloud HR systems, such as Workday and SuccessFactors, with Azure Active Directory. Integrating Azure AD with your cloud HR system results in a complete identity lifecycle management system. -+
active-directory Provision On Demand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provision-on-demand.md
Title: Provision a user or group on demand using the Azure Active Directory provisioning service
description: Learn how to provision users on demand in Azure Active Directory. -+
active-directory Provisioning Agent Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/provisioning-agent-release-version-history.md
Title: Azure Active Directory Connect Provisioning Agent - Version release history
description: This article lists all releases of Azure Active Directory Connect Provisioning Agent and describes new features and fixed issues. -+
active-directory Sap Successfactors Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-attribute-reference.md
Title: SAP SuccessFactors attribute reference for Azure Active Directory
description: Learn which attributes from SuccessFactors are supported by SuccessFactors-HR driven provisioning in Azure Active Directory. -+
active-directory Sap Successfactors Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/sap-successfactors-integration-reference.md
Title: Azure Active Directory and SAP SuccessFactors integration reference
description: Technical deep dive into SAP SuccessFactors-HR driven provisioning for Azure Active Directory. -+
active-directory Scim Graph Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/scim-graph-scenarios.md
Title: Use SCIM, Microsoft Graph, and Azure Active Directory to provision users
description: Using SCIM and the Microsoft Graph together to provision users and enrich your application with the data it needs in Azure Active Directory. -+
active-directory Tutorial Ecma Sql Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/tutorial-ecma-sql-connector.md
Title: Azure AD Provisioning to SQL applications (preview)
description: This tutorial describes how to provision users from Azure AD into a SQL database. -+
active-directory Use Scim To Build Users And Groups Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-build-users-and-groups-endpoints.md
Title: Build a SCIM endpoint for user provisioning to apps from Azure Active Directory
description: Learn to develop a SCIM endpoint, integrate your SCIM API with Azure Active Directory, and automatically provision users and groups into your cloud applications. -+ Previously updated : 05/11/2021 Last updated : 08/18/2022 # Tutorial: Develop a sample SCIM endpoint in Azure Active Directory
-No one wants to build a new endpoint from scratch, so we created some [reference code](https://aka.ms/scimreferencecode) for you to get started with [System for Cross-domain Identity Management (SCIM)](https://aka.ms/scimoverview). You can get your SCIM endpoint up and running with no code in just five minutes.
-
-This tutorial describes how to deploy the SCIM reference code in Azure and test it by using Postman or by integrating with the Azure Active Directory (Azure AD) SCIM client. This tutorial is intended for developers who want to get started with SCIM, or anyone interested in testing a SCIM endpoint.
+This tutorial describes how to deploy the SCIM [reference code](https://aka.ms/scimreferencecode) with [Azure App Service](../../app-service/index.yml). Then, test the code by using Postman or by integrating with the Azure Active Directory (Azure AD) Provisioning Service. The tutorial is intended for developers who want to get started with SCIM, or anyone interested in testing a [SCIM endpoint](./use-scim-to-provision-users-and-groups.md).
In this tutorial, you learn how to:
## Deploy your SCIM endpoint in Azure
-The steps here deploy the SCIM endpoint to a service by using [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) and [Azure App Service](../../app-service/index.yml). The SCIM reference code can also be run locally, hosted by an on-premises server, or deployed to another external service.
-1. Go to the [reference code](https://github.com/AzureAD/SCIMReferenceCode) from GitHub and select **Clone or download**.
+The steps here deploy the SCIM endpoint to a service by using [Visual Studio 2019](https://visualstudio.microsoft.com/downloads/) and [Visual Studio Code](https://code.visualstudio.com/) with [Azure App Service](../../app-service/index.yml). The SCIM reference code can run locally, hosted by an on-premises server, or deployed to another external service.
+
+### Get and deploy the sample app
+
+Go to the [reference code](https://github.com/AzureAD/SCIMReferenceCode) from GitHub and select **Clone or download**. Select **Open in Desktop**, or copy the link, open Visual Studio, and select **Clone or check out code** to enter the copied link and make a local copy. Save the files into a folder where the total length of the path is 260 or fewer characters.
-1. Select **Open in Desktop**, or copy the link, open Visual Studio, and select **Clone or check out code** to enter the copied link and make a local copy.
+# [Visual Studio](#tab/visual-studio)
1. In Visual Studio, make sure to sign in to the account that has access to your hosting resources.
The steps here deploy the SCIM endpoint to a service by using [Visual Studio 201
![Screenshot that shows publishing a new app service.](media/use-scim-to-build-users-and-groups-endpoints/cloud-publish-4.png)
-1. Go to the application in **Azure App Service** > **Configuration** and select **New application setting** to add the *Token__TokenIssuer* setting with the value `https://sts.windows.net/<tenant_id>/`. Replace `<tenant_id>` with your Azure AD tenant ID. If you want to test the SCIM endpoint by using [Postman](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint), add an *ASPNETCORE_ENVIRONMENT* setting with the value `Development`.
- ![Screenshot that shows the Application settings window.](media/use-scim-to-build-users-and-groups-endpoints/app-service-settings.png)
+# [Visual Studio Code](#tab/visual-studio-code)
+
+1. In Visual Studio Code, make sure to sign in to the account that has access to your hosting resources.
+
+1. In Visual Studio Code, open the folder that contains the *Microsoft.SCIM.sln* file.
+
+1. Open the Visual Studio Code integrated [terminal](https://code.visualstudio.com/docs/terminal/basics) and run the [dotnet restore](/nuget/consume-packages/install-use-packages-dotnet-cli#restore-packages) command. This command restores the packages listed in the project files.
+
+1. In the terminal, change the directory using the `cd Microsoft.SCIM.WebHostSample` command.
+
+1. To run your app locally, in the terminal, run the .NET CLI command below. The [dotnet run](/dotnet/core/tools/dotnet-run) command runs the Microsoft.SCIM.WebHostSample project using the [development environment](/aspnet/core/fundamentals/environments#set-environment-on-the-command-line).
+
+ ```dotnetcli
+ dotnet run --environment Development
+ ```
+
+1. If it isn't already installed, add the [Azure App Service for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azureappservice) extension.
+
+1. To deploy the Microsoft.SCIM.WebHostSample app to Azure App Service, [create a new App Service](/azure/app-service/tutorial-dotnetcore-sqldb-app#2create-the-app-service).
+
+1. In the Visual Studio Code terminal, run the .NET CLI command below. This command generates a deployable publish folder for the app in the bin/debug/publish directory.
+
+ ```dotnetcli
+ dotnet publish -c Debug
+ ```
+
+1. In the Visual Studio Code explorer, right-click the generated **publish** folder and select **Deploy to Web App**.
+1. A new workflow will open in the command palette at the top of the screen. Select the **Subscription** you would like to publish your app to.
+1. Select the **App Service** web app you created earlier.
+1. If Visual Studio Code prompts you to confirm, select **Deploy**. The deployment process may take a few moments. When the process completes, a notification should appear in the bottom right corner prompting you to browse to the deployed app.
+++
+### Configure the App Service
+
+Go to the application in **Azure App Service** > **Configuration** and select **New application setting** to add the *Token__TokenIssuer* setting with the value `https://sts.windows.net/<tenant_id>/`. Replace `<tenant_id>` with your Azure AD tenant ID. If you want to test the SCIM endpoint by using [Postman](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint), add an *ASPNETCORE_ENVIRONMENT* setting with the value `Development`.
+
+![Screenshot that shows the Application settings window.](media/use-scim-to-build-users-and-groups-endpoints/app-service-settings.png)
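
If you want to double-check the issuer value before you save the *Token__TokenIssuer* setting, one option is to read it from your tenant's v1.0 OpenID Connect metadata document. The following Python sketch is illustrative only and isn't part of the reference code; it assumes the `requests` package is installed and uses `<tenant_id>` as a placeholder.

```python
# Minimal sketch: confirm the issuer value for your tenant before setting
# Token__TokenIssuer. Assumes the `requests` package is installed and that
# <tenant_id> is replaced with your Azure AD tenant ID.
import requests

tenant_id = "<tenant_id>"  # replace with your Azure AD tenant ID
metadata_url = f"https://login.microsoftonline.com/{tenant_id}/.well-known/openid-configuration"

metadata = requests.get(metadata_url, timeout=30)
metadata.raise_for_status()

# The v1.0 metadata document reports the issuer in the sts.windows.net format
# that the sample's Token__TokenIssuer setting expects.
print(metadata.json()["issuer"])  # e.g. https://sts.windows.net/<tenant_id>/
```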
- When you test your endpoint with an enterprise application in the [Azure portal](use-scim-to-provision-users-and-groups.md#integrate-your-scim-endpoint-with-the-azure-ad-scim-client), you have two options. You can keep the environment in `Development` and provide the testing token from the `/scim/token` endpoint, or you can change the environment to `Production` and leave the token field empty.
+When you test your endpoint with an enterprise application in the [Azure portal](use-scim-to-provision-users-and-groups.md#integrate-your-scim-endpoint-with-the-azure-ad-provisioning-service), you have two options. You can keep the environment in `Development` and provide the testing token from the `/scim/token` endpoint, or you can change the environment to `Production` and leave the token field empty.
That's it! Your SCIM endpoint is now published, and you can use the Azure App Service URL to test the SCIM endpoint.
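
Before connecting the endpoint to the provisioning service, you may want a quick sanity check against the deployed app. The sketch below is a hypothetical smoke test rather than part of the tutorial: it assumes the app runs with `ASPNETCORE_ENVIRONMENT` set to `Development`, that `/scim/token` returns a JSON body with a `token` property, and that users are served at `/scim/Users`; adjust the URL and paths to match your deployment.

```python
# Hypothetical smoke test (not part of the reference code). Assumes the app was
# deployed with ASPNETCORE_ENVIRONMENT=Development, that the test token comes
# back as JSON with a "token" property from /scim/token, and that users are
# served at /scim/Users -- adjust the paths if your deployment differs.
import requests

app_url = "https://<your-app-name>.azurewebsites.net"  # placeholder App Service URL

# 1. Request a testing token (only available in the Development environment).
token_response = requests.get(f"{app_url}/scim/token", timeout=30)
token_response.raise_for_status()
token = token_response.json()["token"]

# 2. List users with the bearer token; an empty ListResponse means the endpoint responds.
users_response = requests.get(
    f"{app_url}/scim/Users",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
users_response.raise_for_status()
print(users_response.json())
```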
active-directory Use Scim To Provision Users And Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/use-scim-to-provision-users-and-groups.md
Title: Tutorial - Develop a SCIM endpoint for user provisioning to apps from Azure Active Directory
description: System for Cross-domain Identity Management (SCIM) standardizes automatic user provisioning. In this tutorial, you learn to develop a SCIM endpoint, integrate your SCIM API with Azure Active Directory, and start automating provisioning users and groups into your cloud applications. -+ Previously updated : 05/25/2022 Last updated : 08/17/2022 # Tutorial: Develop and plan provisioning for a SCIM endpoint in Azure Active Directory
-As an application developer, you can use the System for Cross-Domain Identity Management (SCIM) user management API to enable automatic provisioning of users and groups between your application and Azure AD. This article describes how to build a SCIM endpoint and integrate with the Azure AD provisioning service. The SCIM specification provides a common user schema for provisioning. When used in conjunction with federation standards like SAML or OpenID Connect, SCIM gives administrators an end-to-end, standards-based solution for access management.
+As an application developer, you can use the System for Cross-Domain Identity Management (SCIM) user management API to enable automatic provisioning of users and groups between your application and Azure Active Directory (Azure AD). This article describes how to build a SCIM endpoint and integrate with the Azure AD provisioning service. The SCIM specification provides a common user schema for provisioning. When used with federation standards like SAML or OpenID Connect, SCIM gives administrators an end-to-end, standards-based solution for access management.
![Provisioning from Azure AD to an app with SCIM](media/use-scim-to-provision-users-and-groups/scim-provisioning-overview.png)
-SCIM is a standardized definition of two endpoints: a `/Users` endpoint and a `/Groups` endpoint. It uses common REST verbs to create, update, and delete objects, and a pre-defined schema for common attributes like group name, username, first name, last name and email. Apps that offer a SCIM 2.0 REST API can reduce or eliminate the pain of working with a proprietary user management API. For example, any compliant SCIM client knows how to make an HTTP POST of a JSON object to the `/Users` endpoint to create a new user entry. Instead of needing a slightly different API for the same basic actions, apps that conform to the SCIM standard can instantly take advantage of pre-existing clients, tools, and code.
+SCIM 2.0 is a standardized definition of two endpoints: a `/Users` endpoint and a `/Groups` endpoint. It uses common REST API verbs to create, update, and delete objects, together with a pre-defined schema for common attributes like group name, username, first name, last name, and email.
-The standard user object schema and rest APIs for management defined in SCIM 2.0 (RFC [7642](https://tools.ietf.org/html/rfc7642), [7643](https://tools.ietf.org/html/rfc7643), [7644](https://tools.ietf.org/html/rfc7644)) allow identity providers and apps to more easily integrate with each other. Application developers that build a SCIM endpoint can integrate with any SCIM-compliant client without having to do custom work.
+Apps that offer a SCIM 2.0 REST API can reduce or eliminate the pain of working with a proprietary user management API. For example, any compliant SCIM client knows how to make an HTTP POST of a JSON object to the `/Users` endpoint to create a new user entry. Instead of needing a slightly different API for the same basic actions, apps that conform to the SCIM standard can instantly take advantage of pre-existing clients, tools, and code.
-To automate provisioning to an application will require building and integrating a SCIM endpoint with the Azure AD SCIM client. Use the following steps to start provisioning users and groups into your application.
-
-1. Design your user and group schema
+The standard user object schema and rest APIs for management defined in SCIM 2.0 (RFC [7642](https://tools.ietf.org/html/rfc7642), [7643](https://tools.ietf.org/html/rfc7643), [7644](https://tools.ietf.org/html/rfc7644)) allow identity providers and apps to more easily integrate with each other. Application developers that build a SCIM endpoint can integrate with any SCIM-compliant client without having to do custom work.
- Identify the application's objects and attributes to determine how they map to the user and group schema supported by the Azure AD SCIM implementation.
+To automate provisioning to an application, you build and integrate a SCIM endpoint that the Azure AD Provisioning Service can access. Use the following steps to start provisioning users and groups into your application.
-1. Understand the Azure AD SCIM implementation
- Understand how the Azure AD SCIM client is implemented to model your SCIM protocol request handling and responses.
+1. [Design your user and group schema](#design-your-user-and-group-schema) - Identify the application's objects and attributes to determine how they map to the user and group schema supported by the Azure AD SCIM implementation.
-1. Build a SCIM endpoint
+1. [Understand the Azure AD SCIM implementation](#understand-the-azure-ad-scim-implementation) - Understand how the Azure AD Provisioning Service is implemented to model your SCIM protocol request handling and responses.
- An endpoint must be SCIM 2.0-compatible to integrate with the Azure AD provisioning service. As an option, use Microsoft Common Language Infrastructure (CLI) libraries and code samples to build your endpoint. These samples are for reference and testing only; we recommend against using them as dependencies in your production app.
+1. [Build a SCIM endpoint](#build-a-scim-endpoint) - An endpoint must be SCIM 2.0-compatible to integrate with the Azure AD provisioning service. As an option, use Microsoft Common Language Infrastructure (CLI) libraries and code samples to build your endpoint. These samples are for reference and testing only; we recommend against using them as dependencies in your production app.
-1. Integrate your SCIM endpoint with the Azure AD SCIM client
- If your organization uses a third-party application to implement a profile of SCIM 2.0 that Azure AD supports, you can quickly automate both provisioning and deprovisioning of users and groups.
+1. [Integrate your SCIM endpoint](#integrate-your-scim-endpoint-with-the-azure-ad-provisioning-service) with the Azure AD Provisioning Service. If your organization uses a third-party application to implement a profile of SCIM 2.0 that Azure AD supports, you can quickly automate both provisioning and deprovisioning of users and groups.
-1. Publish your application to the Azure AD application gallery
- Make it easy for customers to discover your application and easily configure provisioning.
+1. [Optional] [Publish your application to the Azure AD application gallery](#publish-your-application-to-the-azure-ad-application-gallery) - Make it easy for customers to discover your application and easily configure provisioning.
-![Steps for integrating a SCIM endpoint with Azure AD](media/use-scim-to-provision-users-and-groups/process.png)
+![Diagram that shows the required steps for integrating a SCIM endpoint with Azure AD.](media/use-scim-to-provision-users-and-groups/process.png)
## Design your user and group schema
For example, if your application requires both a user's email and user's manager
To design your schema, follow these steps:
-1. List the attributes your application requires, then categorize as attributes needed for authentication (e.g. loginName and email), attributes needed to manage the user lifecycle (e.g. status / active), and all other attributes needed for the application to work (e.g. manager, tag).
+1. List the attributes your application requires, then categorize them as attributes needed for authentication (for example, loginName and email), attributes needed to manage the user lifecycle (for example, status / active), and all other attributes needed for the application to work (for example, manager, tag).
1. Check if the attributes are already defined in the **core** user schema or **enterprise** user schema. If not, you must define an extension to the user schema that covers the missing attributes. See example below for an extension to the user to allow provisioning a user `tag`.
-1. Map SCIM attributes to the user attributes in Azure AD. If one of the attributes you have defined in your SCIM endpoint does not have a clear counterpart on the Azure AD user schema, guide the tenant administrator to extend their schema or use an extension attribute as shown below for the `tags` property.
+1. Map SCIM attributes to the user attributes in Azure AD. If one of the attributes you've defined in your SCIM endpoint doesn't have a clear counterpart on the Azure AD user schema, guide the tenant administrator to extend their schema, or use an extension attribute as shown below for the `tags` property.
+
+The following table lists an example of required attributes:
|Required app attribute|Mapped SCIM attribute|Mapped Azure AD attribute|
|--|--|--|
To design your schema, follow these steps:
|tag|urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User:tag|extensionAttribute1|
|status|active|isSoftDeleted (computed value not stored on user)|
-**Example list of required attributes**
+The following JSON payload shows an example SCIM schema:
```json {
To design your schema, follow these steps:
} } ```
-**Example schema defined by a JSON payload**
+
> [!NOTE]
> In addition to the attributes required for the application, the JSON representation also includes the required `id`, `externalId`, and `meta` attributes.
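
As a rough sketch of the shape such a payload takes, a user resource that carries the core schema plus the custom `tag` extension described above might look like the following Python dictionary; all values are illustrative placeholders, not data from the published article.

```python
# Illustrative sketch only: the approximate shape of a user resource that
# carries the SCIM core schema plus the custom "tag" extension mapped above.
# Every value here is a placeholder.
import json

user_resource = {
    "schemas": [
        "urn:ietf:params:scim:schemas:core:2.0:User",
        "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User",
    ],
    "id": "48af03ac28ad4fb88478",                          # assigned by the SCIM service
    "externalId": "58342554-38d6-4ec8-948c-50044d0a33fd",  # supplied by Azure AD
    "userName": "user@contoso.com",
    "active": True,
    "urn:ietf:params:scim:schemas:extension:CustomExtensionName:2.0:User": {
        "tag": "contractor"
    },
    "meta": {"resourceType": "User"},
}

print(json.dumps(user_resource, indent=2))
```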
To design your schema, follow these steps:
It helps to categorize between `/User` and `/Group` to map any default user attributes in Azure AD to the SCIM RFC, see [how customize attributes are mapped between Azure AD and your SCIM endpoint](customize-application-attributes.md).
-| Azure Active Directory user | "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User" |
+The following table lists an example of user attributes:
+
+| Azure AD user | urn:ietf:params:scim:schemas:extension:enterprise:2.0:User |
| | |
| IsSoftDeleted |active |
-|department|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department|
+|department| `urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:department`|
| displayName |displayName |
-|employeeId|urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber|
+|employeeId|`urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:employeeNumber`|
| Facsimile-TelephoneNumber |phoneNumbers[type eq "fax"].value |
| givenName |name.givenName |
| jobTitle |title |
| mail |emails[type eq "work"].value |
| mailNickname |externalId |
-| manager |urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager |
+| manager |`urn:ietf:params:scim:schemas:extension:enterprise:2.0:User:manager` |
| mobile |phoneNumbers[type eq "mobile"].value |
| postalCode |addresses[type eq "work"].postalCode |
| proxy-Addresses |emails[type eq "other"].Value |
It helps to categorize between `/User` and `/Group` to map any default user attr
| telephone-Number |phoneNumbers[type eq "work"].value |
| user-PrincipalName |userName |
-**Example list of user and group attributes**
+The following table lists an example of group attributes:
-| Azure Active Directory group | urn:ietf:params:scim:schemas:core:2.0:Group |
+| Azure AD group | urn:ietf:params:scim:schemas:core:2.0:Group |
| | |
| displayName |displayName |
| members |members |
| objectId |externalId |
-**Example list of group attributes**
> [!NOTE]
> You are not required to support both users and groups, or all the attributes shown here, it's only a reference on how attributes in Azure AD are often mapped to properties in the SCIM protocol.
-There are several endpoints defined in the SCIM RFC. You can start with the `/User` endpoint and then expand from there.
+There are several endpoints defined in the SCIM RFC. You can start with the `/User` endpoint and then expand from there. The following table lists some of the SCIM endpoints:
|Endpoint|Description|
|--|--|
|/User|Perform CRUD operations on a user object.|
|/Group|Perform CRUD operations on a group object.|
|/Schemas|The set of attributes supported by each client and service provider can vary. One service provider might include `name`, `title`, and `emails`, while another service provider uses `name`, `title`, and `phoneNumbers`. The schemas endpoint allows for discovery of the attributes supported.|
-|/Bulk|Bulk operations allow you to perform operations on a large collection of resource objects in a single operation (e.g. update memberships for a large group).|
-|/ServiceProviderConfig|Provides details about the features of the SCIM standard that are supported, for example the resources that are supported and the authentication method.|
+|/Bulk|Bulk operations allow you to perform operations on a large collection of resource objects in a single operation (for example, update memberships for a large group).|
+|/ServiceProviderConfig|Provides details about the features of the SCIM standard that are supported, for example, the resources that are supported and the authentication method.|
|/ResourceTypes|Specifies metadata about each resource.|
-**Example list of endpoints**
> [!NOTE]
> Use the `/Schemas` endpoint to support custom attributes or if your schema changes frequently as it enables a client to retrieve the most up-to-date schema automatically. Use the `/Bulk` endpoint to support groups.

## Understand the Azure AD SCIM implementation
-To support a SCIM 2.0 user management API, this section describes how the Azure AD SCIM client is implemented and shows how to model your SCIM protocol request handling and responses.
+To support a SCIM 2.0 user management API, this section describes how the Azure AD Provisioning Service is implemented and shows how to model your SCIM protocol request handling and responses.
> [!IMPORTANT]
> The behavior of the Azure AD SCIM implementation was last updated on December 18, 2018. For information on what changed, see [SCIM 2.0 protocol compliance of the Azure AD User Provisioning service](application-provisioning-config-problem-scim-compatibility.md).
To support a SCIM 2.0 user management API, this section describes how the Azure
Within the [SCIM 2.0 protocol specification](http://www.simplecloud.info/#Specification), your application must support these requirements:

|Requirement|Reference notes (SCIM protocol)|
-|-|-|
-|Create users, and optionally also groups|[section 3.3](https://tools.ietf.org/html/rfc7644#section-3.3)|
-|Modify users or groups with PATCH requests|[section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Supporting ensures that groups and users are provisioned in a performant manner.|
-|Retrieve a known resource for a user or group created earlier|[section 3.4.1](https://tools.ietf.org/html/rfc7644#section-3.4.1)|
-|Query users or groups|[section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved by their `id` and queried by their `username` and `externalId`, and groups are queried by `displayName`.|
-|The filter [excludedAttributes=members](#get-group) when querying the group resource|section 3.4.2.5|
+|||
+|Create users, and optionally also groups|[Section 3.3](https://tools.ietf.org/html/rfc7644#section-3.3)|
+|Modify users or groups with PATCH requests|[Section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Supporting ensures that groups and users are provisioned in a performant manner.|
+|Retrieve a known resource for a user or group created earlier|[Section 3.4.1](https://tools.ietf.org/html/rfc7644#section-3.4.1)|
+|Query users or groups|[Section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2). By default, users are retrieved by their `id` and queried by their `username` and `externalId`, and groups are queried by `displayName`.|
+|The filter [excludedAttributes=members](#get-group) when querying the group resource|Section [3.4.2.2](https://www.rfc-editor.org/rfc/rfc7644#section-3.4.2.2)|
+|Support listing users and paginating|[Section 3.4.2.4](https://datatracker.ietf.org/doc/html/rfc7644#section-3.4.2.4).|
+|Soft-deleting a user `active=false` and restoring the user `active=true`|The user object should be returned in a request whether or not the user is active. The only time the user shouldn't be returned is when it's hard deleted from the application.|
+|Support the /Schemas endpoint|[Section 7](https://tools.ietf.org/html/rfc7643#page-30) The schema discovery endpoint will be used to discover more attributes.|
|Accept a single bearer token for authentication and authorization of Azure AD to your application.||
-|Soft-deleting a user `active=false` and restoring the user `active=true`|The user object should be returned in a request whether or not the user is active. The only time the user should not be returned is when it is hard deleted from the application.|
-|Support the /Schemas endpoint|[section 7](https://tools.ietf.org/html/rfc7643#page-30) The schema discovery endpoint will be used to discover additional attributes.|
-|Support listing users and paginating|[section 3.4.2.4](https://datatracker.ietf.org/doc/html/rfc7644#section-3.4.2.4).|
Use the general guidelines when implementing a SCIM endpoint to ensure compatibility with Azure AD:
-##### General:
-* `id` is a required property for all resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero members.
-* Values sent should be stored in the same format as what they were sent in. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data should not happen between data being sent by Azure AD and data being stored in the SCIM application. (e.g. A phone number sent as 55555555555 should not be saved/returned as +5 (555) 555-5555)
+### General:
+
+* `id` is a required property for all resources. Every response that returns a resource should ensure each resource has this property, except for `ListResponse` with zero elements.
+* Values sent should be stored in the same format as what they were sent in. Invalid values should be rejected with a descriptive, actionable error message. Transformations of data shouldn't happen between data being sent by Azure AD and data being stored in the SCIM application. (For example, a phone number sent as 55555555555 shouldn't be saved/returned as +5 (555) 555-5555.)
* It isn't necessary to include the entire resource in the **PATCH** response.
* Don't require a case-sensitive match on structural elements in SCIM, in particular **PATCH** `op` operation values, as defined in [section 3.5.2](https://tools.ietf.org/html/rfc7644#section-3.5.2). Azure AD emits the values of `op` as **Add**, **Replace**, and **Remove** (see the sketch after this list).
* Microsoft Azure AD makes requests to fetch a random user and group to ensure that the endpoint and the credentials are valid. It's also done as a part of the **Test Connection** flow in the [Azure portal](https://portal.azure.com).
* Support HTTPS on your SCIM endpoint.
-* Custom complex and multivalued attributes are supported but Azure AD does not have many complex data structures to pull data from in these cases. Simple paired name/value type complex attributes can be mapped to easily, but flowing data to complex attributes with three or more subattributes are not well supported at this time.
-* The "type" sub-attribute values of multivalued complex attributes must be unique. For example, there cannot be two different email addresses with the "work" sub-type.
+* Custom complex and multivalued attributes are supported but Azure AD doesn't have many complex data structures to pull data from in these cases. Simple paired name/value type complex attributes can be mapped to easily, but flowing data to complex attributes with three or more subattributes isn't well supported at this time.
+* The "type" subattribute values of multivalued complex attributes must be unique. For example, there can't be two different email addresses with the "work" subtype.
+
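
One way to satisfy the case-insensitivity guideline for **PATCH** `op` values is to normalize the operation name before dispatching it. The following Python sketch is a simplified illustration under that assumption and isn't taken from the reference code; real SCIM path handling is more involved.

```python
# Simplified illustration of case-insensitive handling of PATCH "op" values.
# Azure AD emits "Add", "Replace", and "Remove"; RFC 7644 defines them in
# lowercase, so compare without regard to case. Not taken from the reference code.
def apply_patch_operation(resource: dict, operation: dict) -> dict:
    op = operation.get("op", "").lower()   # "Add" -> "add", "Replace" -> "replace"
    path = operation.get("path")
    value = operation.get("value")

    if op in ("add", "replace"):
        resource[path] = value
    elif op == "remove":
        resource.pop(path, None)
    else:
        raise ValueError(f"Unsupported PATCH op: {operation.get('op')}")
    return resource


# Example: a soft-delete style operation that sets "active" to False.
user = {"id": "5171a35d82074e068ce2", "active": True}
apply_patch_operation(user, {"op": "Replace", "path": "active", "value": False})
print(user)  # {'id': '5171a35d82074e068ce2', 'active': False}
```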
+### Retrieving Resources:
-##### Retrieving Resources:
* Response to a query/filter request should always be a `ListResponse`.
* Microsoft Azure AD only uses the following operators: `eq`, `and`
* The attribute that the resources can be queried on should be set as a matching attribute on the application in the [Azure portal](https://portal.azure.com), see [Customizing User Provisioning Attribute Mappings](customize-application-attributes.md).
-##### /Users:
-* The entitlements attribute is not supported.
-* Any attributes that are considered for user uniqueness must be usable as part of a filtered query. (e.g. if user uniqueness is evaluated for both userName and emails[type eq "work"], a GET to /Users with a filter must allow for both _userName eq "user@contoso.com"_ and _emails[type eq "work"].value eq "user@contoso.com"_ queries.
+### /Users:
+
+* The entitlements attribute isn't supported.
+* Any attributes that are considered for user uniqueness must be usable as part of a filtered query. (For example, if user uniqueness is evaluated for both userName and emails[type eq "work"], a GET to /Users with a filter must allow for both _userName eq "user@contoso.com"_ and _emails[type eq "work"].value eq "user@contoso.com"_ queries.)
+
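
To make the filter requirement concrete, the following sketch shows the two filtered `GET /Users` queries described above, issued by a hypothetical client; the base URL and bearer token are placeholders.

```python
# Illustrative only: the two filtered queries described above, issued against a
# hypothetical SCIM endpoint. The base URL and bearer token are placeholders.
import requests

BASE_URL = "https://scim.example.com/scim"     # placeholder SCIM endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder bearer token

def find_user(filter_expression: str) -> dict:
    """Issue a filtered GET to /Users and return the ListResponse body."""
    response = requests.get(
        f"{BASE_URL}/Users",
        params={"filter": filter_expression},
        headers=HEADERS,
        timeout=30,
    )
    response.raise_for_status()
    return response.json()

# Both forms must be supported when uniqueness is evaluated on userName and emails.
by_username = find_user('userName eq "user@contoso.com"')
by_email = find_user('emails[type eq "work"].value eq "user@contoso.com"')
print(by_username.get("totalResults"), by_email.get("totalResults"))
```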
+### /Groups:
-##### /Groups:
* Groups are optional, but only supported if the SCIM implementation supports **PATCH** requests.
-* Groups must have uniqueness on the 'displayName' value for the purpose of matching between Azure Active Directory and the SCIM application. This is not a requirement of the SCIM protocol, but is a requirement for integrating a SCIM service with Azure Active Directory.
+* Groups must have uniqueness on the 'displayName' value for matching between Azure AD and the SCIM application. The uniqueness isn't a requirement of the SCIM protocol, but is a requirement for integrating a SCIM endpoint with Azure AD.
-##### /Schemas (Schema discovery):
+### /Schemas (Schema discovery):
* [Sample request/response](#schema-discovery)
-* Schema discovery is not currently supported on the custom non-gallery SCIM application, but it is being used on certain gallery applications. Going forward, schema discovery will be used as the sole method to add additional attributes to the schema of an existing gallery SCIM application.
-* If a value is not present, do not send null values.
-* Property values should be camel cased (e.g. readWrite).
+* Schema discovery isn't currently supported on the custom non-gallery SCIM application, but it's being used on certain gallery applications. Going forward, schema discovery will be used as the sole method to add more attributes to the schema of an existing gallery SCIM application.
+* If a value isn't present, don't send null values.
+* Property values should be camel cased (for example, readWrite).
* Must return a list response.
-* The /schemas request will be made by the Azure AD SCIM client every time someone saves the provisioning configuration in the Azure portal or every time a user lands on the edit provisioning page in the Azure portal. Any additional attributes discovered will be surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to additional target attributes being added. It will not result in attributes being removed.
+* The /schemas request will be made by the Azure AD Provisioning Service every time someone saves the provisioning configuration in the Azure portal or every time a user lands on the edit provisioning page in the Azure portal. Other attributes discovered will be surfaced to customers in the attribute mappings under the target attribute list. Schema discovery only leads to more target attributes being added. It will not result in attributes being removed.
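
For a concrete view of schema discovery, the sketch below retrieves the `/Schemas` list response from a placeholder endpoint and prints the attribute names each schema advertises; it's a hypothetical client call, not the Azure AD implementation itself.

```python
# Hypothetical client-side view of schema discovery against a placeholder
# SCIM endpoint: fetch /Schemas and list the advertised attribute names.
import requests

BASE_URL = "https://scim.example.com/scim"     # placeholder SCIM endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder bearer token

response = requests.get(f"{BASE_URL}/Schemas", headers=HEADERS, timeout=30)
response.raise_for_status()

# Per the guidance above, the response must be a ListResponse; each resource
# describes one schema (for example, urn:ietf:params:scim:schemas:core:2.0:User).
for schema in response.json().get("Resources", []):
    attribute_names = [attribute["name"] for attribute in schema.get("attributes", [])]
    print(schema.get("id"), attribute_names)
```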
-
### User provisioning and deprovisioning
-The following illustration shows the messages that Azure AD sends to a SCIM service to manage the lifecycle of a user in your application's identity store.
+The following diagram shows the messages that Azure AD sends to a SCIM endpoint to manage the lifecycle of a user in your application's identity store.
-![Shows the user provisioning and deprovisioning sequence](media/use-scim-to-provision-users-and-groups/scim-figure-4.png)<br/>
-*User provisioning and deprovisioning sequence*
+[![Diagram that shows the user deprovisioning sequence.](media/use-scim-to-provision-users-and-groups/scim-figure-4.png)](media/use-scim-to-provision-users-and-groups/scim-figure-4.png#lightbox)
### Group provisioning and deprovisioning
-Group provisioning and deprovisioning are optional. When implemented and enabled, the following illustration shows the messages that Azure AD sends to a SCIM service to manage the lifecycle of a group in your application's identity store. Those messages differ from the messages about users in two ways:
+Group provisioning and deprovisioning are optional. When implemented and enabled, the following illustration shows the messages that Azure AD sends to a SCIM endpoint to manage the lifecycle of a group in your application's identity store. Those messages differ from the messages about users in two ways:
* Requests to retrieve groups specify that the members attribute is to be excluded from any resource provided in response to the request.
* Requests to determine whether a reference attribute has a certain value are requests about the members attribute.
-![Shows the group provisioning and deprovisioning sequence](media/use-scim-to-provision-users-and-groups/scim-figure-5.png)<br/>
-*Group provisioning and deprovisioning sequence*
+The following diagram shows the group deprovisioning sequence:
+
+[![Diagram that shows the group deprovisioning sequence.](media/use-scim-to-provision-users-and-groups/scim-figure-5.png)](media/use-scim-to-provision-users-and-groups/scim-figure-5.png#lightbox)
### SCIM protocol requests and responses
-This section provides example SCIM requests emitted by the Azure AD SCIM client and example expected responses. For best results, you should code your app to handle these requests in this format and emit the expected responses.
+
+This article provides example SCIM requests emitted by the Azure Active Directory (Azure AD) Provisioning Service and example expected responses. For best results, you should code your app to handle these requests in this format and emit the expected responses.
> [!IMPORTANT]
> To understand how and when the Azure AD user provisioning service emits the operations described below, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).

[User Operations](#user-operations)
- - [Create User](#create-user) ([Request](#request) / [Response](#response))
- - [Get User](#get-user) ([Request](#request-1) / [Response](#response-1))
- - [Get User by query](#get-user-by-query) ([Request](#request-2) / [Response](#response-2))
- - [Get User by query - Zero results](#get-user-by-queryzero-results) ([Request](#request-3) / [Response](#response-3))
- - [Update User [Multi-valued properties]](#update-user-multi-valued-properties) ([Request](#request-4) / [Response](#response-4))
- - [Update User [Single-valued properties]](#update-user-single-valued-properties) ([Request](#request-5) / [Response](#response-5))
- - [Disable User](#disable-user) ([Request](#request-14) / [Response](#response-14))
- - [Delete User](#delete-user) ([Request](#request-6) / [Response](#response-6))
+
+- [Create User](#create-user) ([Request](#request) / [Response](#response))
+- [Get User](#get-user) ([Request](#request-1) / [Response](#response-1))
+- [Get User by query](#get-user-by-query) ([Request](#request-2) / [Response](#response-2))
+- [Get User by query - Zero results](#get-user-by-queryzero-results) ([Request](#request-3) / [Response](#response-3))
+- [Update User [Multi-valued properties]](#update-user-multi-valued-properties) ([Request](#request-4) / [Response](#response-4))
+- [Update User [Single-valued properties]](#update-user-single-valued-properties) ([Request](#request-5) / [Response](#response-5))
+- [Disable User](#disable-user) ([Request](#request-14) / [Response](#response-14))
+- [Delete User](#delete-user) ([Request](#request-6) / [Response](#response-6))
[Group Operations](#group-operations)
- - [Create Group](#create-group) ([Request](#request-7) / [Response](#response-7))
- - [Get Group](#get-group) ([Request](#request-8) / [Response](#response-8))
- - [Get Group by displayName](#get-group-by-displayname) ([Request](#request-9) / [Response](#response-9))
- - [Update Group [Non-member attributes]](#update-group-non-member-attributes) ([Request](#request-10) / [Response](#response-10))
- - [Update Group [Add Members]](#update-group-add-members) ([Request](#request-11) / [Response](#response-11))
- - [Update Group [Remove Members]](#update-group-remove-members) ([Request](#request-12) / [Response](#response-12))
- - [Delete Group](#delete-group) ([Request](#request-13) / [Response](#response-13))
+
+- [Create Group](#create-group) ([Request](#request-7) / [Response](#response-7))
+- [Get Group](#get-group) ([Request](#request-8) / [Response](#response-8))
+- [Get Group by displayName](#get-group-by-displayname) ([Request](#request-9) / [Response](#response-9))
+- [Update Group [Non-member attributes]](#update-group-non-member-attributes) ([Request](#request-10) / [Response](#response-10))
+- [Update Group [Add Members]](#update-group-add-members) ([Request](#request-11) / [Response](#response-11))
+- [Update Group [Remove Members]](#update-group-remove-members) ([Request](#request-12) / [Response](#response-12))
+- [Delete Group](#delete-group) ([Request](#request-13) / [Response](#response-13))
[Schema discovery](#schema-discovery)
- - [Discover schema](#discover-schema) ([Request](#request-15) / [Response](#response-15))
+
+- [Discover schema](#discover-schema) ([Request](#request-15) / [Response](#response-15))
### User Operations
This section provides example SCIM requests emitted by the Azure AD SCIM client
```

###### Request

*GET /Users/5171a35d82074e068ce2*
-###### Response (User not found. Note that the detail is not required, only status.)
+###### Response (User not found. The detail isn't required, only status.)
```json
{
This section provides example SCIM requests emitted by the Azure AD SCIM client
##### <a name="request-14"></a>Request *PATCH /Users/5171a35d82074e068ce2 HTTP/1.1*+ ```json { "Operations": [
This section provides example SCIM requests emitted by the Azure AD SCIM client
  }
}
```

#### Delete User
##### <a name="request-6"></a>Request
This section provides example SCIM requests emitted by the Azure AD SCIM client
*GET /Groups/40734ae655284ad3abcc?excludedAttributes=members HTTP/1.1*
##### <a name="response-8"></a>Response

*HTTP/1.1 200 OK*

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:Group"],
This section provides example SCIM requests emitted by the Azure AD SCIM client
#### Get Group by displayName
##### <a name="request-9"></a>Request

*GET /Groups?excludedAttributes=members&filter=displayName eq "displayName" HTTP/1.1*
##### <a name="response-9"></a>Response
*HTTP/1.1 200 OK*

```json
{
  "schemas": ["urn:ietf:params:scim:api:messages:2.0:ListResponse"],
This section provides example SCIM requests emitted by the Azure AD SCIM client
##### <a name="request-10"></a>Request *PATCH /Groups/fa2ce26709934589afc5 HTTP/1.1*+ ```json { "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
This section provides example SCIM requests emitted by the Azure AD SCIM client
##### <a name="request-11"></a>Request *PATCH /Groups/a99962b9f99d4c4fac67 HTTP/1.1*+ ```json { "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
This section provides example SCIM requests emitted by the Azure AD SCIM client
*HTTP/1.1 204 No Content*
### Schema discovery

#### Discover schema
##### <a name="request-15"></a>Request

*GET /Schemas*

##### <a name="response-15"></a>Response

*HTTP/1.1 200 OK*

```json
{
  "schemas": [
organization.",
}
```

### Security requirements

**TLS Protocol Versions**
The only acceptable TLS protocol versions are TLS 1.2 and TLS 1.3. No other versions of TLS are permitted. No version of SSL is permitted.

- RSA keys must be at least 2,048 bits.
- ECC keys must be at least 256 bits, generated using an approved elliptic curve
All services must use X.509 certificates generated using cryptographic keys of s
**Cipher Suites**
-All services must be configured to use the following cipher suites, in the exact order specified below. Note that if you only have an RSA certificate, installed the ECDSA cipher suites do not have any effect. </br>
+All services must be configured to use the following cipher suites, in the exact order specified below. If you only have an RSA certificate installed, the ECDSA cipher suites don't have any effect.
TLS 1.2 Cipher Suites minimum bar:
TLS 1.2 Cipher Suites minimum bar:
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

### IP Ranges
-The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. Note that you will need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list).
+
+The Azure AD provisioning service currently operates under the IP Ranges for AzureActiveDirectory as listed [here](https://www.microsoft.com/download/details.aspx?id=56519&WT.mc_id=rss_alldownloads_all). You can add the IP ranges listed under the AzureActiveDirectory tag to allow traffic from the Azure AD provisioning service into your application. You'll need to review the IP range list carefully for computed addresses. An address such as '40.126.25.32' could be represented in the IP range list as '40.126.0.0/18'. You can also programmatically retrieve the IP range list using the following [API](/rest/api/virtualnetwork/servicetags/list).
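To make the computed-address point concrete, here's a minimal C# sketch (a hypothetical helper, not part of any Azure SDK) that checks whether an individual IPv4 address such as 40.126.25.32 is covered by a CIDR prefix such as 40.126.0.0/18 from the downloaded range list:

```csharp
using System;
using System.Net;

// The address from the example above is covered by the computed /18 range.
Console.WriteLine(IsInCidrRange("40.126.25.32", "40.126.0.0/18")); // True

// Hypothetical IPv4-only helper: true when 'address' falls inside the CIDR prefix.
static bool IsInCidrRange(string address, string cidr)
{
    string[] parts = cidr.Split('/');
    uint ip = ToUInt32(IPAddress.Parse(address));
    uint network = ToUInt32(IPAddress.Parse(parts[0]));
    int prefixLength = int.Parse(parts[1]);
    uint mask = prefixLength == 0 ? 0u : uint.MaxValue << (32 - prefixLength);
    return (ip & mask) == (network & mask);
}

// Converts an IPv4 address to a big-endian unsigned integer for masking.
static uint ToUInt32(IPAddress address)
{
    byte[] bytes = address.GetAddressBytes();
    return ((uint)bytes[0] << 24) | ((uint)bytes[1] << 16) | ((uint)bytes[2] << 8) | bytes[3];
}
```

A production allow-list would typically read the AzureActiveDirectory prefixes from the downloadable JSON file or the service tags API rather than hard-coding ranges.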
Azure AD also supports an agent-based solution to provide connectivity to applications in private networks (on-premises, hosted in Azure, hosted in AWS, etc.). Customers can deploy a lightweight agent, which provides connectivity to Azure AD without opening any inbound ports, on a server in their private network. Learn more [here](./on-premises-scim-provisioning.md).

## Build a SCIM endpoint
-Now that you have designed your schema and understood the Azure AD SCIM implementation, you can get started developing your SCIM endpoint. Rather than starting from scratch and building the implementation completely on your own, you can rely on a number of open source SCIM libraries published by the SCIM community.
+Now that you've designed your schema and understood the Azure AD SCIM implementation, you can get started developing your SCIM endpoint. Rather than starting from scratch and building the implementation completely on your own, you can rely on many open source SCIM libraries published by the SCIM community.
For guidance on how to build a SCIM endpoint including examples, see [Develop a sample SCIM endpoint](use-scim-to-build-users-and-groups-endpoints.md).
-The open source .NET Core [reference code example](https://aka.ms/SCIMReferenceCode) published by the Azure AD provisioning team is one such resource that can jump start your development. Once you have built your SCIM endpoint, you will want to test it out. You can use the collection of [postman tests](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint) provided as part of the reference code or run through the sample requests / responses provided [above](#user-operations).
+The open source .NET Core [reference code example](https://aka.ms/SCIMReferenceCode) published by the Azure AD provisioning team is one such resource that can jump start your development. Once you have built your SCIM endpoint, you'll want to test it out. You can use the collection of [Postman tests](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint) provided as part of the reference code or run through the sample requests / responses provided [above](#user-operations).
> [!Note]
> The reference code is intended to help you get started building your SCIM endpoint and is provided "AS IS." Contributions from the community are welcome to help build and maintain the code.
The _Microsoft.SCIM_ project is the library that defines the components of the w
![Breakdown: A request translated into calls to the provider's methods](media/use-scim-to-provision-users-and-groups/scim-figure-3.png)
-The _Microsoft.SCIM.WebHostSample_ project is a Visual Studio ASP.NET Core Web Application, based on the _Empty_ template. This allows the sample code to be deployed as standalone, hosted in containers or within Internet Information Services. It also implements the _Microsoft.SCIM.IProvider_ interface keeping classes in memory as a sample identity store.
+The _Microsoft.SCIM.WebHostSample_ project is an ASP.NET Core Web Application, based on the _Empty_ template. It allows the sample code to be deployed standalone, hosted in containers, or hosted within Internet Information Services. It also implements the _Microsoft.SCIM.IProvider_ interface, keeping classes in memory as a sample identity store.
```csharp
- public class Startup
- {
- ...
- public IMonitor MonitoringBehavior { get; set; }
- public IProvider ProviderBehavior { get; set; }
+public class Startup
+{
+ ...
+ public IMonitor MonitoringBehavior { get; set; }
+ public IProvider ProviderBehavior { get; set; }
- public Startup(IWebHostEnvironment env, IConfiguration configuration)
- {
- ...
- this.MonitoringBehavior = new ConsoleMonitor();
- this.ProviderBehavior = new InMemoryProvider();
- }
+ public Startup(IWebHostEnvironment env, IConfiguration configuration)
+ {
...
+ this.MonitoringBehavior = new ConsoleMonitor();
+ this.ProviderBehavior = new InMemoryProvider();
+ }
+ ...
```

### Building a custom SCIM endpoint
-The SCIM service must have an HTTP address and server authentication certificate of which the root certification authority is one of the following names:
+The SCIM endpoint must have an HTTP address and a server authentication certificate whose root certification authority is one of the following:
* CNNIC
* Comodo
The SCIM service must have an HTTP address and server authentication certificate
The .NET Core SDK includes an HTTPS development certificate that can be used during development. The certificate is installed as part of the first-run experience. Depending on how you run the ASP.NET Core Web Application, it listens on a different port:
-* Microsoft.SCIM.WebHostSample: https://localhost:5001
-* IIS Express: https://localhost:44359/
+* Microsoft.SCIM.WebHostSample: <https://localhost:5001>
+* IIS Express: <https://localhost:44359/>
For more information on HTTPS in ASP.NET Core, see [Enforce HTTPS in ASP.NET Core](/aspnet/core/security/enforcing-ssl).

### Handling endpoint authentication
-Requests from Azure Active Directory include an OAuth 2.0 bearer token. Any service receiving the request should authenticate the issuer as being Azure Active Directory for the expected Azure Active Directory tenant.
+Requests from the Azure AD Provisioning Service include an OAuth 2.0 bearer token. A bearer token is a security token that's issued by an authorization server, such as Azure AD, and is trusted by your application. You can configure the Azure AD provisioning service to use one of the following tokens:
-In the token, the issuer is identified by an iss claim, like `"iss":"https://sts.windows.net/cbb1a5ac-f33b-45fa-9bf5-f37db0fed422/"`. In this example, the base address of the claim value, `https://sts.windows.net`, identifies Azure Active Directory as the issuer, while the relative address segment, _cbb1a5ac-f33b-45fa-9bf5-f37db0fed422_, is a unique identifier of the Azure Active Directory tenant for which the token was issued.
+- A long-lived bearer token. If the SCIM endpoint requires an OAuth bearer token from an issuer other than Azure AD, then copy the required OAuth bearer token into the optional **Secret Token** field. In a development environment, you can use the testing token from the `/scim/token` endpoint. Test tokens shouldn't be used in production environments.
-The audience for the token will be the application template ID for the application in the gallery, each of the applications registered in a single tenant may receive the same `iss` claim with SCIM requests. The application template ID for all custom apps is _8adf8e6e-67b2-4cf2-a259-e3dc5476c621_. The token generated by the Azure AD provisioning service should only be used for testing. It should not be used in production environments.
+- Azure AD bearer token. If the **Secret Token** field is left blank, Azure AD includes an OAuth bearer token issued from Azure AD with each request. Apps that use Azure AD as an identity provider can validate this Azure AD-issued token.
-In the sample code, requests are authenticated using the Microsoft.AspNetCore.Authentication.JwtBearer package. The following code enforces that requests to any of the serviceΓÇÖs endpoints are authenticated using the bearer token issued by Azure Active Directory for a specified tenant:
+ - The application that receives requests should validate the token issuer as being Azure AD for an expected Azure AD tenant.
+ - In the token, the issuer is identified by an `iss` claim. For example, `"iss":"https://sts.windows.net/12345678-0000-0000-0000-000000000000/"`. In this example, the base address of the claim value, `https://sts.windows.net` identifies Azure AD as the issuer, while the relative address segment, _12345678-0000-0000-0000-000000000000_, is a unique identifier of the Azure AD tenant for which the token was issued.
+ - The audience for a token is the **Application ID** for the application in the gallery. Applications registered in a single tenant receive the same `iss` claim with SCIM requests. The application ID for all custom apps is _8adf8e6e-67b2-4cf2-a259-e3dc5476c621_. The token generated by the Azure AD provisioning service should only be used for testing. It shouldn't be used in production environments.
+In the sample code, requests are authenticated using the Microsoft.AspNetCore.Authentication.JwtBearer package. The following code enforces that requests to any of the service's endpoints are authenticated using the bearer token issued by Azure AD for a specified tenant:
```csharp
- public void ConfigureServices(IServiceCollection services)
+public void ConfigureServices(IServiceCollection services)
+{
+ if (_env.IsDevelopment())
+ {
+ ...
+ }
+ else
+ {
+ services.AddAuthentication(options =>
{
- if (_env.IsDevelopment())
+ options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
+ options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
+ })
+ .AddJwtBearer(options =>
{
+                    options.Authority = "https://sts.windows.net/12345678-0000-0000-0000-000000000000/";
+ options.Audience = "8adf8e6e-67b2-4cf2-a259-e3dc5476c621";
...
- }
- else
- {
- services.AddAuthentication(options =>
- {
- options.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
- options.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
- })
- .AddJwtBearer(options =>
- {
- options.Authority = " https://sts.windows.net/cbb1a5ac-f33b-45fa-9bf5-f37db0fed422/";
- options.Audience = "8adf8e6e-67b2-4cf2-a259-e3dc5476c621";
- ...
- });
- }
- ...
- }
+ });
+ }
+ ...
+}
- public void Configure(IApplicationBuilder app)
- {
- ...
- app.UseAuthentication();
- app.UseAuthorization();
- ...
- }
+public void Configure(IApplicationBuilder app)
+{
+ ...
+ app.UseAuthentication();
+ app.UseAuthorization();
+ ...
+}
```
-A bearer token is also required to use of the provided [postman tests](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint) and perform local debugging using localhost. The sample code uses ASP.NET Core environments to change the authentication options during development stage and enable the use a self-signed token.
+A bearer token is also required to use the provided [Postman tests](https://github.com/AzureAD/SCIMReferenceCode/wiki/Test-Your-SCIM-Endpoint) and to perform local debugging using localhost. The sample code uses ASP.NET Core environments to change the authentication options during the development stage and enable the use of a self-signed token.
For more information on multiple environments in ASP.NET Core, see [Use multiple environments in ASP.NET Core](/aspnet/core/fundamentals/environments).
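As a rough, standalone sketch (not the reference code's own implementation), the following shows how such a self-signed token for local testing can be produced with the System.IdentityModel.Tokens.Jwt package. The signing key and issuer here are placeholder values you'd align with your development configuration; the audience is the custom-app ID mentioned earlier.

```csharp
using System;
using System.Text;
using System.IdentityModel.Tokens.Jwt;
using Microsoft.IdentityModel.Tokens;

// Placeholder values for local debugging only; never ship a static key like this.
const string signingKey = "development-only-signing-key-0123456789";
const string issuer = "https://localhost:5001";
const string audience = "8adf8e6e-67b2-4cf2-a259-e3dc5476c621";

var credentials = new SigningCredentials(
    new SymmetricSecurityKey(Encoding.UTF8.GetBytes(signingKey)),
    SecurityAlgorithms.HmacSha256);

var token = new JwtSecurityToken(
    issuer: issuer,
    audience: audience,
    notBefore: DateTime.UtcNow,
    expires: DateTime.UtcNow.AddHours(2),
    signingCredentials: credentials);

// Paste the printed value into the Authorization: Bearer header of the Postman tests.
Console.WriteLine(new JwtSecurityTokenHandler().WriteToken(token));
```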
private string GenerateJSONWebToken()
***Example 1. Query the service for a matching user***
-Azure Active Directory queries the service for a user with an `externalId` attribute value matching the mailNickname attribute value of a user in Azure AD. The query is expressed as a Hypertext Transfer Protocol (HTTP) request such as this example, wherein jyoung is a sample of a mailNickname of a user in Azure Active Directory.
+Azure AD queries the service for a user with an `externalId` attribute value matching the mailNickname attribute value of a user in Azure AD. The query is expressed as a Hypertext Transfer Protocol (HTTP) request such as this example, wherein jyoung is a sample of a mailNickname of a user in Azure AD.
>[!NOTE]
> This is an example only. Not all users will have a mailNickname attribute, and the value a user has may not be unique in the directory. Also, the attribute used for matching (which in this case is `externalId`) is configurable in the [Azure AD attribute mappings](customize-application-attributes.md).
GET https://.../scim/Users?filter=externalId eq jyoung HTTP/1.1
Authorization: Bearer ...
```
-In the sample code the request is translated into a call to the QueryAsync method of the service's provider. Here is the signature of that method:
+In the sample code, the request is translated into a call to the QueryAsync method of the service's provider. Here's the signature of that method:
```csharp
// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
In the sample query, for a user with a given value for the `externalId` attribut
***Example 2. Provision a user***
-If the response to a query to the web service for a user with an `externalId` attribute value that matches the mailNickname attribute value of a user doesn't return any users, then Azure AD requests that the service provision a user corresponding to the one in Azure AD. Here is an example of such a request:
+If the response to a query to the SCIM endpoint for a user with an `externalId` attribute value that matches the mailNickname attribute value of a user doesn't return any users, then Azure AD requests that the service provision a user corresponding to the one in Azure AD. Here's an example of such a request:
-```
+```http
POST https://.../scim/Users HTTP/1.1
Authorization: Bearer ...
Content-type: application/scim+json
Content-type: application/scim+json
"manager":null} ```
-In the sample code the request is translated into a call to the CreateAsync method of the service's provider. Here is the signature of that method:
+In the sample code, the request is translated into a call to the CreateAsync method of the service's provider. Here's the signature of that method:
```csharp
// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
In the sample code the request is translated into a call to the CreateAsync meth
Task<Resource> CreateAsync(IRequest<Resource> request);
```
-In a request to provision a user, the value of the resource argument is an instance of the Microsoft.SCIM.Core2EnterpriseUser class, defined in the Microsoft.SCIM.Schemas library. If the request to provision the user succeeds, then the implementation of the method is expected to return an instance of the Microsoft.SCIM.Core2EnterpriseUser class, with the value of the Identifier property set to the unique identifier of the newly provisioned user.
+In a request to provision a user, the value of the resource argument is an instance of the Microsoft.SCIM.Core2EnterpriseUser class, defined in the Microsoft.SCIM.Schemas library. If the request to provision the user succeeds, then the implementation of the method is expected to return an instance of the Microsoft.SCIM.Core2EnterpriseUser class, with the value of the Identifier property set to the unique identifier of the newly provisioned user.
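As a rough illustration of that contract, the sketch below uses a hypothetical user shape and an in-memory store rather than the Microsoft.SCIM types: the service assigns a unique identifier to the newly provisioned user and returns the stored representation. The same store pattern also serves the retrieval described in Example 3.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical user shape; the reference code works with Microsoft.SCIM.Core2EnterpriseUser instead.
public sealed class ScimUser
{
    public string? Identifier { get; set; }
    public string? ExternalId { get; set; }
    public string? UserName { get; set; }
}

public sealed class InMemoryUserStore
{
    private readonly ConcurrentDictionary<string, ScimUser> _users = new();

    // Assign a service-generated identifier and return the stored user,
    // mirroring what a CreateAsync implementation is expected to do.
    public ScimUser Create(ScimUser user)
    {
        user.Identifier = Guid.NewGuid().ToString();
        _users[user.Identifier] = user;
        return user;
    }

    // Look up a user by the identifier assigned at creation time (Example 3).
    public ScimUser? Retrieve(string identifier) =>
        _users.TryGetValue(identifier, out ScimUser? user) ? user : null;
}
```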
***Example 3. Query the current state of a user***
-To update a user known to exist in an identity store fronted by an SCIM, Azure Active Directory proceeds by requesting the current state of that user from the service with a request such as:
+To update a user known to exist in an identity store fronted by a SCIM endpoint, Azure AD proceeds by requesting the current state of that user from the service with a request such as:
```
GET ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
Authorization: Bearer ...
```
-In the sample code the request is translated into a call to the RetrieveAsync method of the service's provider. Here is the signature of that method:
+In the sample code, the request is translated into a call to the RetrieveAsync method of the service's provider. Here's the signature of that method:
```csharp
// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
Task<Resource> RetrieveAsync(IRequest<IResourceRetrievalParameters> request);
In the example of a request to retrieve the current state of a user, the values of the properties of the object provided as the value of the parameters argument are as follows:

* Identifier: "54D382A4-2050-4C03-94D1-E769F1D15682"
-* SchemaIdentifier: "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
+* SchemaIdentifier: `urn:ietf:params:scim:schemas:extension:enterprise:2.0:User`
***Example 4. Query the value of a reference attribute to be updated***
-If a reference attribute is to be updated, then Azure Active Directory queries the service to determine whether the current value of the reference attribute in the identity store fronted by the service already matches the value of that attribute in Azure Active Directory. For users, the only attribute of which the current value is queried in this way is the manager attribute. Here is an example of a request to determine whether the manager attribute of a user object currently has a certain value:
-In the sample code the request is translated into a call to the QueryAsync method of the service's provider. The value of the properties of the object provided as the value of the parameters argument are as follows:
+If a reference attribute is to be updated, then Azure AD queries the service to determine whether the current value of the reference attribute in the identity store fronted by the service already matches the value of that attribute in Azure AD. For users, the only attribute of which the current value is queried in this way is the manager attribute. Here's an example of a request to determine whether the manager attribute of a user object currently has a certain value:
+In the sample code, the request is translated into a call to the QueryAsync method of the service's provider. The value of the properties of the object provided as the value of the parameters argument are as follows:
* parameters.AlternateFilters.Count: 2
* parameters.AlternateFilters.ElementAt(x).AttributePath: "ID"
In the sample code the request is translated into a call to the QueryAsync metho
* parameters.AlternateFilters.ElementAt(y).ComparisonOperator: ComparisonOperator.Equals
* parameters.AlternateFilter.ElementAt(y).ComparisonValue: "2819c223-7f76-453a-919d-413861904646"
* parameters.RequestedAttributePaths.ElementAt(0): "ID"
-* parameters.SchemaIdentifier: "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
+* parameters.SchemaIdentifier: `urn:ietf:params:scim:schemas:extension:enterprise:2.0:User`
-Here, the value of the index x can be 0 and the value of the index y can be 1, or the value of x can be 1 and the value of y can be 0, depending on the order of the expressions of the filter query parameter.
+The value of the index x can be `0` and the value of the index y can be `1`. Or the value of x can be `1` and the value of y can be `0`. It depends on the order of the expressions of the filter query parameter.
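One defensive way to handle that is to pick each clause by its attribute path rather than by its position. The sketch below uses a hypothetical, simplified clause shape (the real Microsoft.SCIM filter objects may differ) and assumes the second clause's attribute path is `manager`, as in the manager example above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical clause shape standing in for the real filter objects.
public sealed record FilterClause(string AttributePath, string ComparisonValue);

public static class ManagerFilterReader
{
    // Select each clause by attribute path so the result doesn't depend on the
    // order of the expressions in the filter query parameter.
    public static (string? UserId, string? ManagerId) Read(IReadOnlyList<FilterClause> clauses)
    {
        string? userId = clauses.FirstOrDefault(c =>
            c.AttributePath.Equals("ID", StringComparison.OrdinalIgnoreCase))?.ComparisonValue;
        string? managerId = clauses.FirstOrDefault(c =>
            c.AttributePath.Equals("manager", StringComparison.OrdinalIgnoreCase))?.ComparisonValue;
        return (userId, managerId);
    }
}
```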
-***Example 5. Request from Azure AD to an SCIM service to update a user***
+***Example 5. Request from Azure AD to a SCIM endpoint to update a user***
-Here is an example of a request from Azure Active Directory to an SCIM service to update a user:
+Here's an example of a request from Azure AD to a SCIM endpoint to update a user:
-```
+```http
PATCH ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
Authorization: Bearer ...
Content-type: application/scim+json
Content-type: application/scim+json
"value":"2819c223-7f76-453a-919d-413861904646"}]}]} ```
-In the sample code the request is translated into a call to the UpdateAsync method of the service's provider. Here is the signature of that method:
+In the sample code, the request is translated into a call to the UpdateAsync method of the service's provider. Here's the signature of that method:
```csharp
// System.Threading.Tasks.Tasks and
In the example of a request to update a user, the object provided as the value o
|Argument|Value|
|-|-|
-|ResourceIdentifier.Identifier|"54D382A4-2050-4C03-94D1-E769F1D15682"|
-|ResourceIdentifier.SchemaIdentifier|"urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"|
-|(PatchRequest as PatchRequest2).Operations.Count|1|
-|(PatchRequest as PatchRequest2).Operations.ElementAt(0).OperationName|OperationName.Add|
-|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Path.AttributePath|"manager"|
-|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.Count|1|
-|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Reference|http://.../scim/Users/2819c223-7f76-453a-919d-413861904646|
-|(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Value| 2819c223-7f76-453a-919d-413861904646|
+|`ResourceIdentifier.Identifier`|"54D382A4-2050-4C03-94D1-E769F1D15682"|
+|`ResourceIdentifier.SchemaIdentifier`| `urn:ietf:params:scim:schemas:extension:enterprise:2.0:User`|
+|`(PatchRequest as PatchRequest2).Operations.Count`|1|
+|`(PatchRequest as PatchRequest2).Operations.ElementAt(0).OperationName`| `OperationName.Add`|
+|`(PatchRequest as PatchRequest2).Operations.ElementAt(0).Path.AttributePath`| manager|
+|`(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.Count`|1|
+|`(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Reference`|`http://.../scim/Users/2819c223-7f76-453a-919d-413861904646`|
+|`(PatchRequest as PatchRequest2).Operations.ElementAt(0).Value.ElementAt(0).Value`| 2819c223-7f76-453a-919d-413861904646|
***Example 6. Deprovision a user***
-To deprovision a user from an identity store fronted by an SCIM service, Azure AD sends a request such as:
+To deprovision a user from an identity store fronted by a SCIM endpoint, Azure AD sends a request such as:
-```
+```http
DELETE ~/scim/Users/54D382A4-2050-4C03-94D1-E769F1D15682 HTTP/1.1
Authorization: Bearer ...
```
-In the sample code the request is translated into a call to the DeleteAsync method of the service's provider. Here is the signature of that method:
+In the sample code, the request is translated into a call to the DeleteAsync method of the service's provider. Here's the signature of that method:
```csharp
// System.Threading.Tasks.Tasks is defined in mscorlib.dll.
Task DeleteAsync(IRequest<IResourceIdentifier> request);
The object provided as the value of the resourceIdentifier argument has these property values in the example of a request to deprovision a user:

* ResourceIdentifier.Identifier: "54D382A4-2050-4C03-94D1-E769F1D15682"
-* ResourceIdentifier.SchemaIdentifier: "urn:ietf:params:scim:schemas:extension:enterprise:2.0:User"
+* ResourceIdentifier.SchemaIdentifier: `urn:ietf:params:scim:schemas:extension:enterprise:2.0:User`
-## Integrate your SCIM endpoint with the Azure AD SCIM client
+## Integrate your SCIM endpoint with the Azure AD Provisioning Service
Azure AD can be configured to automatically provision assigned users and groups to applications that implement a specific profile of the [SCIM 2.0 protocol](https://tools.ietf.org/html/rfc7644). The specifics of the profile are documented in [Understand the Azure AD SCIM implementation](#understand-the-azure-ad-scim-implementation). Check with your application provider, or your application provider's documentation for statements of compatibility with these requirements.

> [!IMPORTANT]
-> The Azure AD SCIM implementation is built on top of the Azure AD user provisioning service, which is designed to constantly keep users in sync between Azure AD and the target application, and implements a very specific set of standard operations. It's important to understand these behaviors to understand the behavior of the Azure AD SCIM client. For more information, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
+> The Azure AD SCIM implementation is built on top of the Azure AD user provisioning service, which is designed to constantly keep users in sync between Azure AD and the target application, and implements a very specific set of standard operations. It's important to understand these behaviors to understand the behavior of the Azure AD Provisioning Service. For more information, see the section [Provisioning cycles: Initial and incremental](how-provisioning-works.md#provisioning-cycles-initial-and-incremental) in [How provisioning works](how-provisioning-works.md).
### Getting started
-Applications that support the SCIM profile described in this article can be connected to Azure Active Directory using the "non-gallery application" feature in the Azure AD application gallery. Once connected, Azure AD runs a synchronization process every 40 minutes where it queries the application's SCIM endpoint for assigned users and groups, and creates or modifies them according to the assignment details.
+Applications that support the SCIM profile described in this article can be connected to Azure AD using the "non-gallery application" feature in the Azure AD application gallery. Once connected, Azure AD runs a synchronization process every 40 minutes where it queries the application's SCIM endpoint for assigned users and groups, and creates or modifies them according to the assignment details.
**To connect an application that supports SCIM:**
-1. Sign in to the [Azure AD portal](https://aad.portal.azure.com). Note that you can get access a free trial for Azure Active Directory with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program)
+1. Sign in to the [Azure AD portal](https://aad.portal.azure.com). You can get a free trial of Azure AD with P2 licenses by signing up for the [developer program](https://developer.microsoft.com/office/dev-program).
1. Select **Enterprise applications** from the left pane. A list of all configured apps is shown, including apps that were added from the gallery.
1. Select **+ New application** > **+ Create your own application**.
1. Enter a name for your application, choose the option "*integrate any other application you don't find in the gallery*" and select **Add** to create an app object. The new app is added to the list of enterprise applications and opens to its app management screen.
+
+ The following screenshot shows the Azure AD application gallery:
- ![Screenshot shows the Azure AD application gallery](media/use-scim-to-provision-users-and-groups/scim-figure-2b-1.png)
- *Azure AD application gallery*
+ ![Screenshot shows the Azure AD application gallery.](media/use-scim-to-provision-users-and-groups/scim-figure-2b-1.png)
+
> [!NOTE]
> If you are using the old app gallery experience, follow the screen guide below.
+ The following screenshot shows the Azure AD old app gallery experience:
+ ![Screenshot shows the Azure AD old app gallery experience](media/use-scim-to-provision-users-and-groups/scim-figure-2a.png)
- *Azure AD old app gallery experience*
+
1. In the app management screen, select **Provisioning** in the left panel.
1. In the **Provisioning Mode** menu, select **Automatic**.
+
+ The following screenshot shows the provisioning settings in the Azure portal:
- ![Example: An app's Provisioning page in the Azure portal](media/use-scim-to-provision-users-and-groups/scim-figure-2b.png)<br/>
- *Configuring provisioning in the Azure portal*
+ ![Screenshot of app provisioning page in the Azure portal.](media/use-scim-to-provision-users-and-groups/scim-figure-2b.png)
1. In the **Tenant URL** field, enter the URL of the application's SCIM endpoint. Example: `https://api.contoso.com/scim/`
1. If the SCIM endpoint requires an OAuth bearer token from an issuer other than Azure AD, then copy the required OAuth bearer token into the optional **Secret Token** field. If this field is left blank, Azure AD includes an OAuth bearer token issued from Azure AD with each request. Apps that use Azure AD as an identity provider can validate this Azure AD-issued token.
> [!NOTE]
> It's ***not*** recommended to leave this field blank and rely on a token generated by Azure AD. This option is primarily available for testing purposes.
-1. Select **Test Connection** to have Azure Active Directory attempt to connect to the SCIM endpoint. If the attempt fails, error information is displayed.
+1. Select **Test Connection** to have Azure AD attempt to connect to the SCIM endpoint. If the attempt fails, error information is displayed.
> [!NOTE]
> **Test Connection** queries the SCIM endpoint for a user that doesn't exist, using a random GUID as the matching property selected in the Azure AD configuration. The expected correct response is HTTP 200 OK with an empty SCIM ListResponse message (a minimal sketch of such a response follows these steps).
1. If the attempts to connect to the application succeed, then select **Save** to save the admin credentials.
-1. In the **Mappings** section, there are two selectable sets of [attribute mappings](customize-application-attributes.md): one for user objects and one for group objects. Select each one to review the attributes that are synchronized from Azure Active Directory to your app. The attributes selected as **Matching** properties are used to match the users and groups in your app for update operations. Select **Save** to commit any changes.
+1. In the **Mappings** section, there are two selectable sets of [attribute mappings](customize-application-attributes.md): one for user objects and one for group objects. Select each one to review the attributes that are synchronized from Azure AD to your app. The attributes selected as **Matching** properties are used to match the users and groups in your app for update operations. Select **Save** to commit any changes.
> [!NOTE]
> You can optionally disable syncing of group objects by disabling the "groups" mapping.
Applications that support the SCIM profile described in this article can be conn
1. Under **Settings**, the **Scope** field defines which users and groups are synchronized. Select **Sync only assigned users and groups** (recommended) to only sync users and groups assigned in the **Users and groups** tab.
1. Once your configuration is complete, set the **Provisioning Status** to **On**.
1. Select **Save** to start the Azure AD provisioning service.
-1. If syncing only assigned users and groups (recommended), be sure to select the **Users and groups** tab and assign the users or groups you want to sync.
+1. If syncing only assigned users and groups (recommended), select the **Users and groups** tab. Then, assign the users or groups you want to sync.
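For reference, the sketch below shows the kind of response **Test Connection** expects when no user matches the query: HTTP 200 OK with an empty SCIM ListResponse. It's a hypothetical ASP.NET Core minimal-API handler, not the reference code, and assumes a .NET 6+ web project with implicit usings.

```csharp
using System.Text.Json;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Preserve SCIM property casing (for example, "Resources") instead of the camelCase web default.
var scimJsonOptions = new JsonSerializerOptions();

// Any filter query that matches no users must still return 200 OK with an empty ListResponse.
app.MapGet("/scim/Users", (string? filter) =>
    Results.Json(new
    {
        schemas = new[] { "urn:ietf:params:scim:api:messages:2.0:ListResponse" },
        totalResults = 0,
        Resources = Array.Empty<object>(),
        startIndex = 1,
        itemsPerPage = 20
    }, scimJsonOptions, contentType: "application/scim+json"));

app.Run();
```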
Once the initial cycle has started, you can select **Provisioning logs** in the left panel to monitor progress, which shows all actions done by the provisioning service on your app. For more information on how to read the Azure AD provisioning logs, see [Reporting on automatic user account provisioning](check-status-user-account-provisioning.md).
Once the initial cycle has started, you can select **Provisioning logs** in the
## Publish your application to the Azure AD application gallery
-If you're building an application that will be used by more than one tenant, you can make it available in the Azure AD application gallery. This will make it easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../manage-apps/v2-howto-app-gallery-listing.md). Microsoft will work with you to integrate your application into our gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers to use.
+If you're building an application that will be used by more than one tenant, you can make it available in the Azure AD application gallery. The gallery makes it easy for organizations to discover the application and configure provisioning. Publishing your app in the Azure AD gallery and making provisioning available to others is easy. Check out the steps [here](../manage-apps/v2-howto-app-gallery-listing.md). Microsoft will work with you to integrate your application into our gallery, test your endpoint, and release onboarding [documentation](../saas-apps/tutorial-list.md) for customers to use.
### Gallery onboarding checklist
Use the checklist to onboard your application quickly and give customers a smooth deployment experience. The information will be gathered from you when onboarding to the gallery.
The SCIM spec doesn't define a SCIM-specific scheme for authentication and autho
|Authorization method|Pros|Cons|Support|
|--|--|--|--|
|Username and password (not recommended or supported by Azure AD)|Easy to implement|Insecure - [Your Pa$$word doesn't matter](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/your-pa-word-doesn-t-matter/ba-p/731984)|Not supported for new gallery or non-gallery apps.|
-|Long-lived bearer token|Long-lived tokens do not require a user to be present. They are easy for admins to use when setting up provisioning.|Long-lived tokens can be hard to share with an admin without using insecure methods such as email. |Supported for gallery and non-gallery apps. |
-|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.|
-|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens do not have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be completely automated, and new tokens can be silently requested without user interaction. ||Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth client credentials grant on non-gallery is in our backlog.|
+|Long-lived bearer token|Long-lived tokens don't require a user to be present. They're easy for admins to use when setting up provisioning.|Long-lived tokens can be hard to share with an admin without using insecure methods such as email. |Supported for gallery and non-gallery apps. |
+|OAuth authorization code grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. A real user must be present during initial authorization, adding a level of accountability. |Requires a user to be present. If the user leaves the organization, the token is invalid, and authorization will need to be completed again.|Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth code grant on non-gallery is in our backlog, in addition to support for configurable auth / token URLs on the gallery app.|
+|OAuth client credentials grant|Access tokens are much shorter-lived than passwords, and have an automated refresh mechanism that long-lived bearer tokens don't have. Both the authorization code grant and the client credentials grant create the same type of access token, so moving between these methods is transparent to the API. Provisioning can be automated, and new tokens can be silently requested without user interaction. ||Supported for gallery apps, but not non-gallery apps. However, you can provide an access token in the UI as the secret token for short term testing purposes. Support for OAuth client credentials grant on non-gallery is in our backlog.|
> [!NOTE]
> It's not recommended to leave the token field blank in the Azure AD provisioning configuration custom app UI. The token generated is primarily available for testing purposes.
The provisioning service supports the [authorization code grant](https://tools.i
- **Token exchange URL**, a URL used by the client to exchange an authorization grant for an access token, typically with client authentication.

-- **Client ID**, the authorization server issues the registered client a client identifier, which is a unique string representing the registration information provided by the client. The client identifier is not a secret; it is exposed to the resource owner and **must not** be used alone for client authentication.
+- **Client ID**, the authorization server issues the registered client a client identifier, which is a unique string representing the registration information provided by the client. The client identifier isn't a secret; it's exposed to the resource owner and **must not** be used alone for client authentication.
- **Client secret**, a secret generated by the authorization server that should be a unique value known only to the authorization server.
Best practices (recommended, but not required):
* Support multiple redirect URLs. Administrators can configure provisioning from both "portal.azure.com" and "aad.portal.azure.com". Supporting multiple redirect URLs will ensure that users can authorize access from either portal.
* Support multiple secrets for easy renewal, without downtime.
-#### How to setup OAuth code grant flow
+#### How to set up OAuth code grant flow
1. Sign in to the Azure portal, go to **Enterprise applications** > **Application** > **Provisioning** and select **Authorize**.
Best practices (recommended, but not required):
1. Third party app redirects user back to Azure portal and provides the grant code
- 1. Azure AD provisioning services calls the token URL and provides the grant code. The third party application responds with the access token, refresh token, and expiry date
+ 1. Azure AD Provisioning Service calls the token URL and provides the grant code. The third party application responds with the access token, refresh token, and expiry date
1. When the provisioning cycle begins, the service checks if the current access token is valid and exchanges it for a new token if needed. The access token is provided in each request made to the app and the validity of the request is checked before each request.

> [!NOTE]
-> While it's not possible to setup OAuth on the non-gallery applications, you can manually generate an access token from your authorization server and input it as the secret token to a non-gallery application. This allows you to verify compatibility of your SCIM server with the Azure AD SCIM client before onboarding to the app gallery, which does support the OAuth code grant.
+> While it's not possible to set up OAuth on the non-gallery applications, you can manually generate an access token from your authorization server and input it as the secret token to a non-gallery application. This allows you to verify compatibility of your SCIM server with the Azure AD Provisioning Service before onboarding to the app gallery, which does support the OAuth code grant.
-**Long-lived OAuth bearer tokens:** If your application doesn't support the OAuth authorization code grant flow, instead generate a long lived OAuth bearer token that an administrator can use to setup the provisioning integration. The token should be perpetual, or else the provisioning job will be [quarantined](application-provisioning-quarantine-status.md) when the token expires.
+**Long-lived OAuth bearer tokens:** If your application doesn't support the OAuth authorization code grant flow, instead generate a long-lived OAuth bearer token that an administrator can use to set up the provisioning integration. The token should be perpetual, or else the provisioning job will be [quarantined](application-provisioning-quarantine-status.md) when the token expires.
-For additional authentication and authorization methods, let us know on [UserVoice](https://aka.ms/appprovisioningfeaturerequest).
+For more authentication and authorization methods, let us know on [UserVoice](https://aka.ms/appprovisioningfeaturerequest).
### Gallery go-to-market launch check list
-To help drive awareness and demand of our joint integration, we recommend you update your existing documentation and amplify the integration in your marketing channels. The below is a set of checklist activities we recommend you complete to support the launch
+To help drive awareness and demand of our joint integration, we recommend you update your existing documentation and amplify the integration in your marketing channels. We recommend you complete the following checklist to support the launch:
> [!div class="checklist"] > * Ensure your sales and customer support teams are aware, ready, and can speak to the integration capabilities. Brief your teams, provide them with FAQs and include the integration into your sales materials.
-> * Craft a blog post or press release that describes the joint integration, the benefits and how to get started. [Example: Imprivata and Azure Active Directory Press Release](https://www.imprivata.com/company/press/imprivata-introduces-iam-cloud-platform-healthcare-supported-microsoft)
+> * Craft a blog post or press release that describes the joint integration, the benefits and how to get started. [Example: Imprivata and Azure AD Press Release](https://www.imprivata.com/company/press/imprivata-introduces-iam-cloud-platform-healthcare-supported-microsoft)
> * Leverage your social media like Twitter, Facebook or LinkedIn to promote the integration to your customers. Be sure to include @AzureAD so we can retweet your post. [Example: Imprivata Twitter Post](https://twitter.com/azuread/status/1123964502909779968)
> * Create or update your marketing pages/website (e.g. integration page, partner page, pricing page, etc.) to include the availability of the joint integration. [Example: Pingboard integration Page](https://pingboard.com/org-chart-for), [Smartsheet integration page](https://www.smartsheet.com/marketplace/apps/microsoft-azure-ad), [Monday.com pricing page](https://monday.com/pricing/)
-> * Create a help center article or technical documentation on how customers can get started. [Example: Envoy + Microsoft Azure Active Directory integration.](https://envoy.help/en/articles/3453335-microsoft-azure-active-directory-integration/
+> * Create a help center article or technical documentation on how customers can get started. [Example: Envoy + Microsoft Azure AD integration.](https://envoy.help/en/articles/3453335-microsoft-azure-active-directory-integration/
)
> * Alert customers of the new integration through your customer communication (monthly newsletters, email campaigns, product release notes).
active-directory User Provisioning Sync Attributes For Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning-sync-attributes-for-mapping.md
Title: Synchronize attributes to Azure Active Directory for mapping
description: When configuring user provisioning with Azure Active Directory and SaaS apps, use the directory extension feature to add source attributes that aren't synchronized by default. -+
active-directory User Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/user-provisioning.md
Title: What is automated app user provisioning in Azure Active Directory
description: An introduction to how you can use Azure Active Directory to automatically provision, de-provision, and continuously update user accounts across multiple third-party applications. -+
active-directory What Is Hr Driven Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/what-is-hr-driven-provisioning.md
Title: 'What is HR driven provisioning with Azure Active Directory? | Microsoft
description: Describes overview of HR driven provisioning. -+
active-directory Workday Attribute Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-attribute-reference.md
Title: Workday attribute reference for Azure Active Directory
description: Learn which which attributes that you can fetch from Workday using XPATH queries in Azure Active Directory. -+
active-directory Workday Integration Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-integration-reference.md
Title: Azure Active Directory and Workday integration reference
description: Technical deep dive into Workday-HR driven provisioning in Azure Active Directory -+
active-directory Workday Retrieve Pronoun Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/workday-retrieve-pronoun-information.md
Title: Retrieve pronoun information from Workday
description: Learn how to retrieve pronoun information from Workday -+
active-directory Active Directory App Proxy Protect Ndes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/active-directory-app-proxy-protect-ndes.md
Title: Integrate with Azure Active Directory Application Proxy on an NDES server
description: Guidance on deploying an Azure Active Directory Application Proxy to protect your NDES server. -+
active-directory Application Proxy Add On Premises Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-add-on-premises-application.md
Title: Tutorial - Add an on-premises app - Application Proxy in Azure Active Dir
description: Azure Active Directory (Azure AD) has an Application Proxy service that enables users to access on-premises applications by signing in with their Azure AD account. This tutorial shows you how to prepare your environment for use with Application Proxy. Then, it uses the Azure portal to add an on-premises application to your Azure AD tenant. -+
active-directory Application Proxy Application Gateway Waf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-application-gateway-waf.md
To ensure the connector VMs send requests to the Application Gateway, an [Azure
### Test the application.
-After [adding a user for testing](/azure/active-directory/app-proxy/application-proxy-add-on-premises-application#add-a-user-for-testing), you can test the application by accessing https://www.fabrikam.one. The user will be prompted to authenticate in Azure AD, and upon successful authentication, will access the application.
+After [adding a user for testing](./application-proxy-add-on-premises-application.md#add-a-user-for-testing), you can test the application by accessing https://www.fabrikam.one. The user will be prompted to authenticate in Azure AD, and upon successful authentication, will access the application.
![Screenshot of authentication step.](./media/application-proxy-waf/sign-in-2.png)

![Screenshot of server response.](./media/application-proxy-waf/application-gateway-response.png)
The Application Gateway [Firewall logs][waf-logs] provide more details about the
## Next steps
-To prevent false positives, learn how to [Customize Web Application Firewall rules](/azure/web-application-firewall/ag/application-gateway-customize-waf-rules-portal), configure [Web Application Firewall exclusion lists](/azure/web-application-firewall/ag/application-gateway-waf-configuration?tabs=portal), or [Web Application Firewall custom rules](/azure/web-application-firewall/ag/create-custom-waf-rules).
-
-[waf-overview]: /azure/web-application-firewall/ag/ag-overview
-[appgw_quick]: /azure/application-gateway/quick-create-portal
-[appproxy-add-app]: /azure/active-directory/app-proxy/application-proxy-add-on-premises-application
-[appproxy-optimize]: /azure/active-directory/app-proxy/application-proxy-network-topology
-[appproxy-custom-domain]: /azure/active-directory/app-proxy/application-proxy-configure-custom-domain
-[private-dns]: /azure/dns/private-dns-getstarted-portal
-[waf-logs]: /azure/application-gateway/application-gateway-diagnostics#firewall-log
+To prevent false positives, learn how to [Customize Web Application Firewall rules](../../web-application-firewall/ag/application-gateway-customize-waf-rules-portal.md), configure [Web Application Firewall exclusion lists](../../web-application-firewall/ag/application-gateway-waf-configuration.md?tabs=portal), or [Web Application Firewall custom rules](../../web-application-firewall/ag/create-custom-waf-rules.md).
+[waf-overview]: ../../web-application-firewall/ag/ag-overview.md
+[appgw_quick]: ../../application-gateway/quick-create-portal.md
+[appproxy-add-app]: ./application-proxy-add-on-premises-application.md
+[appproxy-optimize]: ./application-proxy-network-topology.md
+[appproxy-custom-domain]: ./application-proxy-configure-custom-domain.md
+[private-dns]: ../../dns/private-dns-getstarted-portal.md
+[waf-logs]: ../../application-gateway/application-gateway-diagnostics.md#firewall-log
active-directory Application Proxy Back End Kerberos Constrained Delegation How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-back-end-kerberos-constrained-delegation-how-to.md
Title: Troubleshoot Kerberos constrained delegation - App Proxy
description: Troubleshoot Kerberos Constrained Delegation configurations for Application Proxy -+
active-directory Application Proxy Config How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-how-to.md
Title: How to configure an Azure Active Directory Application Proxy application
description: Learn how to create and configure an Azure Active Directory Application Proxy application in a few simple steps -+
active-directory Application Proxy Config Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-problem.md
Title: Problem creating an Azure Active Directory Application Proxy application
description: How to troubleshoot issues creating Application Proxy applications in the Azure Active Directory Admin portal -+
active-directory Application Proxy Config Sso How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-config-sso-how-to.md
Title: Understand single sign-on with an on-premises app using Application Proxy
description: Understand single sign-on with an on-premises app using Application Proxy. -+
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
Title: Complex applications for Azure Active Directory Application Proxy
description: Provides an understanding of complex application in Azure Active Directory Application Proxy, and how to configure one. -+
active-directory Application Proxy Configure Connectors With Proxy Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-connectors-with-proxy-servers.md
Title: Work with existing on-premises proxy servers and Azure Active Directory
description: Covers how to work with existing on-premises proxy servers with Azure Active Directory. -+
active-directory Application Proxy Configure Cookie Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-cookie-settings.md
Title: Application Proxy cookie settings - Azure Active Directory
description: Azure Active Directory (Azure AD) has access and session cookies for accessing on-premises applications through Application Proxy. In this article, you'll find out how to use and configure the cookie settings. -+
active-directory Application Proxy Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-domain.md
Title: Custom domains in Azure Active Directory Application Proxy
description: Configure and manage custom domains in Azure Active Directory Application Proxy. -+
active-directory Application Proxy Configure Custom Home Page https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-custom-home-page.md
Title: Custom home page for published apps - Azure Active Directory Application
description: Covers the basics about Azure Active Directory Application Proxy connectors -+
active-directory Application Proxy Configure For Claims Aware Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-for-claims-aware-applications.md
Title: Claims-aware apps - Azure Active Directory Application Proxy
description: How to publish on-premises ASP.NET applications that accept AD FS claims for secure remote access by your users. -+
active-directory Application Proxy Configure Hard Coded Link Translation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-hard-coded-link-translation.md
Title: Translate links and URLs Azure Active Directory Application Proxy
description: Learn how to redirect hard-coded links for apps published with Azure Active Directory Application Proxy. -+
active-directory Application Proxy Configure Native Client Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-native-client-application.md
Title: Publish native client apps - Azure Active Directory
description: Covers how to enable native client apps to communicate with Azure Active Directory Application Proxy Connector to provide secure remote access to your on-premises apps. -+
active-directory Application Proxy Configure Single Sign On On Premises Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-on-premises-apps.md
Title: SAML single sign-on for on-premises apps with Azure Active Directory Appl
description: Learn how to provide single sign-on for on-premises applications that are secured with SAML authentication. Provide remote access to on-premises apps with Application Proxy. -+
active-directory Application Proxy Configure Single Sign On Password Vaulting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-password-vaulting.md
Title: Single sign-on to apps with Azure Active Directory Application Proxy
description: Turn on single sign-on for your published on-premises applications with Azure Active Directory Application Proxy in the Azure portal. -+
active-directory Application Proxy Configure Single Sign On With Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-headers.md
Title: Header-based single sign-on for on-premises apps with Azure AD App Proxy
description: Learn how to provide single sign-on for on-premises applications that are secured with header-based authentication. -+
active-directory Application Proxy Configure Single Sign On With Kcd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-single-sign-on-with-kcd.md
Title: Kerberos-based single sign-on (SSO) in Azure Active Directory with Applic
description: Covers how to provide single sign-on using Azure Active Directory Application Proxy. -+
active-directory Application Proxy Connectivity No Working Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectivity-no-working-connector.md
Title: No working connector group found for an Azure Active Directory Applicatio
description: Address problems you might encounter when there is no working Connector in a Connector Group for your application with the Azure Active Directory Application Proxy -+
active-directory Application Proxy Connector Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-groups.md
Title: Publish apps on separate networks via connector groups - Azure Active Dir
description: Covers how to create and manage groups of connectors in Azure Active Directory Application Proxy. -+
active-directory Application Proxy Connector Installation Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connector-installation-problem.md
Title: Problem installing the Azure Active Directory Application Proxy Agent Con
description: How to troubleshoot issues you might face when installing the Application Proxy Agent Connector for Azure Active Directory. -+
active-directory Application Proxy Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-connectors.md
Title: Understand Azure Active Directory Application Proxy connectors
description: Learn about the Azure Active Directory Application Proxy connectors. -+
active-directory Application Proxy Debug Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-apps.md
Title: Debug Application Proxy applications - Azure Active Directory
description: Debug issues with Azure Active Directory (Azure AD) Application Proxy applications. -+
active-directory Application Proxy Debug Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-debug-connectors.md
Title: Debug Application Proxy connectors - Azure Active Directory
description: Debug issues with Azure Active Directory (Azure AD) Application Proxy connectors. -+
active-directory Application Proxy Deployment Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-deployment-plan.md
Title: Plan an Azure Active Directory Application Proxy Deployment
description: An end-to-end guide for planning the deployment of Application proxy within your organization -+
active-directory Application Proxy High Availability Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-high-availability-load-balancing.md
Title: High availability and load balancing - Azure Active Directory Application
description: How traffic distribution works with your Application Proxy deployment. Includes tips for how to optimize connector performance and use load balancing for back-end servers. -+
active-directory Application Proxy Integrate With Microsoft Cloud Application Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-microsoft-cloud-application-security.md
Title: Use Application Proxy to integrate on-premises apps with Defender for Cloud Apps - Azure Active Directory
description: Configure an on-premises application in Azure Active Directory to work with Microsoft Defender for Cloud Apps. Use the Defender for Cloud Apps Conditional Access App Control to monitor and control sessions in real-time based on Conditional Access policies. You can apply these policies to on-premises applications that use Application Proxy in Azure Active Directory (Azure AD). -+
active-directory Application Proxy Integrate With Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-power-bi.md
Title: Enable remote access to Power BI with Azure Active Directory Application
description: Covers the basics about how to integrate an on-premises Power BI with Azure Active Directory Application Proxy. -+
active-directory Application Proxy Integrate With Remote Desktop Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-remote-desktop-services.md
Title: Publish Remote Desktop with Azure Active Directory Application Proxy
description: Covers how to configure App Proxy with Remote Desktop Services (RDS) -+
active-directory Application Proxy Integrate With Sharepoint Server Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server-saml.md
Title: Publish an on-premises SharePoint farm with Azure Active Directory Applic
description: Covers the basics about how to integrate an on-premises SharePoint farm with Azure Active Directory Application Proxy for SAML. -+
active-directory Application Proxy Integrate With Sharepoint Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-sharepoint-server.md
Title: Enable remote access to SharePoint - Azure Active Directory Application P
description: Covers the basics about how to integrate on-premises SharePoint Server with Azure Active Directory Application Proxy. -+
active-directory Application Proxy Integrate With Tableau https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-tableau.md
Title: Azure Active Directory Application Proxy and Tableau
description: Learn how to use Azure Active Directory (Azure AD) Application Proxy to provide remote access for your Tableau deployment. -+
active-directory Application Proxy Integrate With Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-teams.md
Title: Access Azure Active Directory Application Proxy apps in Teams
description: Use Azure Active Directory Application Proxy to access your on-premises application through Microsoft Teams. -+
active-directory Application Proxy Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-network-topology.md
Title: Network topology considerations for Azure Active Directory Application Pr
description: Covers network topology considerations when using Azure Active Directory Application Proxy. -+
active-directory Application Proxy Page Appearance Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-appearance-broken-problem.md
Title: App page doesn't display correctly for Application Proxy app
description: Guidance when the page isn't displaying correctly in an Application Proxy Application you have integrated with Azure Active Directory -+
active-directory Application Proxy Page Links Broken Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-links-broken-problem.md
Title: Links on the page don't work for an Azure Active Directory Application Pr
description: How to troubleshoot issues with broken links on Application Proxy applications you have integrated with Azure Active Directory -+
active-directory Application Proxy Page Load Speed Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-page-load-speed-problem.md
Title: An Azure Active Directory Application Proxy application takes too long to
description: Troubleshoot page load performance issues with Azure Active Directory Application Proxy -+
active-directory Application Proxy Ping Access Publishing Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-ping-access-publishing-guide.md
Title: Header-based authentication with PingAccess for Azure Active Directory Ap
description: Publish applications with PingAccess and App Proxy to support header-based authentication. -+
active-directory Application Proxy Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-powershell-samples.md
Title: PowerShell samples for Azure Active Directory Application Proxy
description: Use these PowerShell samples for Azure Active Directory Application Proxy to get information about Application Proxy apps and connectors in your directory, assign users and groups to apps, and get certificate information. -+
active-directory Application Proxy Qlik https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-qlik.md
Title: Azure Active Directory Application Proxy and Qlik Sense
description: Integrate Azure Active Directory Application Proxy with Qlik Sense. -+
active-directory Application Proxy Register Connector Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-register-connector-powershell.md
Title: Silent install Azure Active Directory Application Proxy connector
description: Covers how to perform an unattended installation of Azure Active Directory Application Proxy Connector to provide secure remote access to your on-premises apps. -+
active-directory Application Proxy Release Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-release-version-history.md
Title: 'Azure Active Directory Application Proxy: Version release history'
description: This article lists all releases of Azure Active Directory Application Proxy and describes new features and fixed issues. -+
active-directory Application Proxy Remove Personal Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-remove-personal-data.md
Title: Remove personal data - Azure Active Directory Application Proxy
description: Remove personal data from connectors installed on devices for Azure Active Directory Application Proxy. -+
active-directory Application Proxy Secure Api Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-secure-api-access.md
Title: Access on-premises APIs with Azure Active Directory Application Proxy
description: Azure Active Directory's Application Proxy lets native apps securely access APIs and business logic you host on-premises or on cloud VMs. -+
active-directory Application Proxy Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-security.md
Title: Security considerations for Azure Active Directory Application Proxy
description: Covers security considerations for using Azure AD Application Proxy -+
active-directory Application Proxy Sign In Bad Gateway Timeout Error https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-sign-in-bad-gateway-timeout-error.md
Title: Can't access this Corporate Application error with Azure Active Directory
description: How to resolve common access issues with Azure Active Directory Application Proxy applications. -+
active-directory Application Proxy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-troubleshoot.md
Title: Troubleshoot Azure Active Directory Application Proxy
description: Covers how to troubleshoot errors in Azure Active Directory Application Proxy. -+
active-directory Application Proxy Understand Cors Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-understand-cors-issues.md
Title: Understand and solve Azure Active Directory Application Proxy CORS issues
description: Provides an understanding of CORS in Azure Active Directory Application Proxy, and how to identify and solve CORS issues. -+
active-directory Application Proxy Wildcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-wildcard.md
Title: Wildcard applications in Azure Active Directory Application Proxy
description: Learn how to use Wildcard applications in Azure Active Directory Application Proxy. -+
active-directory Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy.md
Title: Remote access to on-premises apps - Azure AD Application Proxy
description: Azure Active Directory's Application Proxy provides secure remote access to on-premises web applications. After a single sign-on to Azure AD, users can access both cloud and on-premises applications through an external URL or an internal application portal. For example, Application Proxy can provide remote access and single sign-on to Remote Desktop, SharePoint, Teams, Tableau, Qlik, and line of business (LOB) applications. -+
active-directory Application Sign In Problem On Premises Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-sign-in-problem-on-premises-application-proxy.md
Title: Problem signing in to on-premises app using Azure Active Directory Applic
description: Troubleshooting common issues faced when you are unable to sign in to an on-premises application integrated using the Azure Active Directory Application Proxy -+
active-directory Powershell Assign Group To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-group-to-app.md
Title: PowerShell sample - Assign group to an Azure Active Directory Application
description: PowerShell example that assigns a group to an Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory Powershell Assign User To App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-assign-user-to-app.md
Title: PowerShell sample - Assign user to an Azure Active Directory Application
description: PowerShell example that assigns a user to an Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory Powershell Display Users Group Of App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-display-users-group-of-app.md
Title: PowerShell sample - List users & groups for an Azure Active Directory App
description: PowerShell example that lists all the users and groups assigned to a specific Azure Active Directory (Azure AD) Application Proxy application. -+
active-directory Powershell Get All App Proxy Apps Basic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-basic.md
Title: PowerShell sample - List basic info for Application Proxy apps
description: PowerShell example that lists Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), and object ID (ObjId). -+
active-directory Powershell Get All App Proxy Apps By Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-by-connector-group.md
Title: List Azure Active Directory Application Proxy connector groups for apps
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy Connector groups with the assigned applications. -+
active-directory Powershell Get All App Proxy Apps Extended https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-extended.md
Title: PowerShell sample - List extended info for Azure Active Directory Applica
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications along with the application ID (AppId), name (DisplayName), external URL (ExternalUrl), internal URL (InternalUrl), and authentication type (ExternalAuthenticationType). -+
active-directory Powershell Get All App Proxy Apps With Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-app-proxy-apps-with-policy.md
Title: PowerShell sample - List all Azure Active Directory Application Proxy app
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications in your directory that have a lifetime token policy. -+
active-directory Powershell Get All Connectors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-connectors.md
Title: PowerShell sample - List all Azure Active Directory Application Proxy con
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy connector groups and connectors in your directory. -+
active-directory Powershell Get All Custom Domain No Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domain-no-cert.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps with no
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains but do not have a valid TLS/SSL certificate uploaded. -+
active-directory Powershell Get All Custom Domains And Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-custom-domains-and-certs.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps using c
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using custom domains and certificate information. -+
active-directory Powershell Get All Default Domain Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-default-domain-apps.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps using d
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using default domains (.msappproxy.net). -+
active-directory Powershell Get All Wildcard Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-all-wildcard-apps.md
Title: PowerShell sample - List Azure Active Directory Application Proxy apps us
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are using wildcards. -+
active-directory Powershell Get Custom Domain Identical Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-identical-cert.md
Title: PowerShell sample - Azure Active Directory Application Proxy apps with id
description: PowerShell example that lists all Azure Active Directory (Azure AD) Application Proxy applications that are published with the identical certificate. -+
active-directory Powershell Get Custom Domain Replace Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-get-custom-domain-replace-cert.md
Title: PowerShell sample - Replace certificate in Azure Active Directory Applica
description: PowerShell example that bulk replaces a certificate across Azure Active Directory (Azure AD) Application Proxy applications. -+
active-directory Powershell Move All Apps To Connector Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/scripts/powershell-move-all-apps-to-connector-group.md
Title: PowerShell sample - Move Azure Active Directory Application Proxy apps to
description: Azure Active Directory (Azure AD) Application Proxy PowerShell example used to move all applications currently assigned to a connector group to a different connector group. -+
active-directory What Is Application Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/what-is-application-proxy.md
Title: Publish on-premises apps with Azure Active Directory Application Proxy
description: Understand why to use Application Proxy to publish on-premises web applications externally to remote users. Learn about Application Proxy architecture, connectors, authentication methods, and security benefits. -+
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/whats-new-docs.md
-+ # Azure Active Directory application proxy: What's new
active-directory Concept Authentication Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-methods.md
Previously updated : 07/01/2021 Last updated : 08/17/2022
active-directory Concept Resilient Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-resilient-controls.md
Previously updated : 05/04/2022 Last updated : 08/17/2022
active-directory Concept Sspr Howitworks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-sspr-howitworks.md
Previously updated : 06/14/2021 Last updated : 08/17/2022
active-directory Howto Mfa Userdevicesettings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userdevicesettings.md
Previously updated : 11/04/2020 Last updated : 08/17/2022
active-directory Howto Mfa Userstates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-userstates.md
Previously updated : 06/01/2022 Last updated : 08/17/2022
active-directory Howto Password Ban Bad On Premises Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-password-ban-bad-on-premises-deploy.md
Previously updated : 03/05/2020 Last updated : 08/17/2022
active-directory Troubleshoot Sspr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/troubleshoot-sspr.md
Previously updated : 06/28/2021 Last updated : 08/17/2022
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
Here are examples of possible distributed caches:
services.Configure<MsalDistributedTokenCacheAdapterOptions>(options => { // Optional: Disable the L1 cache in apps that don't use session affinity
- // by setting DisableL1Cache to 'false'.
+ // by setting DisableL1Cache to 'true'.
options.DisableL1Cache = false; // Or limit the memory (by default, this is 500 MB)
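For orientation, here's a minimal sketch of backing the distributed (L2) token cache with Redis. It assumes the Microsoft.Identity.Web and Microsoft.Extensions.Caching.StackExchangeRedis packages; the connection string, instance name, and scope values are illustrative placeholders rather than values from this article.

```csharp
// Sketch: serialize MSAL tokens to a Redis-backed distributed (L2) cache,
// while keeping the in-memory L1 cache enabled. Placeholder values throughout.
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Identity.Web;
using Microsoft.Identity.Web.TokenCacheProviders.Distributed;

public class Startup
{
    public Startup(IConfiguration configuration) => Configuration = configuration;
    public IConfiguration Configuration { get; }

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddMicrosoftIdentityWebAppAuthentication(Configuration)
            .EnableTokenAcquisitionToCallDownstreamApi(new[] { "user.read" })
            .AddDistributedTokenCaches();

        // Back the distributed token cache with Redis (placeholder connection values).
        services.AddStackExchangeRedisCache(options =>
        {
            options.Configuration = "localhost:6379";
            options.InstanceName = "MsalTokenCache";
        });

        services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
        {
            options.DisableL1Cache = false;                        // keep the L1 (memory) cache
            options.L1CacheOptions.SizeLimit = 500 * 1024 * 1024;  // limit L1 to ~500 MB
        });
    }
}
```

As the corrected comment above notes, an app that doesn't use session affinity could instead set `DisableL1Cache = true` so every instance reads tokens from the shared L2 cache.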
active-directory Single Page App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/single-page-app-quickstart.md
Title: "Quickstart: Sign in users in single-page apps (SPA) with auth code"
+ Title: "Quickstart: Sign in users in single-page apps (SPA) with authorization code"
description: In this quickstart, learn how a JavaScript single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
Previously updated : 12/06/2021 Last updated : 08/17/2022 zone_pivot_groups: single-page-app-quickstart #Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my single-page app can sign in users of personal accounts, work accounts, and school accounts.
-# Quickstart: Sign in users in single-page apps (SPA) via the auth code flow
+# Quickstart: Sign in users in single-page apps (SPA) via the authorization code flow
::: zone pivot="devlang-angular" [!INCLUDE [angular](./includes/single-page-app/quickstart-angular.md)]
active-directory Spa Quickstart Portal Javascript Auth Code Angular https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-angular.md
+
+ Title: "Quickstart: Sign in users in JavaScript Angular single-page apps (SPA) with auth code and call Microsoft Graph"
+description: In this quickstart, learn how a JavaScript Angular single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my JavaScript Angular app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow
++
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Angular single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-angular)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Sign in and get an access token in an Angular SPA using the auth code flow
+>
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript Angular single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> This quickstart uses MSAL Angular v2 with the authorization code flow.
+>
+> ## Prerequisites
+>
+> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, add a **Redirect URI** of `http://localhost:4200/`.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+>
+> #### Step 3: Your app is configured and ready to run
+>
+> We have configured your project with the values of your app's properties.
+>
+> #### Step 4: Run the project
+>
+> Run the project with a web server by using Node.js:
+>
+> 1. To start the server, run the following commands from within the project directory:
+> ```console
+> npm install
+> npm start
+> ```
+> 1. Browse to `http://localhost:4200/`.
+>
+> 1. Select **Login** to start the sign-in process and then call the Microsoft Graph API.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click the **Profile** button to display your user information on the page.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> ![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+>
+> ### msal.js
+>
+> The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+>
+> If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-browser @azure/msal-angular@2
+> ```
+>
+> ## Next steps
+>
+> For a detailed step-by-step guide on building the auth code flow application using vanilla JavaScript, see the following tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-auth-code.md)
active-directory Spa Quickstart Portal Javascript Auth Code React https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code-react.md
+
+ Title: "Quickstart: Sign in users in JavaScript React single-page apps (SPA) with auth code and call Microsoft Graph"
+description: In this quickstart, learn how a JavaScript React single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow and call Microsoft Graph.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an app developer, I want to learn how to login, logout, conditionally render components to authenticated users, and acquire an access token for a protected resource such as Microsoft Graph by using the Microsoft identity platform so that my JavaScript React app can sign in users of personal accounts, work accounts, and school accounts.
+
+> # Quickstart: Sign in and get an access token in a React SPA using the auth code flow
++
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: React single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-react)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Sign in and get an access token in a React SPA using the auth code flow
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript React single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow. The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Prerequisites
+>
+> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure your application in the Azure portal
+>
+> This code sample requires a **Redirect URI** of `http://localhost:3000/`.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+>
+> #### Step 3: Your app is configured and ready to run
+> We have configured your project with the values of your app's properties.
+>
+> #### Step 4: Run the project
+>
+> Run the project with a web server by using Node.js:
+>
+> 1. To start the server, run the following commands from within the project directory:
+> ```console
+> npm install
+> npm start
+> ```
+> 1. Browse to `http://localhost:3000/`.
+>
+> 1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, click the **Request Profile Information** button to display your profile information on the page.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> ![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+>
+> ### msal.js
+>
+> The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform.
+>
+> If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-browser @azure/msal-react
+> ```
+>
+> ## Next steps
+>
+> Next, try a step-by-step tutorial to learn how to build a React SPA from scratch that signs in users and calls the Microsoft Graph API to get user profile data:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call Microsoft Graph from a React single-page app](tutorial-v2-react.md)
active-directory Spa Quickstart Portal Javascript Auth Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/spa-quickstart-portal-javascript-auth-code.md
+
+ Title: "Quickstart: Sign in users in JavaScript single-page apps (SPA) with auth code"
+description: In this quickstart, learn how a JavaScript single-page application (SPA) can sign in users of personal accounts, work accounts, and school accounts by using the authorization code flow.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an app developer, I want to learn how to get access tokens and refresh tokens by using the Microsoft identity platform so that my JavaScript app can sign in users of personal accounts, work accounts, and school accounts.
++
+# Quickstart: Sign in users and get an access token in a JavaScript SPA using the auth code flow with PKCE
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: JavaScript single-page app with user sign-in](single-page-app-quickstart.md?pivots=devlang-javascript)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Sign in users and get an access token in a JavaScript SPA using the auth code flow with PKCE
+>
+> In this quickstart, you download and run a code sample that demonstrates how a JavaScript single-page application (SPA) can sign in users and call Microsoft Graph using the authorization code flow with Proof Key for Code Exchange (PKCE). The code sample demonstrates how to get an access token to call the Microsoft Graph API or any web API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Prerequisites
+>
+> * Azure subscription - [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+>
+> ### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, add a **Redirect URI** of `http://localhost:3000/`.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-javascript/green-check.png) Your application is configured with these attributes.
+>
+> ### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Your app is configured and ready to run
+>
+> We have configured your project with the values of your app's properties.
+>
+> Run the project with a web server by using Node.js.
+>
+> 1. To start the server, run the following commands from within the project directory:
+>
+> ```console
+> npm install
+> npm start
+> ```
+>
+> 1. Go to `http://localhost:3000/`.
+>
+> 1. Select **Sign In** to start the sign-in process and then call the Microsoft Graph API.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, your user profile information is displayed on the page.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> ![Diagram showing the authorization code flow for a single-page application.](media/quickstart-v2-javascript-auth-code/diagram-01-auth-code-flow.png)
+>
+> ### MSAL.js
+>
+> The MSAL.js library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform. The sample's *index.html* file contains a reference to the library:
+>
+> ```html
+> <script type="text/javascript" src="https://alcdn.msauth.net/browser/2.0.0-beta.0/js/msal-browser.js" integrity=
+> "sha384-r7Qxfs6PYHyfoBR6zG62DGzptfLBxnREThAlcJyEfzJ4dq5rqExc1Xj3TPFE/9TH" crossorigin="anonymous"></script>
+> ```
+>
+> If you have Node.js installed, you can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-browser
+> ```
+>
+> ## Next steps
+>
+> For a more detailed step-by-step guide on building the application used in this quickstart, see the following tutorial:
+>
+> > [!div class="nextstepaction"]
+> > [Tutorial: Sign in users and call Microsoft Graph](./tutorial-v2-javascript-auth-code.md)
active-directory Web Api Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-aspnet-core.md
+
+ Title: "Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform"
+description: In this quickstart, you download and modify a code sample that demonstrates how to protect an ASP.NET Core web API by using the Microsoft identity platform for authorization.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an application developer, I want to know how to write an ASP.NET Core web API that uses the Microsoft identity platform to authorize API requests from clients.
++
+# Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Protect an ASP.NET Core web API](web-api-quickstart.md?pivots=devlang-aspnet-core)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Protect an ASP.NET Core web API with the Microsoft identity platform
+>
+> In this quickstart, you download an ASP.NET Core web API code sample and review the way it restricts resource access to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
+>
+> ## Prerequisites
+>
+> - Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> - [Azure Active Directory tenant](quickstart-create-new-tenant.md)
+> - [.NET Core SDK 3.1+](https://dotnet.microsoft.com/)
+> - [Visual Studio 2019](https://visualstudio.microsoft.com/vs/) or [Visual Studio Code](https://code.visualstudio.com/)
+>
+> ## Step 1: Register the application
+>
+> First, register the web API in your Azure AD tenant and add a scope by following these steps:
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com/).
+> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+> 1. Search for and select **Azure Active Directory**.
+> 1. Under **Manage**, select **App registrations** > **New registration**.
+> 1. For **Name**, enter a name for your application. For example, enter **AspNetCoreWebApi-Quickstart**. Users of your app will see this name, and you can change it later.
+> 1. Select **Register**.
+> 1. Under **Manage**, select **Expose an API** > **Add a scope**. For **Application ID URI**, accept the default by selecting **Save and continue**, and then enter the following details:
+> - **Scope name**: `access_as_user`
+> - **Who can consent?**: **Admins and users**
+> - **Admin consent display name**: `Access AspNetCoreWebApi-Quickstart`
+> - **Admin consent description**: `Allows the app to access AspNetCoreWebApi-Quickstart as the signed-in user.`
+> - **User consent display name**: `Access AspNetCoreWebApi-Quickstart`
+> - **User consent description**: `Allow the application to access AspNetCoreWebApi-Quickstart on your behalf.`
+> - **State**: **Enabled**
+> 1. Select **Add scope** to complete the scope addition.
+>
+> ## Step 2: Download the ASP.NET Core project
+>
+> [Download the ASP.NET Core solution](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2/archive/aspnetcore3-1.zip) from GitHub.
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+> ## Step 3: Configure the ASP.NET Core project
+>
+> In this step, configure the sample code to work with the app registration that you created earlier.
+>
+> 1. Extract the .zip archive into a folder near the root of your drive. For example, extract into *C:\Azure-Samples*.
+>
+> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+>
+> 1. Open the solution in the *webapi* folder in your code editor.
+> 1. Open the *appsettings.json* file and modify the following code:
+>
+> ```json
+> "ClientId": "Enter_the_Application_Id_here",
+> "TenantId": "Enter_the_Tenant_Info_Here"
+> ```
+>
+> - Replace `Enter_the_Application_Id_here` with the application (client) ID of the application that you registered in the Azure portal. You can find the application (client) ID on the app's **Overview** page.
+> - Replace `Enter_the_Tenant_Info_Here` with one of the following:
+> - If your application supports **Accounts in this organizational directory only**, replace this value with the directory (tenant) ID (a GUID) or tenant name (for example, `contoso.onmicrosoft.com`). You can find the directory (tenant) ID on the app's **Overview** page.
+> - If your application supports **Accounts in any organizational directory**, replace this value with `organizations`.
+> - If your application supports **All Microsoft account users**, replace this value with `common`.
+>
+> For this quickstart, don't change any other values in the *appsettings.json* file.
+>
+> ## How the sample works
+>
+> The web API receives a token from a client application, and the code in the web API validates the token. This scenario is explained in more detail in [Scenario: Protected web API](scenario-protected-web-api-overview.md).
+>
+> ### Startup class
+>
+> The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's executed when the hosting process starts. In its `ConfigureServices` method, the `AddMicrosoftIdentityWebApi` extension method provided by *Microsoft.Identity.Web* is called.
+>
+> ```csharp
+> public void ConfigureServices(IServiceCollection services)
+> {
+> services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
+> .AddMicrosoftIdentityWebApi(Configuration, "AzureAd");
+> }
+> ```
+>
+> The `AddAuthentication()` method configures the service to add JwtBearer-based authentication.
+>
+> The line that contains `.AddMicrosoftIdentityWebApi` adds the Microsoft identity platform authorization to your web API. It's then configured to validate access tokens issued by the Microsoft identity platform based on the information in the `AzureAd` section of the *appsettings.json* configuration file:
+>
+> | *appsettings.json* key | Description |
+> ||-|
+> | `ClientId` | Application (client) ID of the application registered in the Azure portal. |
+> | `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
+> | `TenantId` | Name of your tenant or its tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
+>
+> The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality:
+>
+> ```csharp
+> // The runtime calls this method. Use this method to configure the HTTP request pipeline.
+> public void Configure(IApplicationBuilder app, IHostingEnvironment env)
+> {
+> // more code
+> app.UseAuthentication();
+> app.UseAuthorization();
+> // more code
+> }
+> ```
+>
+> ### Protecting a controller, a controller's method, or a Razor page
+>
+> You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can be started to access the controller if the user isn't authenticated.
+>
+> ```csharp
+> namespace webapi.Controllers
+> {
+> [Authorize]
+> [ApiController]
+> [Route("[controller]")]
+> public class WeatherForecastController : ControllerBase
+> ```
+>
+> ### Validation of scope in the controller
+>
+> The code in the API verifies that the required scopes are in the token by using `HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);`:
+>
+> ```csharp
+> namespace webapi.Controllers
+> {
+> [Authorize]
+> [ApiController]
+> [Route("[controller]")]
+> public class WeatherForecastController : ControllerBase
+> {
+> // The web API will only accept tokens 1) for users, and 2) having the "access_as_user" scope for this API
+> static readonly string[] scopeRequiredByApi = new string[] { "access_as_user" };
+>
+> [HttpGet]
+> public IEnumerable<WeatherForecast> Get()
+> {
+> HttpContext.VerifyUserHasAnyAcceptedScope(scopeRequiredByApi);
+>
+> // some code here
+> }
+> }
+> }
+> ```
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> The GitHub repository that contains this ASP.NET Core web API code sample includes instructions and more code samples that show you how to:
+>
+> - Add authentication to a new ASP.NET Core web API.
+> - Call the web API from a desktop application.
+> - Call downstream APIs like Microsoft Graph and other Microsoft APIs (see the sketch after the link below).
+>
+> > [!div class="nextstepaction"]
+> > [ASP.NET Core web API tutorials on GitHub](https://github.com/Azure-Samples/active-directory-dotnet-native-aspnetcore-v2)
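> As a rough, hypothetical sketch (not the sample's actual code) of the "call downstream APIs" item above, a protected controller can exchange the caller's token for a Microsoft Graph token on behalf of the user with Microsoft.Identity.Web. The controller name, route, and scope below are illustrative, and `Startup` would also need `.EnableTokenAcquisitionToCallDownstreamApi()` plus a token cache registration (for example `.AddInMemoryTokenCaches()`).
>
> ```csharp
> using System.Net.Http;
> using System.Net.Http.Headers;
> using System.Threading.Tasks;
> using Microsoft.AspNetCore.Authorization;
> using Microsoft.AspNetCore.Mvc;
> using Microsoft.Identity.Web;
>
> [Authorize]
> [ApiController]
> [Route("[controller]")]
> public class GraphProfileController : ControllerBase
> {
>     private readonly ITokenAcquisition _tokenAcquisition;
>
>     public GraphProfileController(ITokenAcquisition tokenAcquisition) =>
>         _tokenAcquisition = tokenAcquisition;
>
>     [HttpGet]
>     public async Task<string> GetAsync()
>     {
>         // Exchange the incoming API token for a Graph token (on-behalf-of flow).
>         string accessToken = await _tokenAcquisition.GetAccessTokenForUserAsync(
>             new[] { "User.Read" });
>
>         using var client = new HttpClient();
>         client.DefaultRequestHeaders.Authorization =
>             new AuthenticationHeaderValue("Bearer", accessToken);
>         return await client.GetStringAsync("https://graph.microsoft.com/v1.0/me");
>     }
> }
> ```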
active-directory Web Api Quickstart Portal Dotnet Native Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-api-quickstart-portal-dotnet-native-aspnet.md
+
+ Title: "Quickstart: Call an ASP.NET web API that is protected by the Microsoft identity platform"
+description: In this quickstart, learn how to call an ASP.NET web API that's protected by the Microsoft identity platform from a Windows Desktop (WPF) application.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an application developer, I want to know how to call an ASP.NET web API that's protected by the Microsoft identity platform from a Windows Desktop (WPF) application.
++
+# Quickstart: Call an ASP.NET web API that's protected by Microsoft identity platform
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Call a protected ASP.NET web API](web-api-quickstart.md?pivots=devlang-aspnet)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Call an ASP.NET web API that's protected by Microsoft identity platform
+>
+> In this quickstart, you download and run a code sample that demonstrates how to protect an ASP.NET web API by restricting access to its resources to authorized accounts only. The sample supports authorization of personal Microsoft accounts and accounts in any Azure Active Directory (Azure AD) organization.
+>
+> The article also uses a Windows Presentation Foundation (WPF) app to demonstrate how you can request an access token to access a web API.
+>
+> ## Prerequisites
+>
+> * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * Visual Studio 2017 or 2019. Download [Visual Studio for free](https://www.visualstudio.com/downloads/).
+>
+> ## Clone or download the sample
+>
+> You can obtain the sample in either of two ways:
+>
+> * Clone it from your shell or command line:
+>
+> ```console
+> git clone https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet.git
+> ```
+>
+> * [Download it as a ZIP file](https://github.com/AzureADQuickStarts/AppModelv2-NativeClient-DotNet/archive/complete.zip).
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+> ## Register the web API (TodoListService)
+>
+> Register your web API in **App registrations** in the Azure portal.
+>
+> 1. Sign in to the [Azure portal](https://portal.azure.com/).
+> 1. If you have access to multiple tenants, use the **Directory + subscription** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to select the tenant in which you want to register an application.
+> 1. Find and select **Azure Active Directory**.
+> 1. Under **Manage**, select **App registrations** > **New registration**.
+> 1. Enter a **Name** for your application, for example `AppModelv2-NativeClient-DotNet-TodoListService`. Users of your app might see this name, and you can change it later.
+> 1. For **Supported account types**, select **Accounts in any organizational directory**.
+> 1. Select **Register** to create the application.
+> 1. On the app **Overview** page, look for the **Application (client) ID** value, and then record it for later use. You'll need it to configure the Visual Studio configuration file for this project (that is, `ClientId` in the *TodoListService\Web.config* file).
+> 1. Under **Manage**, select **Expose an API** > **Add a scope**. Accept the proposed Application ID URI (`api://{clientId}`) by selecting **Save and continue**, and then enter the following information:
+>
+> 1. For **Scope name**, enter `access_as_user`.
+> 1. For **Who can consent**, ensure that the **Admins and users** option is selected.
+> 1. In the **Admin consent display name** box, enter `Access TodoListService as a user`.
+> 1. In the **Admin consent description** box, enter `Accesses the TodoListService web API as a user`.
+> 1. In the **User consent display name** box, enter `Access TodoListService as a user`.
+> 1. In the **User consent description** box, enter `Accesses the TodoListService web API as a user`.
+> 1. For **State**, keep **Enabled**.
+> 1. Select **Add scope**.
+>
+> ### Configure the service project
+>
+> Configure the service project to match the registered web API.
+>
+> 1. Open the solution in Visual Studio, and then open the *Web.config* file under the root of the TodoListService project.
+>
+> 1. Replace the value of the `ida:ClientId` parameter with the Client ID (Application ID) value from the application you registered in the **App registrations** portal.
+>
+> ### Add the new scope to the app.config file
+>
+> To add the new scope to the TodoListClient *app.config* file, follow these steps:
+>
+> 1. In the TodoListClient project root folder, open the *app.config* file.
+>
+> 1. Paste the Application ID from the application that you registered for your TodoListService project in the `TodoListServiceScope` parameter, replacing the `{Enter the Application ID of your TodoListService from the app registration portal}` string.
+>
+> > [!NOTE]
+> > Make sure that the Application ID uses the following format: `api://{TodoListService-Application-ID}/access_as_user` (where `{TodoListService-Application-ID}` is the GUID representing the Application ID for your TodoListService app).
+>
+> ## Register the web app (TodoListClient)
+>
+> Register your TodoListClient app in **App registrations** in the Azure portal, and then configure the code in the TodoListClient project. If the client and server are considered the same application, you can reuse the application that's registered in step 2. Use the same application if you want users to sign in with a personal Microsoft account.
+>
+> ### Register the app
+>
+> To register the TodoListClient app, follow these steps:
+>
+> 1. Go to the Microsoft identity platform for developers [App registrations](https://go.microsoft.com/fwlink/?linkid=2083908) portal.
+> 1. Select **New registration**.
+> 1. When the **Register an application** page opens, enter your application's registration information:
+>
+> 1. In the **Name** section, enter a meaningful application name that will be displayed to users of the app (for example, **NativeClient-DotNet-TodoListClient**).
+> 1. For **Supported account types**, select **Accounts in any organizational directory**.
+> 1. Select **Register** to create the application.
+>
+> > [!NOTE]
+> > In the TodoListClient project *app.config* file, the default value of `ida:Tenant` is set to `common`. The possible values are:
+> >
+> > - `common`: You can sign in by using a work or school account or a personal Microsoft account (because you selected **Accounts in any organizational directory** in a previous step).
+> > - `organizations`: You can sign in by using a work or school account.
+> > - `consumers`: You can sign in only by using a Microsoft personal account.
+>
+> 1. On the app **Overview** page, select **Authentication**, and then complete these steps to add a platform:
+>
+> 1. Under **Platform configurations**, select the **Add a platform** button.
+> 1. For **Mobile and desktop applications**, select **Mobile and desktop applications**.
+> 1. For **Redirect URIs**, select the `https://login.microsoftonline.com/common/oauth2/nativeclient` check box.
+> 1. Select **Configure**.
+>
+> 1. Select **API permissions**, and then complete these steps to add permissions:
+>
+> 1. Select the **Add a permission** button.
+> 1. Select the **My APIs** tab.
+> 1. In the list of APIs, select **AppModelv2-NativeClient-DotNet-TodoListService API** or the name you entered for the web API.
+> 1. Select the **access_as_user** permission check box if it's not already selected. Use the Search box if necessary.
+> 1. Select the **Add permissions** button.
+>
+> ### Configure your project
+>
+> Configure your TodoListClient project by adding the Application ID to the *app.config* file.
+>
+> 1. In the **App registrations** portal, on the **Overview** page, copy the value of the **Application (client) ID**.
+>
+> 1. From the TodoListClient project root folder, open the *app.config* file, and then paste the Application ID value in the `ida:ClientId` parameter.
+>
+> ## Run your projects
+>
+> Start both projects. If you're using Visual Studio:
+>
+> 1. Right-click the Visual Studio solution and select **Properties**.
+>
+> 1. Under **Common Properties**, select **Startup Project**, and then select **Multiple startup projects**.
+>
+> 1. For both projects, choose **Start** as the action.
+>
+> 1. Ensure that the TodoListService service starts first by moving it to the first position in the list, using the up arrow.
+>
+> Sign in to run your TodoListClient project.
+>
+> 1. Press F5 to start the projects. The service page opens, as well as the desktop application.
+>
+> 1. In the TodoListClient, at the upper right, select **Sign in**, and then sign in with the same credentials you used to register your application, or sign in as a user in the same directory.
+>
+> If you're signing in for the first time, you might be prompted to consent to the TodoListService web API.
+>
+> To help you access the TodoListService web API and manipulate the *To-Do* list, the sign-in also requests an access token for the *access_as_user* scope.
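+>
+> A desktop client typically acquires such a token with MSAL.NET (the *Microsoft.Identity.Client* package). The following is only a hedged sketch of that pattern, not the sample's exact code; the placeholder IDs and the helper name are hypothetical, and the sample itself reads its configuration from *app.config*.
+>
+> ```csharp
+> using System.Linq;
+> using System.Threading.Tasks;
+> using Microsoft.Identity.Client;
+>
+> public static class TokenHelper
+> {
+>     // Hypothetical helper for illustration only.
+>     public static async Task<string> GetAccessTokenAsync()
+>     {
+>         var app = PublicClientApplicationBuilder.Create("{TodoListClient-Application-ID}")
+>             .WithRedirectUri("https://login.microsoftonline.com/common/oauth2/nativeclient")
+>             .Build();
+>
+>         string[] scopes = { "api://{TodoListService-Application-ID}/access_as_user" };
+>         var accounts = await app.GetAccountsAsync();
+>
+>         try
+>         {
+>             // Try the token cache first.
+>             var result = await app.AcquireTokenSilent(scopes, accounts.FirstOrDefault()).ExecuteAsync();
+>             return result.AccessToken;
+>         }
+>         catch (MsalUiRequiredException)
+>         {
+>             // Fall back to an interactive sign-in prompt.
+>             var result = await app.AcquireTokenInteractive(scopes).ExecuteAsync();
+>             return result.AccessToken;
+>         }
+>     }
+> }
+> ```
+>
+> The returned access token is then attached as a bearer token on calls to the TodoListService web API.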
+>
+> ## Pre-authorize your client application
+>
+> You can allow users from other directories to access your web API by pre-authorizing the client application to access your web API. You do this by adding the Application ID from the client app to the list of pre-authorized applications for your web API. By adding a pre-authorized client, you're allowing users to access your web API without having to provide consent.
+>
+> 1. In the **App registrations** portal, open the properties of your TodoListService app.
+> 1. In the **Expose an API** section, under **Authorized client applications**, select **Add a client application**.
+> 1. In the **Client ID** box, paste the Application ID of the TodoListClient app.
+> 1. In the **Authorized scopes** section, select the scope for the `api://<Application ID>/access_as_user` web API.
+> 1. Select **Add application**.
+>
+> ### Run your project
+>
+> 1. Press <kbd>F5</kbd> to run your project. Your TodoListClient app opens.
+> 1. At the upper right, select **Sign in**, and then sign in by using a personal Microsoft account, such as a *live.com* or *hotmail.com* account, or a work or school account.
+>
+> ## Optional: Limit sign-in access to certain users
+>
+> By default, any personal accounts, such as *outlook.com* or *live.com* accounts, or work or school accounts from organizations that are integrated with Azure AD can request tokens and access your web API.
+>
+> To specify who can sign in to your application, use one of the following options:
+>
+> ### Option 1: Limit access to a single organization (single tenant)
+>
+> You can limit sign-in access to your application to user accounts that are in a single Azure AD tenant, including guest accounts of that tenant. This scenario is common for line-of-business applications.
+>
+> 1. Open the *App_Start\Startup.Auth* file, and then change the value of the metadata endpoint that's passed into the `OpenIdConnectSecurityTokenProvider` to `https://login.microsoftonline.com/{Tenant ID}/v2.0/.well-known/openid-configuration`. You can also use the tenant name, such as `contoso.onmicrosoft.com`.
+> 1. In the same file, set the `ValidIssuer` property on the `TokenValidationParameters` to `https://sts.windows.net/{Tenant ID}/`, and set the `ValidateIssuer` argument to `true`.
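+>
+> As a hedged sketch only (not the sample's exact file), the two values these steps change might end up looking like the following; adapt it to the actual shape of your *Startup.Auth* file:
+>
+> ```csharp
+> using Microsoft.IdentityModel.Tokens;
+>
+> // Single-tenant metadata endpoint; a tenant name such as contoso.onmicrosoft.com also works.
+> string metadataEndpoint =
+>     "https://login.microsoftonline.com/{Tenant ID}/v2.0/.well-known/openid-configuration";
+>
+> var tokenValidationParameters = new TokenValidationParameters
+> {
+>     ValidateIssuer = true,                                // reject tokens from other tenants
+>     ValidIssuer = "https://sts.windows.net/{Tenant ID}/"  // only accept tokens issued by this tenant
+> };
+> ```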
+>
+> ### Option 2: Use a custom method to validate issuers
+>
+> You can implement a custom method to validate issuers by using the `IssuerValidator` parameter. For more information about this parameter, see [TokenValidationParameters class](/dotnet/api/microsoft.identitymodel.tokens.tokenvalidationparameters).
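+>
+> As a rough sketch (the allow-list and its contents here are hypothetical, not part of the sample), a custom validator is a delegate on `TokenValidationParameters` that returns the issuer when it's acceptable and throws otherwise:
+>
+> ```csharp
+> using System.Collections.Generic;
+> using Microsoft.IdentityModel.Tokens;
+>
+> // Hypothetical allow-list of tenant issuers, for illustration only.
+> var allowedIssuers = new HashSet<string>
+> {
+>     "https://sts.windows.net/{Tenant ID 1}/",
+>     "https://sts.windows.net/{Tenant ID 2}/"
+> };
+>
+> var tokenValidationParameters = new TokenValidationParameters
+> {
+>     IssuerValidator = (issuer, securityToken, validationParameters) =>
+>     {
+>         // Return the issuer to accept the token; throw to reject it.
+>         if (allowedIssuers.Contains(issuer))
+>         {
+>             return issuer;
+>         }
+>
+>         throw new SecurityTokenInvalidIssuerException($"Issuer '{issuer}' is not trusted.");
+>     }
+> };
+> ```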
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Learn more about the protected web API scenario that the Microsoft identity platform supports.
+> > [!div class="nextstepaction"]
+> > [Protected web API scenario](scenario-protected-web-api-overview.md)
active-directory Web App Quickstart Portal Aspnet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet-core.md
+
+ Title: "Quickstart: Add sign-in with Microsoft Identity to an ASP.NET Core web app"
+description: In this quickstart, you learn how an app implements Microsoft sign-in on an ASP.NET Core web app by using OpenID Connect
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an application developer, I want to know how to write an ASP.NET Core web app that can sign in personal accounts, as well as work and school accounts, from any Azure Active Directory instance.
++
+# Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app
++
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: ASP.NET Core web app with user sign-in](web-app-quickstart.md?pivots=devlang-aspnet-core)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Add sign-in with Microsoft to an ASP.NET Core web app
+>
+> In this quickstart, you download and run a code sample that demonstrates how an ASP.NET Core web app can sign in users from any Azure Active Directory (Azure AD) organization.
+>
+> ### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work:
+> - For **Redirect URI**, enter **https://localhost:44321/** and **https://localhost:44321/signin-oidc**.
+> - For **Front-channel logout URL**, enter **https://localhost:44321/signout-oidc**.
+>
+> The authorization endpoint will issue ID tokens.
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+>
+> ### Step 2: Download the ASP.NET Core project
+>
+> Run the project.
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> #### Step 3: Your app is configured and ready to run
+> We've configured your project with values of your app's properties, and it's ready to run.
+>
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> ## More information
+>
+> This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET Core application.
+>
+> > [!div class="sxs-lookup"]
+> > ### How the sample works
+> >
+> > ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-core-webapp/aspnetcorewebapp-intro.svg)
+>
+> ### Startup class
+>
+> The *Microsoft.AspNetCore.Authentication* middleware uses a `Startup` class that's run when the hosting process starts:
+>
+> ```csharp
+> public void ConfigureServices(IServiceCollection services)
+> {
+> services.AddAuthentication(OpenIdConnectDefaults.AuthenticationScheme)
+> .AddMicrosoftIdentityWebApp(Configuration.GetSection("AzureAd"));
+>
+> services.AddControllersWithViews(options =>
+> {
+> var policy = new AuthorizationPolicyBuilder()
+> .RequireAuthenticatedUser()
+> .Build();
+> options.Filters.Add(new AuthorizeFilter(policy));
+> });
+> services.AddRazorPages()
+> .AddMicrosoftIdentityUI();
+> }
+> ```
+>
+> The `AddAuthentication()` method configures the service to add cookie-based authentication. This authentication is used in browser scenarios and to set the challenge to OpenID Connect.
+>
+> The line that contains `.AddMicrosoftIdentityWebApp` adds Microsoft identity platform authentication to your application. The application is then configured to sign in users based on the following information in the `AzureAd` section of the *appsettings.json* configuration file:
+>
+> | *appsettings.json* key | Description |
+> ||-|
+> | `ClientId` | Application (client) ID of the application registered in the Azure portal. |
+> | `Instance` | Security token service (STS) endpoint for the user to authenticate. This value is typically `https://login.microsoftonline.com/`, indicating the Azure public cloud. |
+> | `TenantId` | Name of your tenant or the tenant ID (a GUID), or `common` to sign in users with work or school accounts or Microsoft personal accounts. |
+>
+> The `Configure()` method contains two important methods, `app.UseAuthentication()` and `app.UseAuthorization()`, that enable their named functionality. Also in the `Configure()` method, you must register Microsoft Identity Web routes with at least one call to `endpoints.MapControllerRoute()` or a call to `endpoints.MapControllers()`:
+>
+> ```csharp
+> app.UseAuthentication();
+> app.UseAuthorization();
+>
+> app.UseEndpoints(endpoints =>
+> {
+> endpoints.MapControllerRoute(
+> name: "default",
+> pattern: "{controller=Home}/{action=Index}/{id?}");
+> endpoints.MapRazorPages();
+> });
+> ```
+>
+> ### Attribute for protecting a controller or methods
+>
+> You can protect a controller or controller methods by using the `[Authorize]` attribute. This attribute restricts access to the controller or methods by allowing only authenticated users. An authentication challenge can then be started to access the controller if the user isn't authenticated.
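+>
+> For example, a minimal controller protected this way might look like the following sketch (the controller and action names are illustrative, not part of the downloaded sample):
+>
+> ```csharp
+> using Microsoft.AspNetCore.Authorization;
+> using Microsoft.AspNetCore.Mvc;
+>
+> [Authorize]  // every action on this controller requires a signed-in user
+> public class ProfileController : Controller
+> {
+>     public IActionResult Index()
+>     {
+>         // Only authenticated users reach this action; unauthenticated
+>         // requests are challenged through OpenID Connect.
+>         return View();
+>     }
+>
+>     [AllowAnonymous]  // opt a single action out of the requirement
+>     public IActionResult Public() => View();
+> }
+> ```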
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> The GitHub repo that contains this ASP.NET Core tutorial includes instructions and more code samples that show you how to:
+>
+> - Add authentication to a new ASP.NET Core web application.
+> - Call Microsoft Graph, other Microsoft APIs, or your own web APIs.
+> - Add authorization.
+> - Sign in users in national clouds or with social identities.
+>
+> > [!div class="nextstepaction"]
+> > [ASP.NET Core web app tutorials on GitHub](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/)
active-directory Web App Quickstart Portal Aspnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-aspnet.md
+
+ Title: "Quickstart: ASP.NET web app that signs in users"
+description: Download and run a code sample that shows how an ASP.NET web app can sign in Azure AD users.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an application developer, I want to see a sample ASP.NET web app that can sign in Azure AD users.
++
+# Quickstart: ASP.NET web app that signs in Azure AD users
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: ASP.NET web app that signs in users](web-app-quickstart.md?pivots=devlang-aspnet)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: ASP.NET web app that signs in Azure AD users
+>
+> In this quickstart, you download and run a code sample that demonstrates an ASP.NET web application that can sign in users with Azure Active Directory (Azure AD) accounts.
+>
+> #### Step 1: Configure your application in the Azure portal
+> For the code sample in this quickstart to work, enter **https://localhost:44368/** for **Redirect URI**.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute.
+>
+> #### Step 2: Download the project
+>
+> Run the project by using Visual Studio 2019.
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> [!INCLUDE [active-directory-develop-path-length-tip](../../../includes/active-directory-develop-path-length-tip.md)]
+>
+>
+> #### Step 3: Your app is configured and ready to run
+> We've configured your project with values of your app's properties.
+>
+> 1. Extract the .zip file to a local folder that's close to the root folder. For example, extract to *C:\Azure-Samples*.
+>
+> We recommend extracting the archive into a directory near the root of your drive to avoid errors caused by path length limitations on Windows.
+> 2. Open the solution in Visual Studio (*AppModelv2-WebApp-OpenIDConnect-DotNet.sln*).
+> 3. Depending on the version of Visual Studio, you might need to right-click the project > **AppModelv2-WebApp-OpenIDConnect-DotNet** and then select **Restore NuGet packages**.
+> 4. Open the Package Manager Console by selecting **View** > **Other Windows** > **Package Manager Console**. Then run `Update-Package Microsoft.CodeDom.Providers.DotNetCompilerPlatform -r`.
+>
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> ## More information
+>
+> This section gives an overview of the code required to sign in users. This overview can be useful to understand how the code works, what the main arguments are, and how to add sign-in to an existing ASP.NET application.
+>
+>
+> ### How the sample works
+>
+> ![Diagram of the interaction between the web browser, the web app, and the Microsoft identity platform in the sample app.](media/quickstart-v2-aspnet-webapp/aspnetwebapp-intro.svg)
+>
+> ### OWIN middleware NuGet packages
+>
+> You can set up the authentication pipeline with cookie-based authentication by using OpenID Connect in ASP.NET with OWIN middleware packages. You can install these packages by running the following commands in Package Manager Console within Visual Studio:
+>
+> ```powershell
+> Install-Package Microsoft.Owin.Security.OpenIdConnect
+> Install-Package Microsoft.Owin.Security.Cookies
+> Install-Package Microsoft.Owin.Host.SystemWeb
+> ```
+>
+> ### OWIN startup class
+>
+> The OWIN middleware uses a *startup class* that runs when the hosting process starts. In this quickstart, the *startup.cs* file is in the root folder. The following code shows the parameters that this quickstart uses:
+>
+> ```csharp
+> public void Configuration(IAppBuilder app)
+> {
+> app.SetDefaultSignInAsAuthenticationType(CookieAuthenticationDefaults.AuthenticationType);
+>
+> app.UseCookieAuthentication(new CookieAuthenticationOptions());
+> app.UseOpenIdConnectAuthentication(
+> new OpenIdConnectAuthenticationOptions
+> {
+> // Sets the client ID, authority, and redirect URI as obtained from Web.config
+> ClientId = clientId,
+> Authority = authority,
+> RedirectUri = redirectUri,
+> // PostLogoutRedirectUri is the page that users will be redirected to after sign-out. In this case, it's using the home page
+> PostLogoutRedirectUri = redirectUri,
+> Scope = OpenIdConnectScope.OpenIdProfile,
+> // ResponseType is set to request the code id_token, which contains basic information about the signed-in user
+> ResponseType = OpenIdConnectResponseType.CodeIdToken,
+> // ValidateIssuer set to false to allow personal and work accounts from any organization to sign in to your application
+> // To only allow users from a single organization, set ValidateIssuer to true and the 'tenant' setting in Web.config to the tenant name
+> // To allow users from only a list of specific organizations, set ValidateIssuer to true and use the ValidIssuers parameter
+> TokenValidationParameters = new TokenValidationParameters()
+> {
+> ValidateIssuer = false // Simplification (see note below)
+> },
+> // OpenIdConnectAuthenticationNotifications configures OWIN to send notification of failed authentications to the OnAuthenticationFailed method
+> Notifications = new OpenIdConnectAuthenticationNotifications
+> {
+> AuthenticationFailed = OnAuthenticationFailed
+> }
+> }
+> );
+> }
+> ```
+>
+> > | Parameter | Description |
+> > |||
+> > | `ClientId` | The application ID from the application registered in the Azure portal. |
+> > | `Authority` | The security token service (STS) endpoint for the user to authenticate. It's usually `https://login.microsoftonline.com/{tenant}/v2.0` for the public cloud. In that URL, *{tenant}* is the name of your tenant, your tenant ID, or `common` for a reference to the common endpoint. (The common endpoint is used for multitenant applications.) |
+> > | `RedirectUri` | The URL where users are sent after authentication against the Microsoft identity platform. |
+> > | `PostLogoutRedirectUri` | The URL where users are sent after signing off. |
+> > | `Scope` | The list of scopes being requested, separated by spaces. |
+> > | `ResponseType` | Requests that the authentication response contain an authorization code and an ID token. |
+> > | `TokenValidationParameters` | A list of parameters for token validation. In this case, `ValidateIssuer` is set to `false` to indicate that it can accept sign-ins from any personal, work, or school account type. |
+> > | `Notifications` | A list of delegates that can be run on `OpenIdConnect` messages. |
+>
+>
+> > [!NOTE]
+> > Setting `ValidateIssuer = false` is a simplification for this quickstart. In real applications, validate the issuer. See the samples to understand how to do that.
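+>
+> One common approach, shown here only as a hedged sketch rather than the sample's code, is to keep `ValidateIssuer` set to `true` and list the issuers you trust through `ValidIssuers`:
+>
+> ```csharp
+> using Microsoft.IdentityModel.Tokens;
+>
+> var tokenValidationParameters = new TokenValidationParameters
+> {
+>     ValidateIssuer = true,
+>     // Hypothetical tenant IDs; list every issuer your app should accept.
+>     ValidIssuers = new[]
+>     {
+>         "https://login.microsoftonline.com/{Tenant ID 1}/v2.0",
+>         "https://login.microsoftonline.com/{Tenant ID 2}/v2.0"
+>     }
+> };
+> ```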
+>
+> ### Authentication challenge
+>
+> You can force a user to sign in by requesting an authentication challenge in your controller:
+>
+> ```csharp
+> public void SignIn()
+> {
+> if (!Request.IsAuthenticated)
+> {
+> HttpContext.GetOwinContext().Authentication.Challenge(
+> new AuthenticationProperties{ RedirectUri = "/" },
+> OpenIdConnectAuthenticationDefaults.AuthenticationType);
+> }
+> }
+> ```
+>
+> > [!TIP]
+> > Requesting an authentication challenge by using this method is optional. You'd normally use it when you want a view to be accessible from both authenticated and unauthenticated users. Alternatively, you can protect controllers by using the method described in the next section.
+>
+> ### Attribute for protecting a controller or controller actions
+>
+> You can protect a controller or controller actions by using the `[Authorize]` attribute. This attribute restricts access to the controller or actions by allowing only authenticated users to access the actions in the controller. An authentication challenge will then happen automatically when an unauthenticated user tries to access one of the actions or controllers decorated by the `[Authorize]` attribute.
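+>
+> A minimal sketch of that pattern in ASP.NET MVC follows; the controller and action names are illustrative only:
+>
+> ```csharp
+> using System.Web.Mvc;
+>
+> [Authorize]  // unauthenticated requests to any action on this controller trigger the sign-in challenge
+> public class TodoController : Controller
+> {
+>     public ActionResult Index()
+>     {
+>         // Only authenticated users reach this action.
+>         return View();
+>     }
+> }
+> ```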
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> For a complete step-by-step guide on building applications and new features, including a full explanation of this quickstart, try out the ASP.NET tutorial.
+>
+> > [!div class="nextstepaction"]
+> > [Add sign-in to an ASP.NET web app](tutorial-v2-asp-webapp.md)
active-directory Web App Quickstart Portal Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-java.md
+
+ Title: "Quickstart: Add sign-in with Microsoft to a Java web app"
+description: In this quickstart, you'll learn how to add sign-in with Microsoft to a Java web application by using OpenID Connect.
+++++++ Last updated : 08/16/2022++++
+# Quickstart: Add sign-in with Microsoft to a Java web app
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Java web app with user sign-in](web-app-quickstart.md?pivots=devlang-java)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Add sign-in with Microsoft to a Java web app
+>
+> In this quickstart, you download and run a code sample that demonstrates how a Java web application can sign in users and call the Microsoft Graph API. Users from any Azure Active Directory (Azure AD) organization can sign in to the application.
+>
+> For an overview, see the [diagram of how the sample works](#how-the-sample-works).
+>
+> ## Prerequisites
+>
+> To run this sample, you need:
+>
+> - [Java Development Kit (JDK)](https://openjdk.java.net/) 8 or later.
+> - [Maven](https://maven.apache.org/).
+>
+>
+> #### Step 1: Configure your application in the Azure portal
+>
+> To use the code sample in this quickstart:
+>
+> 1. Add reply URLs `https://localhost:8443/msal4jsample/secure/aad` and `https://localhost:8443/msal4jsample/graph/me`.
+> 1. Create a client secret.
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the code sample
+>
+> Download the project and extract the .zip file into a folder near the root of your drive. For example, *C:\Azure-Samples*.
+>
+> To use HTTPS with localhost, provide the `server.ssl.key` properties. To generate a self-signed certificate, use the keytool utility (included in JRE).
+>
+> Here's an example:
+> ```
+> keytool -genkeypair -alias testCert -keyalg RSA -storetype PKCS12 -keystore keystore.p12 -storepass password
+>
+> server.ssl.key-store-type=PKCS12
+> server.ssl.key-store=classpath:keystore.p12
+> server.ssl.key-store-password=password
+> server.ssl.key-alias=testCert
+> ```
+> Put the generated keystore file in the *resources* folder.
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!div class="sxs-lookup"]
+> > > [!NOTE]
+> > > `Enter_the_Supported_Account_Info_Here`
+>
+> > [!div class="sxs-lookup"]
+>
+> #### Step 3: Run the code sample
+>
+> To run the project, take one of these steps:
+>
+> - Run it directly from your IDE by using the embedded Spring Boot server.
+> - Package it to a WAR file by using [Maven](https://maven.apache.org/plugins/maven-war-plugin/usage.html), and then deploy it to a J2EE container solution like [Apache Tomcat](http://tomcat.apache.org/).
+>
+> ##### Running the project from an IDE
+>
+> To run the web application from an IDE, select **Run**, and then go to the home page of the project. For this sample, the standard home page URL is https://localhost:8443.
+>
+> 1. On the front page, select the **Login** button to redirect users to Azure Active Directory and prompt them for credentials.
+>
+> 1. After users are authenticated, they're redirected to `https://localhost:8443/msal4jsample/secure/aad`. They're now signed in, and the page will show information about the user account. The sample UI has these buttons:
+> - **Sign Out**: Signs the current user out of the application and redirects that user to the home page.
+> - **Show User Info**: Acquires a token for Microsoft Graph and calls Microsoft Graph with a request that contains the token, which returns basic information about the signed-in user.
+>
+> ##### Running the project from Tomcat
+>
+> If you want to deploy the web sample to Tomcat, make a couple of changes to the source code.
+>
+> 1. Open *ms-identity-java-webapp/src/main/java/com.microsoft.azure.msalwebsample/MsalWebSampleApplication*.
+>
+> - Delete all source code and replace it with this code:
+>
+> ```Java
+> package com.microsoft.azure.msalwebsample;
+>
+> import org.springframework.boot.SpringApplication;
+> import org.springframework.boot.autoconfigure.SpringBootApplication;
+> import org.springframework.boot.builder.SpringApplicationBuilder;
+> import org.springframework.boot.web.servlet.support.SpringBootServletInitializer;
+>
+> @SpringBootApplication
+> public class MsalWebSampleApplication extends SpringBootServletInitializer {
+>
+> public static void main(String[] args) {
+> SpringApplication.run(MsalWebSampleApplication.class, args);
+> }
+>
+> @Override
+> protected SpringApplicationBuilder configure(SpringApplicationBuilder builder) {
+> return builder.sources(MsalWebSampleApplication.class);
+> }
+> }
+> ```
+>
+> 2. Tomcat's default HTTP port is 8080, but you need an HTTPS connection over port 8443. To configure this setting:
+> - Go to *tomcat/conf/server.xml*.
+> - Search for the `<connector>` tag, and replace the existing connector with this connector:
+>
+> ```xml
+> <Connector
+> protocol="org.apache.coyote.http11.Http11NioProtocol"
+> port="8443" maxThreads="200"
+> scheme="https" secure="true" SSLEnabled="true"
+> keystoreFile="C:/Path/To/Keystore/File/keystore.p12" keystorePass="KeystorePassword"
+> clientAuth="false" sslProtocol="TLS"/>
+> ```
+>
+> 3. Open a Command Prompt window. Go to the root folder of this sample (where the pom.xml file is located), and run `mvn package` to build the project.
+> - This command will generate a *msal-web-sample-0.1.0.war* file in your */target* directory.
+> - Rename this file to *msal4jsample.war*.
+> - Deploy the WAR file by using Tomcat or any other J2EE container solution.
+> - To deploy the msal4jsample.war file, copy it to the */webapps/* directory in your Tomcat installation, and then start the Tomcat server.
+>
+> 4. After the file is deployed, go to https://localhost:8443/msal4jsample by using a browser.
+>
+> > [!IMPORTANT]
+> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before using the application in a production environment. For more information on how to use a certificate, see [Certificate credentials for application authentication](./active-directory-certificate-credentials.md).
+>
+> ## More information
+>
+> ### How the sample works
+> ![Diagram that shows how the sample app generated by this quickstart works.](media/quickstart-v2-java-webapp/java-quickstart.svg)
+>
+> ### Get MSAL
+>
+> MSAL for Java (MSAL4J) is the Java library used to sign in users and request tokens that are used to access an API that's protected by the Microsoft identity platform.
+>
+> Add MSAL4J to your application by using Maven or Gradle to manage your dependencies. Make the following changes to the application's pom.xml (Maven) or build.gradle (Gradle) file.
+>
+> In pom.xml:
+>
+> ```xml
+> <dependency>
+> <groupId>com.microsoft.azure</groupId>
+> <artifactId>msal4j</artifactId>
+> <version>1.0.0</version>
+> </dependency>
+> ```
+>
+> In build.gradle:
+>
+> ```gradle
+> compile group: 'com.microsoft.azure', name: 'msal4j', version: '1.0.0'
+> ```
+>
+> ### Initialize MSAL
+>
+> Add a reference to MSAL for Java by adding the following code at the start of the file where you'll be using MSAL4J:
+>
+> ```Java
+> import com.microsoft.aad.msal4j.*;
+> ```
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> For a more in-depth discussion of building web apps that sign in users on the Microsoft identity platform, see the multipart scenario series:
+>
+> > [!div class="nextstepaction"]
+> > [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md?tabs=java)
active-directory Web App Quickstart Portal Node Js Passport https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js-passport.md
+
+ Title: "Quickstart: Add user sign-in to a Node.js web app"
+description: In this quickstart, you learn how to implement authentication in a Node.js web application using OpenID Connect.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an application developer, I want to know how to set up OpenID Connect authentication in a web application built using Node.js with Express.
++
+# Quickstart: Add sign in using OpenID Connect to a Node.js web app
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Add user sign-in to a Node.js web app built with the Express framework](web-app-quickstart.md?pivots=devlang-nodejs-passport)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Add sign in using OpenID Connect to a Node.js web app
+>
+> In this quickstart, you download and run a code sample that demonstrates how to set up OpenID Connect authentication in a web application built using Node.js with Express. The sample is designed to run on any platform.
+>
+> ## Prerequisites
+>
+> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> - [Node.js](https://nodejs.org/en/download/).
+>
+> ## Register your application
+>
+> 1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+> 1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="./media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you want to register the application.
+> 1. Search for and select **Azure Active Directory**.
+> 1. Under **Manage**, select **App registrations** > **New registration**.
+> 1. Enter a **Name** for your application, for example `MyWebApp`. Users of your app might see this name, and you can change it later.
+> 1. In the **Supported account types** section, select **Accounts in any organizational directory and personal Microsoft accounts (e.g. Skype, Xbox, Outlook.com)**.
+>
+> If there's more than one redirect URI, add them from the **Authentication** tab later, after the app has been successfully created.
+>
+> 1. Select **Register** to create the app.
+> 1. On the app's **Overview** page, find the **Application (client) ID** value and record it for later. You'll need this value to configure the application later in this project.
+> 1. Under **Manage**, select **Authentication**.
+> 1. Select **Add a platform** > **Web**.
+> 1. In the **Redirect URIs** section, enter `http://localhost:3000/auth/openid/return`.
+> 1. For **Front-channel logout URL**, enter `https://localhost:3000`.
+> 1. In the **Implicit grant and hybrid flows** section, select **ID tokens**, because this sample requires the [implicit grant flow](./v2-oauth2-implicit-grant-flow.md) to be enabled to sign in the user.
+> 1. Select **Configure**.
+> 1. Under **Manage**, select **Certificates & secrets** > **Client secrets** > **New client secret**.
+> 1. Enter a key description (for instance, *app secret*).
+> 1. Select a key duration of either **In 1 year, In 2 years,** or **Never Expires**.
+> 1. Select **Add**. The key value will be displayed. Copy the key value and save it in a safe location for later use.
+>
+>
+> ## Download the sample application and modules
+>
+> Next, clone the sample repo and install the NPM modules.
+>
+> From your shell or command line:
+>
+> `$ git clone git@github.com:AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
+>
+> or
+>
+> `$ git clone https://github.com/AzureADQuickStarts/AppModelv2-WebApp-OpenIDConnect-nodejs.git`
+>
+> From the project root directory, run the command:
+>
+> `$ npm install`
+>
+> ## Configure the application
+>
+> Provide the parameters in `exports.creds` in config.js as instructed.
+>
+> * Update `<tenant_name>` in `exports.identityMetadata` with the Azure AD tenant name of the format \*.onmicrosoft.com.
+> * Update `exports.clientID` with the Application ID noted from app registration.
+> * Update `exports.clientSecret` with the Application secret noted from app registration.
+> * Update `exports.redirectUrl` with the Redirect URI noted from app registration.
+>
+> **Optional configuration for production apps:**
+>
+> * Update `exports.destroySessionUrl` in config.js, if you want to use a different `post_logout_redirect_uri`.
+>
+> * Set `exports.useMongoDBSessionStore` in config.js to true, if you want to use [mongoDB](https://www.mongodb.com) or other [compatible session stores](https://github.com/expressjs/session#compatible-session-stores).
+> The default session store in this sample is `express-session`, which isn't suitable for production.
+>
+> * Update `exports.databaseUri`, if you want to use mongoDB session store and a different database URI.
+>
+> * Update `exports.mongoDBSessionMaxAge`. Here you can specify how long you want to keep a session in mongoDB. The unit is seconds.
+>
+> ## Build and run the application
+>
+> Start the mongoDB service. If you're using the mongoDB session store in this app, you have to [install mongoDB](http://www.mongodb.org/) and start the service first. If you're using the default session store, you can skip this step.
+>
+> Run the app using the following command from your command line.
+>
+> ```
+> $ node app.js
+> ```
+>
+> **Is the server output hard to understand?** This sample uses `bunyan` for logging. The console output won't make much sense unless you also install bunyan and pipe the server output through the bunyan binary:
+>
+> ```
+> $ npm install -g bunyan
+>
+> $ node app.js | bunyan
+> ```
+>
+> ### You're done!
+>
+> The server is now running on `http://localhost:3000`.
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+> Learn more about the web app scenario that the Microsoft identity platform supports:
+> > [!div class="nextstepaction"]
+> > [Web app that signs in users scenario](scenario-web-app-sign-user-overview.md)
active-directory Web App Quickstart Portal Node Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-node-js.md
+
+ Title: "Quickstart: Add authentication to a Node.js web app with MSAL Node"
+description: In this quickstart, you learn how to implement authentication with a Node.js web app and the Microsoft Authentication Library (MSAL) for Node.js.
+++++++ Last updated : 08/16/2022+++
+#Customer intent: As an application developer, I want to know how to set up authentication in a web application built using Node.js and MSAL Node.
+
+# Quickstart: Sign in users and get an access token in a Node.js web app using the authorization code flow
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Node.js web app that signs in users with MSAL Node](web-app-quickstart.md?pivots=devlang-nodejs-msal)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Sign in users and get an access token in a Node.js web app using the authorization code flow
+>
+> In this quickstart, you download and run a code sample that demonstrates how a Node.js web app can sign in users by using the authorization code flow. The code sample also demonstrates how to get an access token to call Microsoft Graph API.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> This quickstart uses the Microsoft Authentication Library for Node.js (MSAL Node) with the authorization code flow.
+>
+> ## Prerequisites
+>
+> * An Azure subscription. [Create an Azure subscription for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> * [Node.js](https://nodejs.org/en/download/)
+> * [Visual Studio Code](https://code.visualstudio.com/download) or another code editor
+>
+> #### Step 1: Configure the application in Azure portal
+> For the code sample for this quickstart to work, you need to create a client secret and add the following reply URL: `http://localhost:3000/redirect`.
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](media/quickstart-v2-windows-desktop/green-check.png) Your application is configured with these attributes.
+>
+> #### Step 2: Download the project
+>
+> Run the project with a web server by using Node.js.
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> #### Step 3: Your app is configured and ready to run
+>
+> Run the project by using Node.js.
+>
+> 1. To start the server, run the following commands from within the project directory:
+>
+> ```console
+> npm install
+> npm start
+> ```
+>
+> 1. Go to `http://localhost:3000/`.
+>
+> 1. Select **Sign In** to start the sign-in process.
+>
+> The first time you sign in, you're prompted to provide your consent to allow the application to access your profile and sign you in. After you're signed in successfully, you will see a log message in the command line.
+>
+> ## More information
+>
+> ### How the sample works
+>
+> The sample hosts a web server on localhost, port 3000. When a web browser accesses this site, the sample immediately redirects the user to a Microsoft authentication page. For this reason, the sample doesn't contain any HTML or display elements. When authentication succeeds, the sample displays the message "OK".
+>
+> ### MSAL Node
+>
+> The MSAL Node library signs in users and requests the tokens that are used to access an API that's protected by the Microsoft identity platform. You can download the latest version by using the Node.js Package Manager (npm):
+>
+> ```console
+> npm install @azure/msal-node
+> ```
+>
+> ## Next steps
+>
+> > [!div class="nextstepaction"]
+> > [Adding Auth to an existing web app - GitHub code sample >](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-node-samples/auth-code)
active-directory Web App Quickstart Portal Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart-portal-python.md
+
+ Title: "Quickstart: Add sign-in with Microsoft to a Python web app"
+description: In this quickstart, learn how a Python web app can sign in users, get an access token from the Microsoft identity platform, and call the Microsoft Graph API.
+++++++ Last updated : 08/16/2022++++
+# Quickstart: Add sign-in with Microsoft to a Python web app
+
+> [!div renderon="docs"]
+> Welcome! This probably isn't the page you were expecting. While we work on a fix, this link should take you to the right article:
+>
+> > [Quickstart: Python web app with user sign-in](web-app-quickstart.md?pivots=devlang-python)
+>
+> We apologize for the inconvenience and appreciate your patience while we work to get this resolved.
+
+> [!div renderon="portal" id="display-on-portal" class="sxs-lookup"]
+> # Quickstart: Add sign-in with Microsoft to a Python web app
+>
+> In this quickstart, you download and run a code sample that demonstrates how a Python web application can sign in users and get an access token to call the Microsoft Graph API. Users with a personal Microsoft account or an account in any Azure Active Directory (Azure AD) organization can sign in to the application.
+>
+> See [How the sample works](#how-the-sample-works) for an illustration.
+>
+> ## Prerequisites
+>
+> - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+> - [Python 2.7+](https://www.python.org/downloads/release/python-2713) or [Python 3+](https://www.python.org/downloads/release/python-364/)
+> - [Flask](http://flask.pocoo.org/), [Flask-Session](https://pypi.org/project/Flask-Session/), [requests](https://github.com/psf/requests/graphs/contributors)
+> - [MSAL Python](https://github.com/AzureAD/microsoft-authentication-library-for-python)
+>
+> #### Step 1: Configure your application in Azure portal
+>
+> For the code sample in this quickstart to work:
+>
+> 1. Add a reply URL as `http://localhost:5000/getAToken`.
+> 1. Create a Client Secret.
+> 1. Add Microsoft Graph API's User.ReadBasic.All delegated permission.
+>
+> <button id="makechanges" class="nextstepaction configure-app-button"> Make these changes for me </button>
+>
+> > [!div id="appconfigured" class="alert alert-info"]
+> > ![Already configured](./media/quickstart-v2-aspnet-webapp/green-check.png) Your application is configured with this attribute
+>
+> #### Step 2: Download your project
+>
+> Download the project and extract the .zip file to a local folder close to the root folder, for example, **C:\Azure-Samples**.
+>
+> > [!div class="nextstepaction"]
+> > <button id="downloadsample" class="download-sample-button">Download the code sample</button>
+>
+> > [!NOTE]
+> > `Enter_the_Supported_Account_Info_Here`
+>
+> #### Step 3: Run the code sample
+>
+> 1. Install the MSAL Python library, the Flask framework, Flask-Session for server-side session management, and requests by using pip:
+>
+> ```shell
+> pip install -r requirements.txt
+> ```
+>
+> 2. Run `app.py` from shell or command line:
+>
+> ```shell
+> python app.py
+> ```
+>
+> > [!IMPORTANT]
+> > This quickstart application uses a client secret to identify itself as a confidential client. Because the client secret is added as plain text to your project files, for security reasons we recommend that you use a certificate instead of a client secret before using the application in a production environment. For more information on how to use a certificate, see [Certificate credentials for application authentication](./active-directory-certificate-credentials.md).
+>
+> ## More information
+>
+> ### How the sample works
+> ![Shows how the sample app generated by this quickstart works](media/quickstart-v2-python-webapp/python-quickstart.svg)
+>
+> ### Getting MSAL
+> MSAL is the library used to sign in users and request tokens used to access an API protected by the Microsoft identity platform.
+> You can add MSAL Python to your application by using pip.
+>
+> ```Shell
+> pip install msal
+> ```
+>
+> ### MSAL initialization
+> You can add the reference to MSAL Python by adding the following code to the top of the file where you will be using MSAL:
+>
+> ```Python
+> import msal
+> ```
+>
+> [!INCLUDE [Help and support](../../../includes/active-directory-develop-help-support-include.md)]
+>
+> ## Next steps
+>
+> Learn more about web apps that sign in users in our multi-part scenario series.
+>
+> > [!div class="nextstepaction"]
+> > [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)
active-directory Directory Delegated Administration Primer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delegated-administration-primer.md
Title: Delegated administration in Azure Active Directory
description: The relationship between older delegated admin permissions and new granular delegated admin permissions in Azure Active Directory keywords: -+ Last updated 06/23/2022
active-directory Directory Delete Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-delete-howto.md
description: Explains how to prepare an Azure AD tenant for deletion, including
documentationcenter: '' -+
active-directory Directory Overview User Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-overview-user-model.md
Title: Users, groups, licensing, and roles in Azure Active Directory
description: The relationship between users and licenses assigned, administrator roles, group membership in Azure Active Directory keywords: -+ Last updated 06/23/2022
active-directory Directory Self Service Signup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-self-service-signup.md
description: Use self-service sign-up in an Azure Active Directory (Azure AD) or
documentationcenter: '' -+ editor: ''
active-directory Directory Service Limits Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/directory-service-limits-restrictions.md
description: Usage constraints and other service limits for the Azure Active Dir
documentationcenter: '' -+ editor: ''
active-directory Domains Admin Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-admin-takeover.md
description: How to take over a DNS domain name in an unmanaged Azure AD organiz
documentationcenter: '' -+
active-directory Domains Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-manage.md
description: Management concepts and how-tos for managing a domain name in Azure
documentationcenter: '' -+
active-directory Domains Verify Custom Subdomain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/domains-verify-custom-subdomain.md
description: Change default subdomain authentication settings inherited from roo
documentationcenter: '' -+
active-directory Groups Assign Sensitivity Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-assign-sensitivity-labels.md
description: Learn how to assign sensitivity labels to groups. See troubleshooti
documentationcenter: '' -+
active-directory Groups Bulk Download Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download-members.md
description: Add users in bulk in the Azure admin center.
-+ Last updated 06/23/2022
active-directory Groups Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-download.md
description: Download group properties in bulk in the Azure admin center in Azur
-+ Last updated 03/24/2022
active-directory Groups Bulk Import Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-import-members.md
description: Add group members in bulk in the Azure Active Directory admin cente
-+ Last updated 06/24/2022
active-directory Groups Bulk Remove Members https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-bulk-remove-members.md
description: Remove group members in bulk operations in the Azure admin center.
-+ Last updated 09/22/2021
active-directory Groups Change Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-change-type.md
description: Learn how to convert existing groups from static to dynamic members
documentationcenter: '' -+
active-directory Groups Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-create-rule.md
description: How to create or update a group membership rule in the Azure portal
documentationcenter: '' -+
active-directory Groups Dynamic Membership https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-membership.md
description: How to create membership rules to automatically populate groups, an
documentationcenter: '' -+ Previously updated : 06/23/2022 Last updated : 08/18/2022
The following device attributes can be used.
 accountEnabled | true false | device.accountEnabled -eq true
 deviceCategory | a valid device category name | device.deviceCategory -eq "BYOD"
 deviceId | a valid Azure AD device ID | device.deviceId -eq "d4fe7726-5966-431c-b3b8-cddc8fdb717d"
- deviceManagementAppId | a valid MDM application ID in Azure AD | device.deviceManagementAppId -eq "0000000a-0000-0000-c000-000000000000" for Intune MDM app ID
+ deviceManagementAppId | a valid MDM application ID in Azure AD | device.deviceManagementAppId -eq "0000000a-0000-0000-c000-000000000000" for Microsoft Intune managed or "54b943f8-d761-4f8d-951e-9cea1846db5a" for System Center Configuration Manager Co-managed devices
 deviceManufacturer | any string value | device.deviceManufacturer -eq "Samsung"
 deviceModel | any string value | device.deviceModel -eq "iPad Air"
 displayName | any string value | device.displayName -eq "Rob iPhone"
active-directory Groups Dynamic Rule More Efficient https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-more-efficient.md
description: How to optimize your membership rules to automatically populate gro
documentationcenter: '' -+
active-directory Groups Dynamic Rule Validation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-rule-validation.md
description: How to test members against a membership rule for a dynamic group i
documentationcenter: '' -+
active-directory Groups Dynamic Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-dynamic-tutorial.md
description: In this tutorial, you use groups with user membership rules to add
documentationcenter: '' -+
active-directory Groups Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-lifecycle.md
description: How to set up expiration for Microsoft 365 groups in Azure Active D
documentationcenter: '' -+ editor: ''
active-directory Groups Members Owners Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-members-owners-search.md
description: Search and filter groups members and owners in the Azure portal.
documentationcenter: '' -+
active-directory Groups Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-naming-policy.md
description: How to set up naming policy for Microsoft 365 groups in Azure Activ
documentationcenter: '' -+
active-directory Groups Quickstart Expiration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-expiration.md
description: Expiration for Microsoft 365 groups - Azure Active Directory
documentationcenter: '' -+
active-directory Groups Quickstart Naming Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-quickstart-naming-policy.md
description: Explains how to add new users or delete existing users in Azure Act
documentationcenter: '' -+
active-directory Groups Restore Deleted https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-restore-deleted.md
Title: Restore a deleted Microsoft 365 group - Azure AD | Microsoft Docs
description: How to restore a deleted group, view restorable groups, and permanently delete a group in Azure Active Directory -+
active-directory Groups Saasapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-saasapps.md
description: How to use groups in Azure Active Directory to assign access to Saa
documentationcenter: '' -+
active-directory Groups Self Service Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-self-service-management.md
description: Create and manage security groups or Microsoft 365 groups in Azure
documentationcenter: '' -+ editor: ''
active-directory Groups Settings Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-cmdlets.md
description: How to manage the settings for groups using Azure Active Directory cmd
documentationcenter: '' -+
active-directory Groups Settings V2 Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-settings-v2-cmdlets.md
description: This page provides PowerShell examples to help you manage your grou
keywords: Azure AD, Azure Active Directory, PowerShell, Groups, Group management -+
active-directory Groups Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-troubleshooting.md
Title: Fix problems with dynamic group memberships - Azure AD | Microsoft Docs
description: Troubleshooting tips for dynamic group membership in Azure Active Directory -+
active-directory Groups Write Back Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/groups-write-back-portal.md
Title: Group writeback portal operations (preview) in Azure Active Directory
description: The access points for group writeback to on-premises Active Directory in the Azure Active Directory admin center. keywords: -+ Previously updated : 07/21/2022 Last updated : 08/18/2022
active-directory Licensing Directory Independence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-directory-independence.md
description: Understanding the data independence of your Azure Active Directory
documentationcenter: '' -+
active-directory Licensing Group Advanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-group-advanced.md
keywords: Azure AD licensing documentationcenter: '' -+
active-directory Licensing Groups Assign https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-assign.md
keywords: Azure AD licensing documentationcenter: '' -+
active-directory Licensing Groups Change Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-change-licenses.md
keywords: Azure AD licensing documentationcenter: '' -+ editor: ''
active-directory Licensing Groups Migrate Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-migrate-users.md
keywords: Azure AD licensing documentationcenter: '' -+ editor: ''
active-directory Licensing Groups Resolve Problems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-groups-resolve-problems.md
keywords: Azure AD licensing documentationcenter: '' -+
active-directory Licensing Ps Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-ps-examples.md
keywords: Azure AD licensing documentationcenter: '' -+
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
keywords: Azure Active Directory licensing service plans documentationcenter: '' -+ editor: ''
active-directory Linkedin Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-integration.md
Title: Admin consent for LinkedIn account connections - Azure AD | Microsoft Doc
description: Explains how to enable or disable LinkedIn integration account connections in Microsoft apps in Azure Active Directory -+
active-directory Linkedin User Consent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/linkedin-user-consent.md
Title: LinkedIn data sharing and consent - Azure Active Directory | Microsoft Do
description: Explains how LinkedIn integration shares data via Microsoft apps in Azure Active Directory -+
active-directory Signin Account Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-account-support.md
Title: Does my Azure AD sign-in page accept Microsoft accounts | Microsoft Docs
description: How on-screen messaging reflects username lookup during sign-in -+
active-directory Signin Realm Discovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/signin-realm-discovery.md
Title: Username lookup during sign-in - Azure Active Directory | Microsoft Docs
description: How on-screen messaging reflects username lookup during sign-in in Azure Active Directory -+
active-directory Users Bulk Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-add.md
description: Add users in bulk in the Azure AD admin center in Azure Active Dire
-+ Last updated 06/24/2022
active-directory Users Bulk Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-delete.md
description: Delete users in bulk in the Azure admin center in Azure Active Dire
-+ Last updated 06/24/2022
active-directory Users Bulk Download https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-download.md
description: Download user records in bulk in the Azure admin center in Azure Ac
-+ Last updated 06/24/2022
active-directory Users Bulk Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-bulk-restore.md
description: Restore deleted users in bulk in the Azure AD admin center in Azure
-+ Last updated 06/24/2022
active-directory Users Close Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-close-account.md
Title: Close a work or school account in an unmanaged Azure AD organization
description: How to close your work or school account in an unmanaged Azure Active Directory. -+
active-directory Users Restrict Guest Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-restrict-guest-permissions.md
description: Restrict guest user access permissions using the Azure portal, Powe
-+ Last updated 06/24/2022
active-directory Users Revoke Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-revoke-access.md
-+ Last updated 06/24/2022
active-directory Users Search Enhanced https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-search-enhanced.md
description: Describes how Azure Active Directory enables user search, filtering
documentationcenter: '' -+ editor: ''
active-directory Users Sharing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/users-sharing-accounts.md
description: Describes how Azure Active Directory enables organizations to secur
documentationcenter: '' -+ editor: ''
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
With outbound settings, you select which of your users and groups will be able t
- When you're done selecting the users and groups you want to add, choose **Select**. > [!NOTE]
- > When targeting your users and groups, you won't be able to select users who have configured [SMS-based authentication](/azure/active-directory/authentication/howto-authentication-sms-signin). This is because users who have a "federated credential" on their user object are blocked to prevent external users from being added to outbound access settings. As a workaround, you can use the [Microsoft Graph API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-1.0) to add the user's object ID directly or target a group the user belongs to.
+ > When targeting your users and groups, you won't be able to select users who have configured [SMS-based authentication](../authentication/howto-authentication-sms-signin.md). This is because users who have a "federated credential" on their user object are blocked to prevent external users from being added to outbound access settings. As a workaround, you can use the [Microsoft Graph API](/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-1.0) to add the user's object ID directly or target a group the user belongs to.
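The workaround in the note above can be scripted. The following sketch assumes a partner configuration already exists for the tenant and that the standard cross-tenant access policy schema applies; the tenant ID, user object ID, and permission scope are placeholders, so verify the request body against the Microsoft Graph reference before use.

```powershell
# Hedged sketch: add a single user's object ID to the outbound B2B collaboration
# targets of an existing partner configuration. Placeholders in angle brackets.
Connect-MgGraph -Scopes "Policy.ReadWrite.CrossTenantAccess"

$body = @{
    b2bCollaborationOutbound = @{
        usersAndGroups = @{
            accessType = "allowed"
            targets    = @(
                @{ target = "<user-object-id>"; targetType = "user" }
            )
        }
    }
}

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/crossTenantAccessPolicy/partners/<partner-tenant-id>" `
    -Body ($body | ConvertTo-Json -Depth 10)
```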
1. Select the **External applications** tab.
When you remove an organization from your Organizational settings, the default c
## Next steps - See [Configure external collaboration settings](external-collaboration-settings-configure.md) for B2B collaboration with non-Azure AD identities, social identities, and non-IT managed external accounts.-- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)
+- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)
active-directory External Collaboration Settings Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/external-collaboration-settings-configure.md
Previously updated : 05/05/2022 Last updated : 08/22/2022
For B2B collaboration with other Azure AD organizations, you should also review
- **Guest users have limited access to properties and memberships of directory objects**: (Default) This setting blocks guests from certain directory tasks, like enumerating users, groups, or other directory resources. Guests can see membership of all non-hidden groups. [Learn more about default guest permissions](../fundamentals/users-default-permissions.md#member-and-guest-users).
- - **Guest user access is restricted to properties and memberships of their own directory objects (most restrictive)**: With this setting, guests can access only their own profiles. Guests are not allowed to see other users' profiles, groups, or group memberships.
+ - **Guest user access is restricted to properties and memberships of their own directory objects (most restrictive)**: With this setting, guests can access only their own profiles. Guests aren't allowed to see other users' profiles, groups, or group memberships.
1. Under **Guest invite settings**, choose the appropriate settings: ![Screenshot showing Guest invite settings.](./media/external-collaboration-settings-configure/guest-invite-settings.png)
- - **Anyone in the organization can invite guest users including guests and non-admins (most inclusive)**: To allow guests in the organization to invite other guests including those who are not members of an organization, select this radio button.
+ - **Anyone in the organization can invite guest users including guests and non-admins (most inclusive)**: To allow guests in the organization to invite other guests including those who aren't members of an organization, select this radio button.
- **Member users and users assigned to specific admin roles can invite guest users including guests with member permissions**: To allow member users and users who have specific administrator roles to invite guests, select this radio button.
 - **Only users assigned to specific admin roles can invite guest users**: To allow only those users with administrator roles to invite guests, select this radio button. The administrator roles include [Global Administrator](../roles/permissions-reference.md#global-administrator), [User Administrator](../roles/permissions-reference.md#user-administrator), and [Guest Inviter](../roles/permissions-reference.md#guest-inviter).
 - **No one in the organization can invite guest users including admins (most restrictive)**: To deny everyone in the organization from inviting guests, select this radio button.
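These radio buttons map to the `allowInvitesFrom` setting on the tenant's Azure AD authorization policy, so the same choice can be made programmatically. The sketch below is an assumption-based example using Microsoft Graph PowerShell; the value shown corresponds to the "admins and guest inviters" option.

```powershell
# Hedged sketch: set who can invite guests via the tenant authorization policy.
# Possible allowInvitesFrom values include "none", "adminsAndGuestInviters",
# "adminsGuestInvitersAndAllMembers", and "everyone" (verify against the Graph reference).
Connect-MgGraph -Scopes "Policy.ReadWrite.Authorization"

Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/v1.0/policies/authorizationPolicy" `
    -Body (@{ allowInvitesFrom = "adminsAndGuestInviters" } | ConvertTo-Json)
```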
For B2B collaboration with other Azure AD organizations, you should also review
![Screenshot showing Self-service sign up via user flows setting.](./media/external-collaboration-settings-configure/self-service-sign-up-setting.png)
+1. Under **External user leave settings**, you can control whether external users can remove themselves from your organization. If you set this option to **No**, external users will need to contact your admin or privacy contact to be removed.
+
+ - **Yes**: Users can leave the organization themselves without approval from your admin or privacy contact.
+ - **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin or privacy contact to request removal from your organization.
+
+ > [!IMPORTANT]
+ > You can configure **External user leave settings** only if you have [added your privacy information](../fundamentals/active-directory-properties-area.md) to your Azure AD tenant. Otherwise, this setting will be unavailable.
+
+ ![Screenshot showing External user leave settings in the portal.](media/external-collaboration-settings-configure/external-user-leave-settings.png)
+ 1. Under **Collaboration restrictions**, you can choose whether to allow or deny invitations to the domains you specify and enter specific domain names in the text boxes. For multiple domains, enter each domain on a new line. For more information, see [Allow or block invitations to B2B users from specific organizations](allow-deny-list.md). ![Screenshot showing Collaboration restrictions settings.](./media/external-collaboration-settings-configure/collaboration-restrictions.png)
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Previously updated : 06/30/2022 Last updated : 08/22/2022
adobe-target: true
# Leave an organization as an external user
-An Azure Active Directory (Azure AD) B2B collaboration or B2B direct connect user can decide to leave an organization at any time if they no longer need to use apps from that organization or maintain any association.
+As an Azure Active Directory (Azure AD) B2B collaboration or B2B direct connect user, you can decide to leave an organization at any time if you no longer need to use apps from that organization or maintain any association.
-B2B collaboration and B2B direct connect users can usually leave an organization on their own without having to contact an administrator. This option won't be available if it's not allowed by the organization, or if the user's account has been disabled. The user will need to contact the tenant admin, who can delete the account.
+You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization.
[!INCLUDE [GDPR-related guidance](../../../includes/gdpr-dsr-and-stp-note.md)]
-## Leave an organization
+## What organizations do I belong to?
-In your My Account portal, on the Organizations page, you can view and manage the organizations you have access to:
--- **Home organization**: Your home organization is listed first. This is the organization that owns your work or school account. Because your account is managed by your administrator, you're not allowed to leave your home organization. (If you don't have an assigned home organization, you'll just see a single heading that says Organizations with the list of your associated organizations.)
-
-- **Other organizations you collaborate with**: You'll also see the other organizations that you've signed in to previously using your work or school account. You can leave any of these organizations at any time.-
-To leave an organization, follow these steps.
-
-1. Go to your **My Account** page by doing one of the following:
+1. To view the organizations you belong to, first open your **My Account** page by doing one of the following:
   - If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
   - If you're using a personal account, go to https://myapps.microsoft.com and sign in, and then select your account icon in the upper right and select **View account**. Or, use a My Account URL that includes your tenant information to go directly to your My Account page (examples are shown in the following note).

   > [!NOTE]
   > If you use the email one-time passcode feature when signing in, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: `https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com` or `https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789`.

1. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
-1. Under **Other organizations you collaborate with**, find the organization that you want to leave, and select **Leave**.
+1. The **Organizations** page appears, where you can view and manage the organizations you belong to.
+
+ ![Screenshot showing the list of organizations you belong to.](media/leave-the-organization/organization-list.png)
+
+ - **Home organization**: Your home organization is listed first. This is the organization that owns your work or school account. Because your account is managed by your administrator, you're not allowed to leave your home organization (you'll see there's no option to **Leave**). If you don't have an assigned home organization, you'll just see a single heading that says **Organizations** with the list of your associated organizations.
+
+ - **Other organizations you collaborate with**: You'll also see the other organizations that you've signed in to previously using your work or school account. You can decide to leave any of these organizations at any time.
+
+## How to leave an organization
+
+If your organization allows users to remove themselves from external organizations, you can follow these steps to leave an organization.
+
+1. Open your **Organizations** page. (Follow the steps in [What organizations do I belong to](#what-organizations-do-i-belong-to), above.)
+
+1. Under **Other organizations you collaborate with** (or **Organizations** if you don't have a home organization), find the organization that you want to leave, and then select **Leave**.
![Screenshot showing Leave organization option in the user interface.](media/leave-the-organization/leave-org.png)+ 1. When asked to confirm, select **Leave**.
+1. If you select **Leave** for an organization but you see the following message, it means you'll need to contact the organization's admin or privacy contact and ask them to remove you from their organization.
+
+ ![Screenshot showing the message when you need permission to leave an organization.](media/leave-the-organization/need-permission-leave.png)
+
+## Why can't I leave an organization?
+
+In the **Home organization** section, there's no option to **Leave** your organization. Only an administrator can remove your account from your home organization.
-## Account removal
+For the external organizations listed under **Other organizations you collaborate with**, you might not be able to leave on your own, for example when:
-When a B2B collaboration user leaves an organization, the user's account is "soft deleted" in the directory. By default, the user object moves to the **Deleted users** area in Azure AD, but permanent deletion doesn't start for 30 days. This soft deletion enables the administrator to restore the user account, including groups and permissions, if the user makes a request to restore the account before it's permanently deleted.
+
+- the organization you want to leave doesn't allow users to leave by themselves
+- your account has been disabled
+
+In these cases, you can select **Leave**, but then you'll see a message saying you need to contact the admin or privacy contact for that organization to ask them to remove you.
+
+## More information for administrators
+
+Administrators can use the **External user leave settings** to control whether external users can remove themselves from their organization. If you disallow the ability for external users to remove themselves from your organization, external users will need to contact your admin or privacy contact to be removed.
+
+> [!IMPORTANT]
+> You can configure **External user leave settings** only if you have [added your privacy information](../fundamentals/active-directory-properties-area.md) to your Azure AD tenant. Otherwise, this setting will be unavailable. We recommend adding your privacy information to allow external users to review your policies and email your privacy contact when necessary.
+
+1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account and open the Azure Active Directory service.
+
+1. Select **External Identities** > **External collaboration settings**.
+
+1. Under **External user leave** settings, choose whether to allow external users to leave your organization themselves:
+
+ - **Yes**: Users can leave the organization themselves without approval from your admin or privacy contact.
+ - **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin or privacy contact to request removal from your organization.
+
+ ![Screenshot showing External user leave settings in the portal.](media/leave-the-organization/external-user-leave-settings.png)
+
+### Account removal
+
+When a B2B collaboration user leaves an organization, the user's account is "soft deleted" in the directory. By default, the user object moves to the **Deleted users** area in Azure AD, but permanent deletion doesn't start for 30 days. This soft deletion enables the administrator to restore the user account, including groups and permissions, if the user makes a request to restore the account before it's permanently deleted.
If desired, a tenant administrator can permanently delete the account at any time during the soft-delete period with the following steps. This action is irrevocable.

1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory**.
-2. Under **Manage**, select **Users**.
-3. Select **Deleted users**.
-4. Select the check box next to a deleted user, and then select **Delete permanently**.
+
+1. Under **Manage**, select **Users**.
+
+1. Select **Deleted users**.
+
+1. Select the check box next to a deleted user, and then select **Delete permanently**.
Once permanent deletion begins, whether it's initiated by the admin or the end of the soft deletion period, it can take up to an additional 30 days for data removal ([learn more](/compliance/regulatory/gdpr-dsr-azure#step-5-delete)).
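For admins who prefer to script the portal steps above, here is a minimal sketch using Microsoft Graph PowerShell. The object ID and permission scope are assumptions; permanent deletion is irreversible, so test carefully.

```powershell
# Hedged sketch: list soft-deleted users, then permanently delete one by object ID.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# List users still in the 30-day soft-delete window
Invoke-MgGraphRequest -Method GET `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.user"

# Permanently delete a specific soft-deleted user (irrevocable)
Invoke-MgGraphRequest -Method DELETE `
    -Uri "https://graph.microsoft.com/v1.0/directory/deletedItems/<user-object-id>"
```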
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
Some organizations use a list of known 'bad actor' domains provided by their
You can control both inbound and outbound access using Cross Tenant Access Settings. In addition, you can trust MFA, Compliant device, and hybrid Azure Active Directory joined device (HAADJ) claims from all or a subset of external Azure AD tenants. When you configure an organization specific policy, it applies to the entire Azure AD tenant and will cover all users from that tenant regardless of the user's domain suffix.
-You can enable collaboration across Microsoft clouds such as Microsoft Azure China 21Vianet or Microsoft Azure Government with additional configuration. Determine if any of your collaboration partners reside in a different Microsoft cloud. If so, you should [enable collaboration with these partners using Cross Tenant Access Settings](/azure/active-directory/external-identities/cross-cloud-settings).
+You can enable collaboration across Microsoft clouds such as Microsoft Azure China 21Vianet or Microsoft Azure Government with additional configuration. Determine if any of your collaboration partners reside in a different Microsoft cloud. If so, you should [enable collaboration with these partners using Cross Tenant Access Settings](../external-identities/cross-cloud-settings.md).
If you wish to allow inbound access to only specific tenants (allowlist), you can set the default policy to block access and then create organization policies to granularly allow access on a per user, group, and application basis.
See the following articles on securing external access to resources. We recommen
8. [Secure access with Sensitivity labels](8-secure-access-sensitivity-labels.md)
-9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
+9. [Secure access to Microsoft Teams, OneDrive, and SharePoint](9-secure-access-teams-sharepoint.md)
active-directory Active Directory Access Create New Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-access-create-new-tenant.md
Title: Quickstart - Access & create new tenant - Azure AD
description: Instructions about how to find Azure Active Directory and how to create a new tenant for your organization. -+ Previously updated : 12/22/2021 Last updated : 08/17/2022
If you're not going to continue to use this application, you can delete the tena
- Ensure that you're signed in to the directory that you want to delete through the **Directory + subscription** filter in the Azure portal. Switch to the target directory if needed. - Select **Azure Active Directory**, and then on the **Contoso - Overview** page, select **Delete directory**.
- The tenant and its associated information is deleted.
+ The tenant and its associated information are deleted.
![Overview page, with highlighted Delete directory button](media/active-directory-access-create-new-tenant/azure-ad-delete-new-tenant.png) ## Next steps -- Change or add additional domain names, see [How to add a custom domain name to Azure Active Directory](add-custom-domain.md)
+- Change or add other domain names, see [How to add a custom domain name to Azure Active Directory](add-custom-domain.md)
- Add users, see [Add or delete a new user](add-users-azure-active-directory.md)
active-directory Active Directory Accessmanagement Managing Group Owners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-accessmanagement-managing-group-owners.md
Title: Add or remove group owners - Azure Active Directory | Microsoft Docs
description: Instructions about how to add or remove group owners using Azure Active Directory. -+ Previously updated : 09/11/2018 Last updated : 08/17/2022 # Add or remove group owners in Azure Active Directory+ Azure Active Directory (Azure AD) groups are owned and managed by group owners. Group owners can be users or service principals, and are able to manage the group including membership. Only existing group owners or group-managing administrators can assign group owners. Group owners aren't required to be members of the group.
-When a group has no owner, group-managing administrators are still able to manage the group. It is recommended for every group to have at least one owner. Once owners are assigned to a group, the last owner of the group cannot be removed. Please make sure to select another owner before removing the last owner from the group.
+When a group has no owner, group-managing administrators are still able to manage the group. It's recommended that every group have at least one owner. Once owners are assigned to a group, the last owner of the group can't be removed. Make sure to select another owner before removing the last owner from the group.
## Add an owner to a group Below are instructions for adding a user as an owner to a group using the Azure AD portal. To add a service principal as an owner of a group, follow the instructions to do so using [PowerShell](/powershell/module/Azuread/Add-AzureADGroupOwner).
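Alongside the portal steps, owners can also be managed with the AzureAD PowerShell cmdlets linked above. A minimal sketch, with placeholder object IDs:

```powershell
# Assumes Connect-AzureAD has been run; IDs are placeholders.
# Add a user or service principal as a group owner
Add-AzureADGroupOwner -ObjectId "<group-object-id>" -RefObjectId "<owner-object-id>"

# Verify the current owners
Get-AzureADGroupOwner -ObjectId "<group-object-id>"
```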
Remove an owner from a group using Azure AD.
![User's information page with Remove option highlighted](media/active-directory-accessmanagement-managing-group-owners/remove-owner-info-blade.png)
- After you remove the owner, you can return to the **Owners** page and see the name has been removed from the list of owners.
+ After you remove the owner, you can return to the **Owners** page, and see the name has been removed from the list of owners.
## Next steps - [Managing access to resources with Azure Active Directory groups](active-directory-manage-groups.md)
active-directory Active Directory Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-architecture.md
Title: Architecture overview - Azure Active Directory | Microsoft Docs
description: Learn what an Azure Active Directory tenant is and how to manage Azure using Azure Active Directory. -+ Previously updated : 07/08/2022 Last updated : 08/17/2022
All directory *reads* are serviced from *secondary replicas*, which are at datac
Scalability is the ability of a service to expand to meet increasing performance demands. Write scalability is achieved by partitioning the data. Read scalability is achieved by replicating data from one partition to multiple secondary replicas distributed throughout the world.
-Requests from directory applications are routed to the datacenter that they are physically closest to. Writes are transparently redirected to the primary replica to provide read-write consistency. Secondary replicas significantly extend the scale of partitions because the directories are typically serving reads most of the time.
+Requests from directory applications are routed to the closest datacenter. Writes are transparently redirected to the primary replica to provide read-write consistency. Secondary replicas significantly extend the scale of partitions because the directories are typically serving reads most of the time.
Directory applications connect to the nearest datacenters. This connection improves performance, and therefore scaling out is possible. Since a directory partition can have many secondary replicas, secondary replicas can be placed closer to the directory clients. Only internal directory service components that are write-intensive target the active primary replica directly.
Azure AD's partition design is simplified compared to the enterprise AD design
#### Fault tolerance
-A system is more available if it is tolerant to hardware, network, and software failures. For each partition on the directory, a highly available master replica exists: The primary replica. Only writes to the partition are performed at this replica. This replica is being continuously and closely monitored, and writes can be immediately shifted to another replica (which becomes the new primary) if a failure is detected. During failover, there could be a loss of write availability typically of 1-2 minutes. Read availability is not affected during this time.
+A system is more available if it is tolerant to hardware, network, and software failures. For each partition on the directory, a highly available master replica exists: the primary replica. Only writes to the partition are performed at this replica. This replica is continuously and closely monitored, and writes can be immediately shifted to another replica (which becomes the new primary) if a failure is detected. During failover, there could be a loss of write availability, typically for 1-2 minutes. Read availability isn't affected during this time.
Read operations (which outnumber writes by many orders of magnitude) only go to secondary replicas. Since secondary replicas are idempotent, loss of any one replica in a given partition is easily compensated by directing the reads to another replica, usually in the same datacenter. #### Data durability
-A write is durably committed to at least two datacenters prior to it being acknowledged. This happens by first committing the write on the primary, and then immediately replicating the write to at least one other datacenter. This write action ensures that a potential catastrophic loss of the datacenter hosting the primary does not result in data loss.
+A write is durably committed to at least two datacenters prior to it being acknowledged. This happens by first committing the write on the primary, and then immediately replicating the write to at least one other datacenter. This write action ensures that a potential catastrophic loss of the datacenter hosting the primary doesn't result in data loss.
Azure AD maintains a zero [Recovery Time Objective (RTO)](https://en.wikipedia.org/wiki/Recovery_time_objective) to not lose data on failovers. This includes:
Azure AD's replicas are stored in datacenters located throughout the world. Fo
Azure AD operates across datacenters with the following characteristics: * Authentication, Graph, and other AD services reside behind the Gateway service. The Gateway manages load balancing of these services. It will fail over automatically if any unhealthy servers are detected using transactional health probes. Based on these health probes, the Gateway dynamically routes traffic to healthy datacenters.
-* For *reads*, the directory has secondary replicas and corresponding front-end services in an active-active configuration operating in multiple datacenters. In case of a failure of an entire datacenter, traffic will be automatically routed to a different datacenter.
-* For *writes*, the directory will fail over primary (master) replica across datacenters via planned (new primary is synchronized to old primary) or emergency failover procedures. Data durability is achieved by replicating any commit to at least two datacenters.
+* For *reads*, the directory has secondary replicas and corresponding front-end services in an active-active configuration operating in multiple datacenters. If a datacenter fails, traffic is automatically routed to a different datacenter.
+* For *writes*, the directory will fail over the primary replica across datacenters via planned (new primary is synchronized to old primary) or emergency failover procedures. Data durability is achieved by replicating any commit to at least two datacenters.
#### Data consistency
The directory model is one of eventual consistency. One typical problem with d
Azure AD provides read-write consistency for applications targeting a secondary replica by routing its writes to the primary replica, and synchronously pulling the writes back to the secondary replica.
-Application writes using the Microsoft Graph API of Azure AD are abstracted from maintaining affinity to a directory replica for read-write consistency. The Microsoft Graph API service maintains a logical session, which has affinity to a secondary replica used for reads; affinity is captured in a ΓÇ£replica tokenΓÇ¥ that the service caches using a distributed cache in the secondary replica datacenter. This token is then used for subsequent operations in the same logical session. To continue using the same logical session, subsequent requests must be routed to the same Azure AD datacenter. It is not possible to continue a logical session if the directory client requests are being routed to multiple Azure AD datacenters; if this happens then the client has multiple logical sessions which have independent read-write consistencies.
+Application writes using the Microsoft Graph API of Azure AD are abstracted from maintaining affinity to a directory replica for read-write consistency. The Microsoft Graph API service maintains a logical session, which has affinity to a secondary replica used for reads; affinity is captured in a "replica token" that the service caches using a distributed cache in the secondary replica datacenter. This token is then used for subsequent operations in the same logical session. To continue using the same logical session, subsequent requests must be routed to the same Azure AD datacenter. It isn't possible to continue a logical session if the directory client requests are being routed to multiple Azure AD datacenters; if this happens then the client has multiple logical sessions that have independent read-write consistencies.
>[!NOTE] >Writes are immediately replicated to the secondary replica to which the logical session's reads were issued. #### Service-level backup
-Azure AD implements daily backup of directory data and can use these backups to restore data in case of any service-wide issue.
+Azure AD implements daily backup of directory data and can use these backups to restore data if there is any service-wide issue.
The directory also implements soft deletes instead of hard deletes for selected object types. The tenant administrator can undo any accidental deletions of these objects within 30 days. For more information, see the [API to restore deleted objects](/graph/api/directory-deleteditems-restore).
The directory also implements soft deletes instead of hard deletes for selected
Running a high availability service requires world-class metrics and monitoring capabilities. Azure AD continually analyzes and reports key service health metrics and success criteria for each of its services. There is also continuous development and tuning of metrics and monitoring and alerting for each scenario, within each Azure AD service and across all services.
-If any Azure AD service is not working as expected, action is immediately taken to restore functionality as quickly as possible. The most important metric Azure AD tracks is how quickly live site issues can be detected and mitigated for customers. We invest heavily in monitoring and alerts to minimize time to detect (TTD Target: <5 minutes) and operational readiness to minimize time to mitigate (TTM Target: <30 minutes).
+If any Azure AD service isn't working as expected, action is immediately taken to restore functionality as quickly as possible. The most important metric Azure AD tracks is how quickly live site issues can be detected and mitigated for customers. We invest heavily in monitoring and alerts to minimize time to detect (TTD Target: <5 minutes) and operational readiness to minimize time to mitigate (TTM Target: <30 minutes).
#### Secure operations
-Using operational controls such as multi-factor authentication (MFA) for any operation, as well as auditing of all operations. In addition, using a just-in-time elevation system to grant necessary temporary access for any operational task-on-demand on an ongoing basis. For more information, see [The Trusted Cloud](https://azure.microsoft.com/support/trust-center).
+Azure AD uses operational controls such as multi-factor authentication (MFA) for any operation, and audits all operations. In addition, a just-in-time elevation system grants necessary temporary access for any operational task on demand on an ongoing basis. For more information, see [The Trusted Cloud](https://azure.microsoft.com/support/trust-center).
## Next steps
active-directory Active Directory Compare Azure Ad To Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-compare-azure-ad-to-ad.md
Previously updated : 12/23/2021 Last updated : 08/17/2022
active-directory Active Directory Data Storage Australia Newzealand https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-australia-newzealand.md
Title: Customer data storage for Australian and New Zealand customers - Azure AD
description: Learn about where Azure Active Directory stores customer-related data for its Australian and New Zealand customers. -+ Previously updated : 01/12/2022 Last updated : 08/17/2022 # Customer Data storage for Australian and New Zealand customers in Azure Active Directory
-Azure Active Directory (Azure AD) stores its Customer Data in a geographical location based on the country you provided when you signed up for a Microsoft Online service. Microsoft Online services include Microsoft 365 and Azure.
+Azure AD stores identity data in a location chosen based on the address provided by your organization when subscribing to a Microsoft service like Microsoft 365 or Azure. Microsoft Online services include Microsoft 365 and Azure.
For information about where Azure AD and other Microsoft services' data is located, see the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) section of the Microsoft Trust Center. From February 26, 2020, Microsoft began storing Azure ADΓÇÖs Customer Data for new tenants with an Australian or New Zealand billing address within the Australian datacenters.
-Additionally, certain Azure AD features do not yet support storage of Customer Data in Australia. Please go to the [Azure AD data map](https://msit.powerbi.com/view?r=eyJrIjoiYzEyZTc5OTgtNTdlZS00ZTVkLWExN2ItOTM0OWU4NjljOGVjIiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9), for specific feature information. For example, Microsoft Azure AD Multi-Factor Authentication stores Customer Data in the US and processes it globally. See [Data residency and customer data for Azure AD Multi-Factor Authentication](../authentication/concept-mfa-data-residency.md).
+Additionally, certain Azure AD features don't yet support storage of Customer Data in Australia. Go to the [Azure AD data map](https://msit.powerbi.com/view?r=eyJrIjoiYzEyZTc5OTgtNTdlZS00ZTVkLWExN2ItOTM0OWU4NjljOGVjIiwidCI6IjcyZjk4OGJmLTg2ZjEtNDFhZi05MWFiLTJkN2NkMDExZGI0NyIsImMiOjV9), for specific feature information. For example, Microsoft Azure AD Multi-Factor Authentication stores Customer Data in the US and processes it globally. See [Data residency and customer data for Azure AD Multi-Factor Authentication](../authentication/concept-mfa-data-residency.md).
> [!NOTE] > Microsoft products, services, and third-party applications that integrate with Azure AD have access to Customer Data. Evaluate each product, service, and application you use to determine how Customer Data is processed by that specific product, service, and application, and whether they meet your company's data storage requirements. For more information about Microsoft services' data residency, see the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) section of the Microsoft Trust Center.
active-directory Active Directory Data Storage Australia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-australia.md
Title: Identity data storage for Australian and New Zealand customers - Azure AD
description: Learn about where Azure Active Directory stores identity-related data for its Australian and New Zealand customers. -+ Previously updated : 12/13/2019 Last updated : 08/17/2022 # Identity data storage for Australian and New Zealand customers in Azure Active Directory
-Identity data is stored by Azure AD in a geographical location based on the address provided by your organization when subscribing for a Microsoft Online service such as Microsoft 365 and Azure. For information on where your Identity Customer Data is stored, you can use the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
+Azure AD stores identity data in a location chosen based on the address provided by your organization when subscribing to a Microsoft service like Microsoft 365 or Azure. For information on where your Identity Customer Data is stored, you can use the [Where is your data located?](https://www.microsoft.com/trustcenter/privacy/where-your-data-is-located) section of the Microsoft Trust Center.
> [!NOTE] > Services and applications that integrate with Azure AD have access to Identity Customer Data. Evaluate each service and application you use to determine how Identity Customer Data is processed by that specific service and application, and whether they meet your company's data storage requirements. For more information about Microsoft services' data residency, see the Where is your data located? section of the Microsoft Trust Center.
All other Azure AD services store customer data in global datacenters. To locate
## Microsoft Azure AD Multi-Factor Authentication (MFA)
-MFA stores Identity Customer Data in global datacenters. To learn more about the user information collected and stored by cloud-based Azure AD MFA and Azure MFA Server, see [Azure Multi-Factor Authentication user data collection](../authentication/concept-mfa-data-residency.md).
+MFA stores Identity Customer Data in global datacenters. To learn more about the user information collected and stored by cloud-based Azure AD MFA and Azure AD Multi-Factor Authentication Server, see [Azure Active Directory Multi-Factor Authentication user data collection](../authentication/concept-mfa-data-residency.md).
## Next steps+ For more information about any of the features and functionality described above, see these articles: - [What is Multi-Factor Authentication?](../authentication/concept-mfa-howitworks.md)
active-directory Active Directory Data Storage Eu https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-data-storage-eu.md
Title: Identity data storage for European customers - Azure AD
description: Learn about where Azure Active Directory stores identity-related data for its European customers. -+ Previously updated : 07/20/2022 Last updated : 08/17/2022 # Identity data storage for European customers in Azure Active Directory
-Identity data is stored by Azure AD in a geographical location based on the address provided by your organization when it subscribed for a Microsoft Online service such as Microsoft 365 and Azure. For information on where your identity data is stored, you can use the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) section of the Microsoft Trust Center.
+Azure AD stores identity data in a location chosen based on the address provided by your organization when subscribing to a Microsoft service like Microsoft 365 or Azure. For information on where your identity data is stored, you can use the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) section of the Microsoft Trust Center.
For customers who provided an address in Europe, Azure AD keeps most of the identity data within European datacenters. This document provides information on any data that is stored outside of Europe by Azure AD services.
For cloud-based Azure AD Multi-Factor Authentication, authentication is complete
* Device vendor-specific services, such as Apple Push Notifications, may be outside Europe. * Multi-factor authentication requests using OATH codes that originate from EU datacenters are validated in the EU.
-For more information about what user information is collected by Azure Multi-Factor Authentication Server (MFA Server) and cloud-based Azure AD MFA, see [Azure Multi-Factor Authentication user data collection](../authentication/howto-mfa-reporting-datacollection.md).
+For more information about what user information is collected by Azure Active Directory Multi-Factor Authentication Server (MFA Server) and cloud-based Azure AD MFA, see [Azure Active Directory Multi-Factor Authentication user data collection](../authentication/howto-mfa-reporting-datacollection.md).
## Microsoft Azure Active Directory B2B (Azure AD B2B)
For more info about federation in Microsoft Exchange server, see the [Federation
## Other considerations
-Services and applications that integrate with Azure AD have access to identity data. Evaluate each service and application you use to determine how identity data is processed by that specific service and application, and whether they meet your company's data storage requirements.
+Services and applications that integrate with Azure AD have access to identity data. Review how each service and application processes identity data, and verify that they meet your company's data storage requirements.
For more information about Microsoft services' data residency, see the [Where your data is located](https://www.microsoft.com/trust-center/privacy/data-location) section of the Microsoft Trust Center. ## Next steps+ For more information about any of the features and functionality described above, see these articles:+ - [What is Multi-Factor Authentication?](../authentication/concept-mfa-howitworks.md) - [Azure AD self-service password reset](../authentication/concept-sspr-howitworks.md)
active-directory Active Directory Get Started Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-get-started-premium.md
Title: Sign up for premium editions - Azure Active Directory| Microsoft Docs
description: Instructions about how to sign up for Azure Active Directory Premium editions. -+ Previously updated : 09/07/2017 Last updated : 08/17/2022
# Sign up for Azure Active Directory Premium editions+ You can purchase and associate Azure Active Directory (Azure AD) Premium editions with your Azure subscription. If you need to create a new Azure subscription, you'll also need to activate your licensing plan and Azure AD service access. Before you sign up for Active Directory Premium 1 or Premium 2, you must first determine which of your existing subscription or plan to use:
Before you sign up for Active Directory Premium 1 or Premium 2, you must first d
Signing up using your Azure subscription with previously purchased and activated Azure AD licenses automatically activates the licenses in the same directory. If that's not the case, you must still activate your license plan and your Azure AD access. For more information about activating your license plan, see [Activate your new license plan](#activate-your-new-license-plan). For more information about activating your Azure AD access, see [Activate your Azure AD access](#activate-your-azure-ad-access). ## Sign up using your existing Azure or Microsoft 365 subscription+ As an Azure or Microsoft 365 subscriber, you can purchase the Azure Active Directory Premium editions online. For detailed steps, see [Buy or remove licenses](/microsoft-365/commerce/licenses/buy-licenses?view=o365-worldwide&preserve-view=true). ## Sign up using your Enterprise Mobility + Security licensing plan+ Enterprise Mobility + Security is a suite composed of Azure AD Premium, Azure Information Protection, and Microsoft Intune. If you already have an EMS license, you can get started with Azure AD using one of these licensing options: For more information about EMS, see [Enterprise Mobility + Security web site](https://www.microsoft.com/cloud-platform/enterprise-mobility-security).
For more information about EMS, see [Enterprise Mobility + Security web site](ht
- Purchase [Enterprise Mobility + Security E3 licenses](https://signup.microsoft.com/Signup?OfferId=4BBA281F-95E8-4136-8B0F-037D6062F54C&ali=1) ## Sign up using your Microsoft Volume Licensing plan+ Through your Microsoft Volume Licensing plan, you can sign up for Azure AD Premium using one of these two programs, based on the number of licenses you want to get: - **For 250 or more licenses.** [Microsoft Enterprise Agreement](https://www.microsoft.com/en-us/licensing/licensing-programs/enterprise.aspx)
Through your Microsoft Volume Licensing plan, you can sign up for Azure AD Premi
For more information about volume licensing purchase options, see [How to purchase through Volume Licensing](https://www.microsoft.com/en-us/licensing/how-to-buy/how-to-buy.aspx). ## Activate your new license plan+ If you signed up using a new Azure AD license plan, you must activate it for your organization, using the confirmation email sent after purchase. ### To activate your license plan-- Open the confirmation email you received from Microsoft after you signed up, and then click either **Sign In** or **Sign Up**.+
+- Open the confirmation email you received from Microsoft after you signed up, and then select either **Sign In** or **Sign Up**.
![Confirmation email with sign in and sign up links](media/active-directory-get-started-premium/MOLSEmail.png)
If you signed up using a new Azure AD license plan, you must activate it for you
![Create account profile page, with sample information](media/active-directory-get-started-premium/MOLSAccountProfile.png)
-When you're done, you will see a confirmation box thanking you for activating the license plan for your tenant.
+When you're done, you'll see a confirmation box thanking you for activating the license plan for your tenant.
![Confirmation box with thank you](media/active-directory-get-started-premium/MOLSThankYou.png)
After your purchased licenses are provisioned in your directory, you'll receive
### To activate your Azure AD access
-1. Open the **Welcome email**, and then click **Sign In**.
+1. Open the **Welcome email**, and then select **Sign In**.
![Welcome email, with highlighted sign in link](media/active-directory-get-started-premium/AADEmail.png)
active-directory Active Directory Groups Create Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-create-azure-portal.md
Title: Create a basic group and add members - Azure Active Directory | Microsoft
description: Instructions about how to create a basic group using Azure Active Directory. -+ Previously updated : 06/05/2020 Last updated : 08/17/2022
# Create a basic group and add members using Azure Active Directory+ You can create a basic group using the Azure Active Directory (Azure AD) portal. For the purposes of this article, a basic group is added to a single resource by the resource owner (administrator) and includes specific members (employees) that need to access that resource. For more complex scenarios, including dynamic memberships and rule creation, see the [Azure Active Directory user management documentation](../enterprise-users/index.yml). ## Group and membership types+ There are several group and membership types. The following information explains each group and membership type and why they are used, to help you decide which options to use when you create a group. ### Group types:
There are several group and membership types. The following information explains
- **Microsoft 365**. Provides collaboration opportunities by giving members access to a shared mailbox, calendar, files, SharePoint site, and more. This option also lets you give people outside of your organization access to the group. A Microsoft 365 group can have only users as its members. Both users and service principals can be owners of a Microsoft 365 group. For more info about Microsoft 365 Groups, see [Learn about Microsoft 365 Groups](https://support.office.com/article/learn-about-office-365-groups-b565caa1-5c40-40ef-9915-60fdb2d97fa2). ### Membership types:+ - **Assigned.** Lets you add specific users to be members of this group and to have unique permissions. For the purposes of this article, we're using this option.-- **Dynamic user.** Lets you use dynamic membership rules to automatically add and remove members. If a member's attributes change, the system looks at your dynamic group rules for the directory to see if the member meets the rule requirements (is added) or no longer meets the rules requirements (is removed).
+- **Dynamic user.** Lets you use dynamic membership rules to automatically add and remove members. If a member's attributes change, the system looks at your directory's dynamic group rules to see if the member meets the rule requirements (is added) or no longer meets the rules requirements (is removed).
- **Dynamic device.** Lets you use dynamic group rules to automatically add and remove devices. If a device's attributes change, the system looks at your dynamic group rules for the directory to see if the device meets the rule requirements (is added) or no longer meets the rules requirements (is removed). > [!IMPORTANT]
You can create a basic group and add your members at the same time. To create a
## Turn off group welcome email
-When any new Microsoft 365 group is created, whether with dynamic or static membership, a welcome notification is sent to all users who are added to the group. When any attributes of a user or device change, all dynamic group rules in the organization are processed for potential membership changes. Users who are added then also receive the welcome notification. You can turn this behavior off in [Exchange PowerShell](/powershell/module/exchange/users-and-groups/Set-UnifiedGroup).
+When any new Microsoft 365 group is created, whether with dynamic or static membership, a welcome notification is sent to all users who are added to the group. When any user or device attributes change, all dynamic group rules in the organization are processed for potential membership changes. Users who are added then also receive the welcome notification. You can turn off this behavior in [Exchange PowerShell](/powershell/module/exchange/users-and-groups/Set-UnifiedGroup).
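As a rough illustration of the behavior described above, the following sketch suppresses the welcome notification for a single group. It assumes the Exchange Online PowerShell module is installed and connected with `Connect-ExchangeOnline`, and that a Microsoft 365 group named *MDM policy - West* exists; the group name is a placeholder.

```powershell
# Sketch only: assumes the Exchange Online PowerShell module and an existing
# Microsoft 365 group named "MDM policy - West" (hypothetical name).
Connect-ExchangeOnline

# Suppress the welcome email sent to newly added members of this group.
Set-UnifiedGroup -Identity "MDM policy - West" -UnifiedGroupWelcomeMessageEnabled:$false
```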
## Next steps
active-directory Active Directory Groups Delete Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-delete-group.md
Title: Delete a group - Azure Active Directory | Microsoft Docs
description: Instructions about how to delete a group using Azure Active Directory. -+ Previously updated : 08/29/2018 Last updated : 08/17/2022
# Delete a group using Azure Active Directory+ You can delete an Azure Active Directory (Azure AD) group for any number of reasons, but typically it will be because you: - Set the **Group type** to the wrong option.
You can delete an Azure Active Directory (Azure AD) group for any number of reas
- No longer need the group. ## To delete a group+ 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory. 2. Select **Azure Active Directory**, and then select **Groups**.
active-directory Active Directory Groups Members Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-members-azure-portal.md
Title: Add or remove group members - Azure Active Directory | Microsoft Docs
description: Instructions about how to add or remove members from a group using Azure Active Directory. -+ Previously updated : 08/23/2018 Last updated : 08/17/2022
active-directory Active Directory Groups Membership Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-membership-azure-portal.md
Title: Add or remove a group from another group - Azure AD
description: Instructions about how to add or remove a group from another group using Azure Active Directory. -+ Previously updated : 10/19/2018 Last updated : 08/17/2022
# Add or remove a group from another group using Azure Active Directory+ This article helps you to add and remove a group from another group using Azure Active Directory. >[!Note] >If you're trying to delete the parent group, see [How to update or delete a group and its members](active-directory-groups-delete-group.md). ## Add a group to another group+ You can add an existing Security group to another existing Security group (also known as nested groups), creating a member group (subgroup) and a parent group. The member group inherits the attributes and properties of the parent group, saving you configuration time.
->[!Important]
+>[!IMPORTANT]
>We don't currently support:<ul><li>Adding groups to a group synced with on-premises Active Directory.</li><li>Adding Security groups to Microsoft 365 groups.</li><li>Adding Microsoft 365 groups to Security groups or other Microsoft 365 groups.</li><li>Assigning apps to nested groups.</li><li>Applying licenses to nested groups.</li><li>Adding distribution groups in nesting scenarios.</li><li>Adding security groups as members of mail-enabled security groups</li><li> Adding groups as members of a role-assignable group.</li></ul> ### To add a group as a member of another group
You can add an existing Security group to another existing Security group (also
3. On the **Groups - All groups** page, search for and select the group that's to become a member of another group. For this exercise, we're using the **MDM policy - West** group.
- >[!Note]
+ >[!NOTE]
>You can add your group as a member to only one group at a time. Additionally, the **Select Group** box filters the display based on matching your entry to any part of a user or device name. However, wildcard characters aren't supported. ![Groups - All groups page with MDM policy - West group selected](media/active-directory-groups-membership-azure-portal/group-all-groups-screen.png)
You can add an existing Security group to another existing Security group (also
6. For a more detailed view of the group and member relationship, select the group name (**MDM policy - All org**) and take a look at the **MDM policy - West** page details. ## Remove a group from another group+ You can remove an existing Security group from another Security group. However, removing the group also removes any inherited attributes and properties for its members. ### To remove a member group from another group+ 1. On the **Groups - All groups** page, search for and select the group that's to be removed as a member of another group. For this exercise, we're again using the **MDM policy - West** group. 2. On the **MDM policy - West overview** page, select **Group memberships**.
You can remove an existing Security group from another Security group. However,
![Group membership page showing both the member and the group details](media/active-directory-groups-membership-azure-portal/group-membership-remove.png) ## Additional information+ These articles provide additional information on Azure Active Directory. - [View your groups and members](active-directory-groups-view-azure-portal.md)
active-directory Active Directory Groups Settings Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-settings-azure-portal.md
Title: Edit your group information - Azure Active Directory | Microsoft Docs
description: Instructions about how to edit your group's information using Azure Active Directory. -+ Previously updated : 08/27/2018 Last updated : 08/17/2022
Using Azure Active Directory (Azure AD), you can edit a group's settings, including updating its name, description, or membership type. ## To edit your group settings+ 1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator account for the directory. 2. Select **Azure Active Directory**, and then select **Groups**.
Using Azure Active Directory (Azure AD), you can edit a group's settings, includ
- **Object ID.** You can't change the Object ID, but you can copy it to use in your PowerShell commands for the group. For more info about using PowerShell cmdlets, see [Azure Active Directory cmdlets for configuring group settings](../enterprise-users/groups-settings-v2-cmdlets.md). ## Next steps+ These articles provide additional information on Azure Active Directory. - [View your groups and members](active-directory-groups-view-azure-portal.md)
active-directory Active Directory Groups View Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-groups-view-azure-portal.md
Title: Quickstart - View groups & members - Azure AD
description: Instructions about how to search for and view your organization's groups and their assigned members. -+ Previously updated : 09/24/2018 Last updated : 08/17/2022
# Quickstart: View your organization's groups and members in Azure Active Directory+ You can view your organization's existing groups and group members using the Azure portal. Groups are used to manage users (members) that all need the same access and permissions for potentially restricted apps and services. In this quickstart, you'll view all of your organization's existing groups and view the assigned members.
In this quickstart, you'll view all of your organization's existing groups and
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. ## Prerequisites+ Before you begin, you'll need to: - Create an Azure Active Directory tenant. For more information, see [Access the Azure Active Directory portal and create a new tenant](active-directory-access-create-new-tenant.md). ## Sign in to the Azure portal+ You must sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. ## Create a new group + Create a new group, named _MDM policy - West_. For more information about creating a group, see [How to create a basic group and add members](active-directory-groups-create-azure-portal.md). 1. Select **Azure Active Directory**, **Groups**, and then select **New group**.
Create a new user, named _Alain Charon_. A user must exist before being added as
3. Copy the auto-generated password provided in the **Password** box, and then select **Create**. ## Add a group member+ Now that you have a group and a user, you can add _Alain Charon_ as a member to the _MDM policy - West_ group. For more information about adding group members, see [How to add or remove group members](active-directory-groups-members-azure-portal.md). 1. Select **Azure Active Directory** > **Groups**.
active-directory Active Directory How Subscriptions Associated Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-how-subscriptions-associated-directory.md
Title: Add an existing Azure subscription to your tenant - Azure AD
description: Instructions about how to add an existing Azure subscription to your Azure Active Directory (Azure AD) tenant. -+ Previously updated : 03/05/2021 Last updated : 08/17/2022
Before you can associate or add your subscription, do the following tasks:
- Users that have been assigned roles using Azure RBAC will lose their access. - Service Administrator and Co-Administrators will lose access.
- - If you have any key vaults, they'll be inaccessible and you'll have to fix them after association.
+ - If you have any key vaults, they'll be inaccessible, and you'll have to fix them after association.
- If you have any managed identities for resources such as Virtual Machines or Logic Apps, you must re-enable or recreate them after the association. - If you have a registered Azure Stack, you'll have to re-register it after association. - For more information, see [Transfer an Azure subscription to a different Azure AD directory](../../role-based-access-control/transfer-subscription.md).
To associate an existing subscription to your Azure AD directory, follow these s
:::image type="content" source="media/active-directory-how-subscriptions-associated-directory/edit-directory-ui.png" alt-text="Screenshot that shows the Change the directory page with a sample directory and the Change button highlighted.":::
- After the directory is changed for the subscription, you will get a success message.
+ After the directory is changed for the subscription, you'll get a success message.
1. Select **Switch directories** on the subscription page to go to your new directory.
active-directory Active Directory How To Find Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-how-to-find-tenant.md
Title: How to find your tenant ID - Azure Active Directory
description: Instructions about how to find an Azure Active Directory tenant ID for an existing Azure subscription. -+ Previously updated : 10/30/2020 Last updated : 08/17/2022
For Microsoft 365 CLI, use the cmdlet **tenant id** as shown in the following ex
m365 tenant id get ```
-For more information, see the Microsoft 365 [tenant id get](https://pnp.github.io/cli-microsoft365/cmd/tenant/id/id-get/) command reference.
+For more information, see the Microsoft 365 [tenant ID get](https://pnp.github.io/cli-microsoft365/cmd/tenant/id/id-get/) command reference.
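If you prefer PowerShell over the Microsoft 365 CLI, a comparable sketch (assuming the AzureAD PowerShell module is installed and you've already run `Connect-AzureAD`) reads the tenant ID from the tenant detail object:

```powershell
# Sketch only: assumes the AzureAD PowerShell module is installed.
Connect-AzureAD

# ObjectId on the tenant detail object is the tenant ID (a GUID).
(Get-AzureADTenantDetail).ObjectId
```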
## Next steps
active-directory Active Directory Licensing Whatis Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-licensing-whatis-azure-portal.md
description: Learn about Azure Active Directory group-based licensing, including
keywords: Azure AD licensing -+ Previously updated : 10/29/2018 Last updated : 08/17/2022
Here are the main features of group-based licensing:
- A user can be a member of multiple groups with license policies specified. A user can also have some licenses that were directly assigned, outside of any groups. The resulting user state is a combination of all assigned product and service licenses. If a user is assigned the same license from multiple sources, the license will be consumed only once. -- In some cases, licenses cannot be assigned to a user. For example, there might not be enough available licenses in the tenant, or conflicting services might have been assigned at the same time. Administrators have access to information about users for whom Azure AD could not fully process group licenses. They can then take corrective action based on that information.
+- In some cases, licenses can't be assigned to a user. For example, there might not be enough available licenses in the tenant, or conflicting services might have been assigned at the same time. Administrators have access to information about users for whom Azure AD couldn't fully process group licenses. They can then take corrective action based on that information.
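One way to surface the groups that hit these processing errors is the older MSOnline module. The following is only a sketch, under the assumption that `Get-MsolGroup` and its `-HasLicenseErrorsOnly` parameter are available in your environment:

```powershell
# Sketch only: assumes the MSOnline module and Connect-MsolService.
Connect-MsolService

# List groups for which Azure AD couldn't fully process group-based licenses.
Get-MsolGroup -HasLicenseErrorsOnly $true | Select-Object ObjectId, DisplayName
```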
## Your feedback is welcome!
active-directory Active Directory Manage Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-manage-groups.md
Title: Manage app & resource access using groups - Azure AD
description: Learn about how to manage access to your organization's cloud-based apps, on-premises apps, and resources using Azure Active Directory groups. -+ Previously updated : 01/08/2020 Last updated : 08/17/2022
There are four ways to assign resource access rights to your users:
## Can users join groups without being assigned? The group owner can let users find their own groups to join, instead of assigning them. The owner can also set up the group to automatically accept all users that join or to require approval.
-After a user requests to join a group, the request is forwarded to the group owner. If it's required, the owner can approve the request and the user is notified of the group membership. However, if you have multiple owners and one of them disapproves, the user is notified, but isn't added to the group. For more information and instructions about how to let your users request to join groups, see [Set up Azure AD so users can request to join groups](../enterprise-users/groups-self-service-management.md)
+After a user requests to join a group, the request is forwarded to the group owner. If it's required, the owner can approve the request, and the user is notified of the group membership. However, if you have multiple owners and one of them disapproves, the user is notified, but isn't added to the group. For more information and instructions about how to let your users request to join groups, see [Set up Azure AD so users can request to join groups](../enterprise-users/groups-self-service-management.md)
## Next steps Now that you have an introduction to access management using groups, you can start to manage your resources and apps.
active-directory Active Directory Ops Guide Auth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-auth.md
Previously updated : 10/31/2019 Last updated : 08/17/2022
active-directory Active Directory Ops Guide Govern https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-govern.md
Previously updated : 10/31/2019 Last updated : 08/17/2022
There are eight aspects to a secure Identity governance. This list will help you
## Next steps
-Get started with the [Azure AD operational checks and actions](active-directory-ops-guide-ops.md).
+Get started with the [Azure AD operational checks and actions](active-directory-ops-guide-ops.md).
active-directory Active Directory Ops Guide Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-iam.md
Previously updated : 10/31/2019 Last updated : 08/17/2022
There are five aspects to a secure Identity infrastructure. This list will help
## Next steps
-Get started with the [Authentication management checks and actions](active-directory-ops-guide-auth.md).
+Get started with the [Authentication management checks and actions](active-directory-ops-guide-auth.md).
active-directory Active Directory Ops Guide Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-intro.md
Previously updated : 10/31/2019 Last updated : 08/17/2022
active-directory Active Directory Ops Guide Ops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-ops-guide-ops.md
Previously updated : 10/31/2019 Last updated : 08/17/2022
There are seven aspects to a secure Identity infrastructure. This list will help
## Next steps
-Refer to the [Azure AD deployment plans](active-directory-deployment-plans.md) for implementation details on any capabilities you haven't deployed.
+Refer to the [Azure AD deployment plans](active-directory-deployment-plans.md) for implementation details on any capabilities you haven't deployed.
active-directory Active Directory Properties Area https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-properties-area.md
Title: Add your organization's privacy info - Azure Active Directory | Microsoft
description: Instructions about how to add your organization's privacy info to the Azure Active Directory Properties area. -+ Previously updated : 04/17/2018 Last updated : 08/17/2022
You add your organization's privacy information in the **Properties** area of Az
- **Technical contact.** Type the email address for the person to contact for technical support within your organization.
- - **Global privacy contact.** Type the email address for the person to contact for inquiries about personal data privacy. This person is also who Microsoft contacts if there's a data breach related to Azure Active Directory services . If there's no person listed here, Microsoft contacts your global administrators. For Microsoft 365 related privacy incident notifications please see [Microsoft 365 Message center FAQs](/microsoft-365/admin/manage/message-center?preserve-view=true&view=o365-worldwide#frequently-asked-questions)
+ - **Global privacy contact.** Type the email address for the person to contact for inquiries about personal data privacy. This person is also who Microsoft contacts if there's a data breach related to Azure Active Directory services. If there's no person listed here, Microsoft contacts your global administrators. For Microsoft 365 related privacy incident notifications, see [Microsoft 365 Message center FAQs](/microsoft-365/admin/manage/message-center?preserve-view=true&view=o365-worldwide#frequently-asked-questions)
- **Privacy statement URL.** Type the link to your organization's document that describes how your organization handles both internal and external guests' data privacy.
active-directory Active Directory Troubleshooting Support Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-troubleshooting-support-howto.md
Title: Find help and open a support ticket - Azure Active Directory | Microsoft
description: Instructions about how to get help and open a support ticket for Azure Active Directory. -+ Previously updated : 08/28/2017 Last updated : 08/17/2022
active-directory Active Directory Users Assign Role Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md
Title: Assign Azure AD roles to users - Azure Active Directory | Microsoft Docs
description: Instructions about how to assign administrator and non-administrator roles to users with Azure Active Directory. -+ Previously updated : 08/31/2020 Last updated : 08/17/2022
active-directory Active Directory Users Profile Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-profile-azure-portal.md
Title: Add or update user profile information - Azure AD
description: Instructions about how to add information to a user's profile in Azure Active Directory, including a picture and job details. -+ Previously updated : 06/10/2021 Last updated : 08/17/2022
active-directory Active Directory Users Reset Password Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-reset-password-azure-portal.md
Title: Reset a user's password - Azure Active Directory | Microsoft Docs
description: Instructions about how to reset a user's password using Azure Active Directory. -+ ms.assetid: fad5624b-2f13-4abc-b3d4-b347903a8f16 Previously updated : 06/07/2022 Last updated : 08/17/2022
active-directory Active Directory Users Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-restore.md
Title: Restore or permanently remove recently deleted user - Azure AD
description: How to view restorable users, restore a deleted user, or permanently delete a user with Azure Active Directory. -+ Previously updated : 10/23/2020 Last updated : 08/17/2022
# Restore or remove a recently deleted user using Azure Active Directory+ After you delete a user, the account remains in a suspended state for 30 days. During that 30-day window, the user account can be restored, along with all its properties. After that 30-day window passes, the permanent deletion process is automatically started. You can view your restorable users, restore a deleted user, or permanently delete a user using Azure Active Directory (Azure AD) in the Azure portal.
You can view your restorable users, restore a deleted user, or permanently delet
>Neither you nor Microsoft customer support can restore a permanently deleted user. ## Required permissions+ You must have one of the following roles to restore and permanently delete users. - Global administrator
You must have one of the following roles to restore and permanently delete users
- User administrator ## View your restorable users+ You can see all the users that were deleted less than 30 days ago. These users can be restored. ### To view your restorable users+ 1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the organization. 2. Select **Azure Active Directory**, select **Users**, and then select **Deleted users**.
You can see all the users that were deleted less than 30 days ago. These users c
## Restore a recently deleted user
-When a user account is deleted from the organization, the account is in a suspended state and all the related organization information is preserved. When you restore a user, this organization information is also restored.
+When a user account is deleted from the organization, the account is in a suspended state. All of the account's organization information is preserved. When you restore a user, this organization information is also restored.
-> [!Note]
+> [!NOTE]
+ > Once a user is restored, licenses that were assigned to the user at the time of deletion are also restored even if there are no seats available for those licenses. If you are then consuming more licenses than you purchased, your organization could be temporarily out of compliance for license usage. ### To restore a user
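As an alternative to the portal steps, the restore can also be sketched in PowerShell, assuming the AzureAD module is connected and you already know the deleted user's object ID (the GUID below is a hypothetical placeholder):

```powershell
# Sketch only: assumes the AzureAD PowerShell module and Connect-AzureAD.
# The object ID is a hypothetical placeholder for the soft-deleted user.
Restore-AzureADMSDeletedDirectoryObject -Id "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
```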
active-directory Active Directory Whatis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-whatis.md
Title: What is Azure Active Directory?
description: Learn about Azure Active Directory, including terminology, available licenses, and a list of associated features. -+ Previously updated : 01/27/2022 Last updated : 08/17/2022
active-directory Add Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-custom-domain.md
Title: Add your custom domain - Azure Active Directory | Microsoft Docs
description: Instructions about how to add a custom domain using Azure Active Directory. -+ Previously updated : 10/25/2019 Last updated : 08/17/2022
active-directory Add Users Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/add-users-azure-active-directory.md
Previously updated : 02/16/2022 Last updated : 08/17/2022
active-directory Azure Active Directory Parallel Identity Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-active-directory-parallel-identity-options.md
na Previously updated : 11/18/2021 Last updated : 08/17/2022
active-directory Concept Fundamentals Mfa Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/concept-fundamentals-mfa-get-started.md
Last updated 03/18/2020
-+
active-directory Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/customize-branding.md
Title: Add branding to your organization's sign-in page - Azure AD
description: Instructions about how to add your organization's branding to the Azure Active Directory sign-in page. -+ Previously updated : 07/03/2021 Last updated : 08/17/2022
# Add branding to your organization's Azure Active Directory sign-in page+ Use your organization's logo and custom color schemes to provide a consistent look-and-feel on your Azure Active Directory (Azure AD) sign-in pages. Your sign-in pages appear when users sign in to your organization's web-based apps, such as Microsoft 365, which uses Azure AD as your identity provider. >[!NOTE] >Adding custom branding requires you to have either Azure Active Directory Premium 1, Premium 2, or Office 365 (for Office 365 apps) licenses. For more information about licensing and editions, see [Sign up for Azure AD Premium](active-directory-get-started-premium.md).<br><br>Azure AD Premium editions are available for customers in China using the worldwide instance of Azure Active Directory. Azure AD Premium editions aren't currently supported in the Azure service operated by 21Vianet in China. For more information, talk to us using the [Azure Active Directory Forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789). ## Customize your Azure AD sign-in page+ You can customize your Azure AD sign-in pages, which appear when users sign in to your organization's tenant-specific apps, such as `https://outlook.com/contoso.com`, or when passing a domain variable, such as `https://passwordreset.microsoftonline.com/?whr=contoso.com`.
-Your custom branding won't immediately appear when your users go to sites such as, www\.office.com. Instead, the user has to sign-in before your customized branding appears. After the user has signed in, the branding may take 15 minutes or longer to appear.
+Your custom branding won't immediately appear when your users go to sites such as www\.office.com. Instead, the user has to sign in before your customized branding appears. After the user has signed in, the branding may take 15 minutes or longer to appear.
> [!NOTE] > **All branding elements are optional and will remain default when unchanged.** For example, if you specify a banner logo with no background image, the sign-in page will show your logo with a default background image from the destination site such as Microsoft 365.<br><br>Additionally, sign-in page branding doesn't carry over to personal Microsoft accounts. If your users or business guests sign in using a personal Microsoft account, the sign-in page won't reflect the branding of your organization. ### To configure your branding for the first time+ 1. Sign in to the [Azure portal](https://portal.azure.com/) using a Global administrator account for the directory. 2. Select **Azure Active Directory**, and then select **Company branding**, and then select **Configure**.
Your custom branding won't immediately appear when your users go to sites such a
- **Sign-in page background image.** Select a .png or .jpg image file to appear as the background for your sign-in pages. The image will be anchored to the center of the browser, and will scale to the size of the viewable space. You can't select an image larger than 1920x1080 pixels in size or that has a file size more than 300,000 bytes.
- It's recommended to use images without a strong subject focus, e.g., an opaque white box appears in the center of the screen, and could cover any part of the image depending on the dimensions of the viewable space.
+ It's recommended to use images without a strong subject focus; for example, an opaque white box appears in the center of the screen and could cover any part of the image, depending on the dimensions of the viewable space.
- **Banner logo.** Select a .png or .jpg version of your logo to appear on the sign-in page after the user enters a username and on the **My Apps** portal page.
- The image can't be taller than 60 pixels or wider than 280 pixels, and the file shouldn't be larger than 10KB. We recommend using a transparent image since the background might not match your logo background. We also recommend not adding padding around the image or it might make your logo look small.
+ The image can't be taller than 60 pixels or wider than 280 pixels, and the file shouldn't be larger than 10 KB. We recommend using a transparent image since the background might not match your logo background. We also recommend not adding padding around the image or it might make your logo look small.
- **Username hint.** Type the hint text that appears to users if they forget their username. This text must be Unicode, without links or code, and can't exceed 64 characters. If guests sign in to your app, we suggest not adding this hint.
You can't change your original configuration's language from your default langua
![Contoso - Company branding page, with the new language configuration shown](media/customize-branding/company-branding-french-config.png) ## Add your custom branding to pages
-Add your custom branding to pages by modifying the end of the URL with the text, `?whr=yourdomainname`. This specific modification works on different types of pages, including the Multi-Factor Authentication (MFA) setup page, the Self-service Password Reset (SSPR) setup page, and the sign in page.
+Add your custom branding to pages by modifying the end of the URL with the text, `?whr=yourdomainname`. This specific modification works on different types of pages, including the Multi-Factor Authentication (MFA) setup page, the Self-service Password Reset (SSPR) setup page, and the sign-in page.
Whether an application supports customized URLs for branding depends on the specific application, and should be checked before attempting to add custom branding to a page.
active-directory License Users Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/license-users-groups.md
Title: Assign or remove licenses - Azure Active Directory | Microsoft Docs
description: Instructions about how to assign or remove Azure Active Directory licenses from your users or groups. --+ ms.assetid: f8b932bc-8b4f-42b5-a2d3-f2c076234a78 Previously updated : 12/14/2020 Last updated : 08/17/2022
# Assign or remove licenses in the Azure Active Directory portal
-Many Azure Active Directory (Azure AD) services require you to license each of your users or groups (and associated members) for that service. Only users with active licenses will be able to access and use the licensed Azure AD services for which that's true. Licenses are applied per tenant and do not transfer to other tenants.
+Many Azure Active Directory (Azure AD) services require you to license each of your users or groups (and associated members) for that service. Only users with active licenses can access and use those licensed Azure AD services. Licenses are applied per tenant and don't transfer to other tenants.
## Available license plans
There are several license plans available for the Azure AD service, including:
For specific information about each license plan and the associated licensing details, see [What license do I need?](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing). To sign up for Azure AD Premium license plans, see [Sign up for Azure AD Premium](./active-directory-get-started-premium.md).
-Not all Microsoft services are available in all locations. Before a license can be assigned to a group, you must specify the **Usage location** for all members. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. Any user whose usage location is not specified inherits the location of the Azure AD organization.
+Not all Microsoft services are available in all locations. Before a license can be assigned to a group, you must specify the **Usage location** for all members. You can set this value in the **Azure Active Directory &gt; Users &gt; Profile &gt; Settings** area in Azure AD. Any user whose usage location isn't specified inherits the location of the Azure AD organization.
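Because a missing usage location blocks assignment, it can help to set it explicitly before licensing a group's members. A minimal sketch, assuming the AzureAD PowerShell module is connected and using a hypothetical user principal name:

```powershell
# Sketch only: assumes the AzureAD PowerShell module and Connect-AzureAD;
# the user principal name is hypothetical.
Set-AzureADUser -ObjectId "alain@contoso.com" -UsageLocation "US"

# Confirm the value that group-based licensing will evaluate.
Get-AzureADUser -ObjectId "alain@contoso.com" | Select-Object DisplayName, UsageLocation
```

Setting the usage location up front avoids the inheritance behavior described above for members whose location isn't specified.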
## View license plans and plan details
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
This project has two primary initiatives. The first is to plan and implement a V
For more information, see:
-* [Deploy Azure AD joined VMs in Azure Virtual Desktop - Azure](/azure/virtual-desktop/deploy-azure-ad-joined-vm)
+* [Deploy Azure AD joined VMs in Azure Virtual Desktop - Azure](../../virtual-desktop/deploy-azure-ad-joined-vm.md)
* [Windows 365 planning guide](/windows-365/enterprise/planning-guide)
Azure AD Domain Services allows you to migrate application servers to the cloud
[Establish an Azure AD footprint](road-to-the-cloud-establish.md)
-[Implement a cloud-first approach](road-to-the-cloud-implement.md)
+[Implement a cloud-first approach](road-to-the-cloud-implement.md)
active-directory Secure With Azure Ad Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-best-practices.md
When designing isolated environments, it's important to consider the following p
* **Use only modern authentication** - Applications deployed in isolated environments must use claims-based modern authentication (for example, SAML, * Auth, OAuth2, and OpenID Connect) to use capabilities such as federation, Azure AD B2B collaboration, delegation, and the consent framework. This way, legacy applications that have dependency on legacy authentication methods such as NT LAN Manager (NTLM) won't carry forward in isolated environments.
-* **Enforce strong authentication** - Strong authentication must always be used when accessing the isolated environment services and infrastructure. Whenever possible, [passwordless authentication](/azure/active-directory/authentication/concept-authentication-passwordless) such as [Windows for Business Hello](/windows/security/identity-protection/hello-for-business/hello-overview) or a [FIDO2 security keys](/azure/active-directory/authentication/howto-authentication-passwordless-security-key)) should be used.
+* **Enforce strong authentication** - Strong authentication must always be used when accessing the isolated environment services and infrastructure. Whenever possible, [passwordless authentication](../authentication/concept-authentication-passwordless.md) such as [Windows Hello for Business](/windows/security/identity-protection/hello-for-business/hello-overview) or [FIDO2 security keys](../authentication/howto-authentication-passwordless-security-key.md) should be used.
* **Deploy secure workstations** - [Secure workstations](/security/compass/privileged-access-devices) provide the mechanism to ensure that the platform and the identity that platform represents is properly attested and secured against exploitation. Two other approaches to consider are:
Provision [emergency access accounts](../roles/security-emergency-access.md) for
Use [Azure managed identities](../managed-identities-azure-resources/overview.md) for Azure resources that require a service identity. Check the [list of services that support managed identities](../managed-identities-azure-resources/managed-identities-status.md) when designing your Azure solutions.
-If managed identities aren't supported or not possible, consider [provisioning service principal objects](/azure/active-directory/develop/app-objects-and-service-principals).
+If managed identities aren't supported or aren't feasible, consider [provisioning service principal objects](../develop/app-objects-and-service-principals.md).
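Where a managed identity isn't an option, a service principal can be provisioned ahead of time. The following is a sketch rather than a prescribed pattern, assuming the Az PowerShell module and a hypothetical display name:

```powershell
# Sketch only: assumes the Az PowerShell module and Connect-AzAccount;
# the display name is hypothetical.
$sp = New-AzADServicePrincipal -DisplayName "contoso-workload-identity"

# Inspect the returned object for the application (client) ID the workload will authenticate with.
$sp
```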
### Hybrid service accounts
Below are some specific recommendations for Azure solutions. For general guidanc
* Define Conditional Access policies for [security information registration](../conditional-access/howto-conditional-access-policy-registration.md) that reflects a secure root of trust process on-premises (for example, for workstations in physical locations, identifiable by IP addresses, that employees must visit in person for verification).
-* Consider managing Conditional Access policies at scale with automation using [MS Graph CA API](/azure/active-directory/conditional-access/howto-conditional-access-apis)). For example, you can use the API to configure, manage, and monitor CA policies consistently across tenants.
+* Consider managing Conditional Access policies at scale with automation using [MS Graph CA API](../conditional-access/howto-conditional-access-apis.md). For example, you can use the API to configure, manage, and monitor CA policies consistently across tenants.
* Consider using Conditional Access to restrict workload identities. Create a policy to limit or better control access based on location or other relevant circumstances.
Below are some considerations when designing a governed subscription lifecycle p
## Operations
-The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/), [Azure Security Benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](/azure/active-directory/fundamentals/active-directory-ops-guide-ops) for detailed guidance to operate individual environments.
+The following are additional operational considerations for Azure AD, specific to multiple isolated environments. Check the [Azure Cloud Adoption Framework](/azure/cloud-adoption-framework/manage/), [Azure Security Benchmark](/security/benchmark/azure/) and [Azure AD Operations guide](./active-directory-ops-guide-ops.md) for detailed guidance to operate individual environments.
### Cross-environment roles and responsibilities
The following scenarios must be explicitly monitored and investigated:
* Assignment to Azure resources using dedicated accounts for MCA billing tasks.
-* **Privileged role activity** - Configure and review security [alerts generated by Azure AD PIM](/azure/active-directory/privileged-identity-management/pim-how-to-configure-security-alerts). If locking down direct RBAC assignments isn't fully enforceable with technical controls (for example, Owner role has to be granted to product teams to do their job), then monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly to access the subscription with Azure RBAC.
+* **Privileged role activity** - Configure and review security [alerts generated by Azure AD PIM](../privileged-identity-management/pim-how-to-configure-security-alerts.md). If locking down direct RBAC assignments isn't fully enforceable with technical controls (for example, Owner role has to be granted to product teams to do their job), then monitor direct assignment of privileged roles outside PIM by generating alerts whenever a user is assigned directly to access the subscription with Azure RBAC.
* **Classic role assignments** - Organizations should use the modern Azure RBAC role infrastructure instead of the classic roles. As a result, the following events should be monitored:
Similarly, Azure Monitor can be integrated with ITSM systems through the [IT Ser
* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
-* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
+* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
active-directory Secure With Azure Ad Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-fundamentals.md
Non-production environments are commonly referred to as sandbox environments.
* Devices
-**Human identities** are user objects that generally represent people in an organization. These identities are either created and managed directly in Azure AD or are synchronized from an on-premises Active Directory to Azure AD for a given organization. These types of identities are referred to as **local identities**. There can also be user objects invited from a partner organization or a social identity provider using [Azure AD B2B collaboration](/azure/active-directory/external-identities/what-is-b2b). In this content, we refer to these types of identity as **external identities**.
+**Human identities** are user objects that generally represent people in an organization. These identities are either created and managed directly in Azure AD or are synchronized from an on-premises Active Directory to Azure AD for a given organization. These types of identities are referred to as **local identities**. There can also be user objects invited from a partner organization or a social identity provider using [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). In this content, we refer to these types of identity as **external identities**.
-**Non-human identities** include any identity not associated with a human. This type of identity is an object such as an application that requires an identity to run. In this content, we refer to this type of identity as a **workload identity**. Various terms are used to describe this type of identity, including [application objects and service principals](/azure/marketplace/manage-aad-apps).
+**Non-human identities** include any identity not associated with a human. This type of identity is an object such as an application that requires an identity to run. In this content, we refer to this type of identity as a **workload identity**. Various terms are used to describe this type of identity, including [application objects and service principals](../../marketplace/manage-aad-apps.md).
* **Application object**. An Azure AD application is defined by its one and only application object. The object resides in the Azure AD tenant where the application registered. The tenant is known as the application's "home" tenant.
Non-production environments are commonly referred to as sandbox environments.
* **Multi-tenant** applications allow identities from any Azure AD tenant to authenticate.
-* **Service principal object**. Although there are [exceptions](/azure/marketplace/manage-aad-apps), application objects can be considered the *definition* of an application. Service principal objects can be considered an instance of an application. Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
+* **Service principal object**. Although there are [exceptions](../../marketplace/manage-aad-apps.md), application objects can be considered the *definition* of an application. Service principal objects can be considered an instance of an application. Service principals generally reference an application object, and one application object can be referenced by multiple service principals across directories.
**Service principal objects** are also directory identities that can perform tasks independently from human intervention. The service principal defines the access policy and permissions for a user or application in the Azure AD tenant. This mechanism enables core features such as authentication of the user or application during sign-in and authorization during resource access.
-Azure AD allows application and service principal objects to authenticate with a password (also known as an application secret), or with a certificate. The use of passwords for service principals is discouraged and [we recommend using a certificate](/azure/active-directory/develop/howto-create-service-principal-portal) whenever possible.
+Azure AD allows application and service principal objects to authenticate with a password (also known as an application secret), or with a certificate. The use of passwords for service principals is discouraged and [we recommend using a certificate](../develop/howto-create-service-principal-portal.md) whenever possible.
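To illustrate the certificate recommendation, here's a sketch that adds a certificate (rather than a password) as a key credential on an existing application object. It assumes the AzureAD PowerShell module, an application object already stored in `$app`, and a hypothetical path to an exported public certificate:

```powershell
# Sketch only: assumes the AzureAD PowerShell module, Connect-AzureAD, an existing
# application object in $app, and a public certificate at a hypothetical path.
$cer = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\certs\contoso-app.cer")
$base64Value = [System.Convert]::ToBase64String($cer.GetRawCertData())

# Attach the certificate as a key credential instead of creating a client secret (password).
New-AzureADApplicationKeyCredential -ObjectId $app.ObjectId -Type AsymmetricX509Cert -Usage Verify -Value $base64Value
```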
-* **Managed identities for Azure resources**. Managed identities are special service principals in Azure AD. This type of service principal can be used to authenticate against services that support Azure AD authentication without needing to store credentials in your code or handle secrets management. For more information, see [What are managed identities for Azure resources?](/azure/active-directory/managed-identities-azure-resources/overview)
+* **Managed identities for Azure resources**. Managed identities are special service principals in Azure AD. This type of service principal can be used to authenticate against services that support Azure AD authentication without needing to store credentials in your code or handle secrets management. For more information, see [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
* **Device identity**: A device identity is an identity that verifies that the device being used in the authentication flow has undergone a process to attest that the device is legitimate and meets the technical requirements specified by the organization. Once the device has successfully completed this process, the associated identity can be used to further control access to an organization's resources. With Azure AD, devices can authenticate with a certificate. Some legacy scenarios required a human identity to be used in *non-human* scenarios. For example, service accounts used in on-premises applications such as scripts or batch jobs might require access to Azure AD. This pattern isn't recommended; we recommend you use [certificates](../authentication/concept-certificate-based-authentication-technical-deep-dive.md) instead. However, if you do use a human identity with a password for authentication, protect your Azure AD accounts with [Azure Active Directory Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md).
-**Hybrid identity**. A hybrid identity is an identity that spans on-premises and cloud environments. This provides the benefit of being able to use the same identity to access on-premises and cloud resources. The source of authority in this scenario is typically an on-premises directory, and the identity lifecycle around provisioning, de-provisioning and resource assignment is also driven from on-premises. For more information, see [Hybrid identity documentation](/azure/active-directory/hybrid/).
+**Hybrid identity**. A hybrid identity is an identity that spans on-premises and cloud environments. This provides the benefit of being able to use the same identity to access on-premises and cloud resources. The source of authority in this scenario is typically an on-premises directory, and the identity lifecycle around provisioning, de-provisioning and resource assignment is also driven from on-premises. For more information, see [Hybrid identity documentation](../hybrid/index.yml).
**Directory objects**. An Azure AD tenant contains the following common objects:
Azure AD provides industry-leading strong authentication options that organizati
**Application access policies**. Azure AD provides capabilities to further control and secure access to your organization's applications.
-**Conditional Access**. Azure AD Conditional Access policies are tools to bring user and device context into the authorization flow when accessing Azure AD resources. Organizations should explore use of Conditional Access policies to allow, deny, or enhance authentication based on user, risk, device, and network context. For more information, see the [Azure AD Conditional Access documentation](/azure/active-directory/conditional-access/).
+**Conditional Access**. Azure AD Conditional Access policies are tools to bring user and device context into the authorization flow when accessing Azure AD resources. Organizations should explore use of Conditional Access policies to allow, deny, or enhance authentication based on user, risk, device, and network context. For more information, see the [Azure AD Conditional Access documentation](../conditional-access/index.yml).
**Azure AD Identity Protection**. This feature enables organizations to automate the detection and remediation of identity-based risks, investigate risks, and export risk detection data to third-party utilities for further analysis. For more information, see [overview on Azure AD Identity Protection](../identity-protection/overview-identity-protection.md).
Azure AD provides industry-leading strong authentication options that organizati
Azure AD also provides a portal and the Microsoft Graph API to allow organizations to manage identities or integrate Azure AD identity management into existing workflows or automation. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](/graph/use-the-api).
-**Device management**. Azure AD is used to manage the lifecycle and integration with cloud and on-premises device management infrastructures. It also is used to define policies to control access from cloud or on-premises devices to your organizational data. Azure AD provides the lifecycle services of devices in the directory and the credential provisioning to enable authentication. It also manages a key attribute of a device in the system that is the level of trust. This detail is important when designing a resource access policy. For more information, see [Azure AD Device Management documentation](/azure/active-directory/devices/).
+**Device management**. Azure AD is used to manage the lifecycle and integration with cloud and on-premises device management infrastructures. It's also used to define policies to control access from cloud or on-premises devices to your organizational data. Azure AD provides the lifecycle services of devices in the directory and the credential provisioning to enable authentication. It also manages a key attribute of a device in the system: its level of trust. This detail is important when designing a resource access policy. For more information, see [Azure AD Device Management documentation](../devices/index.yml).
**Configuration management**. Azure AD has service elements that need to be configured and managed to ensure the service is configured to an organization's requirements. These elements include domain management, SSO configuration, and application management to name but a few. Azure AD provides a portal and the Microsoft Graph API to allow organizations to manage these elements or integrate into existing processes. To learn more about Microsoft Graph, see [Use the Microsoft Graph API](/graph/use-the-api).
Azure AD also provides a portal and the Microsoft Graph API to allow organizatio
* Applications used to access
-Azure AD also provides information on the actions that are being performed within Azure AD, and reports on security risks. For more information, see [Azure Active Directory reports and monitoring](/azure/active-directory/reports-monitoring/).
+Azure AD also provides information on the actions that are being performed within Azure AD, and reports on security risks. For more information, see [Azure Active Directory reports and monitoring](../reports-monitoring/index.yml).
**Auditing**. Auditing provides traceability through logs for all changes done by specific features within Azure AD. Examples of activities found in audit logs include changes made to any resources within Azure AD like adding or removing users, apps, groups, roles, and policies. Reporting in Azure AD enables you to audit sign-in activities, risky sign-ins, and users flagged for risk. For more information, see [Audit activity reports in the Azure Active Directory portal](../reports-monitoring/concept-audit-logs.md).
Azure AD also provides information on the actions that are being performed withi
* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Secure With Azure Ad Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-introduction.md
Having a set of directory objects in the Azure AD tenant boundary engenders the
## Administrative units for role management
-Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](/azure/active-directory/roles/permissions-reference) role to regional support specialists, so they can manage users only in the region that they support. An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only:
+Administrative units restrict permissions in a role to any portion of your organization that you define. You could, for example, use administrative units to delegate the [Helpdesk Administrator](../roles/permissions-reference.md) role to regional support specialists, so they can manage users only in the region that they support. An administrative unit is an Azure AD resource that can be a container for other Azure AD resources. An administrative unit can contain only:
* Users
In the following diagram, administrative units are used to segment the Azure AD
![Diagram that shows Azure AD Administrative units.](media/secure-with-azure-ad-introduction/administrative-units.png)
-For more information on administrative units, see [Administrative units in Azure Active Directory](/azure/active-directory/roles/administrative-units).
+For more information on administrative units, see [Administrative units in Azure Active Directory](../roles/administrative-units.md).
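As a sketch of the delegation pattern described above (assuming the AzureAD PowerShell module; the names and the `$userId` variable are hypothetical), an administrative unit can be created and populated like this:

```powershell
# Sketch only: assumes the AzureAD PowerShell module and Connect-AzureAD;
# the display name, description, and $userId are hypothetical.
$au = New-AzureADMSAdministrativeUnit -DisplayName "West Region" -Description "Users supported by the West helpdesk"

# Add an existing user (by object ID) to the administrative unit.
Add-AzureADMSAdministrativeUnitMember -Id $au.Id -RefObjectId $userId
```

Role assignments scoped to the unit, such as the Helpdesk Administrator role mentioned above, are then configured separately.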
### Common reasons for resource isolation
Configuration settings in Azure AD can impact any resource in the Azure AD tenan
* Bypass security requirements >[!NOTE]
->Using [Named Locations](/azure/active-directory/conditional-access/location-condition) can present some challenges to your [zero-trust journey](https://www.microsoft.com/security/business/zero-trust). Verify that using Named Locations fits into your security strategy and principles.
+>Using [Named Locations](../conditional-access/location-condition.md) can present some challenges to your [zero-trust journey](https://www.microsoft.com/security/business/zero-trust). Verify that using Named Locations fits into your security strategy and principles.
Allowed authentication methods: Global administrators set the authentication methods allowed for the tenant.
* **Self-service options**. Global Administrators set self-service options such as self-service password reset and creating Microsoft 365 groups at the tenant level.
Who should have the ability to administer the environment and its resources? The
Given the interdependence between an Azure AD tenant and its resources, it's critical to understand the security and operational risks of compromise or error. If you're operating in a federated environment with synchronized accounts, an on-premises compromise can lead to an Azure AD compromise.
-* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, given the one providing access has sufficient privileges. While the impact of compromised non-privileged identities is largely contained, compromised administrators can have broad impact. For example, if an Azure AD global administrator account is compromised, Azure resources can become compromised. To mitigate risk of identity compromise, or bad actors, implement [tiered administration](/security/compass/privileged-access-access-model) and ensure that you follow principles of least privilege for [Azure AD Administrator Roles](/azure/active-directory/roles/delegate-by-task). Similarly, ensure that you create CA policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](/security/compass/privileged-access-strategy).
+* **Identity compromise** - Within the boundary of a tenant, any identity can be assigned any role, provided that the identity granting access has sufficient privileges. While the impact of compromised non-privileged identities is largely contained, compromised administrators can have a broad impact. For example, if an Azure AD global administrator account is compromised, Azure resources can become compromised. To mitigate the risk of identity compromise or bad actors, implement [tiered administration](/security/compass/privileged-access-access-model) and ensure that you follow principles of least privilege for [Azure AD Administrator Roles](../roles/delegate-by-task.md). Similarly, ensure that you create CA policies that specifically exclude test accounts and test service principals from accessing resources outside of the test applications. For more information on privileged access strategy, see [Privileged access: Strategy](/security/compass/privileged-access-strategy).
* **Federated environment compromise**
Incorporating zero-trust principles into your Azure AD design strategy can help
* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Secure With Azure Ad Multiple Tenants https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-multiple-tenants.md
Another approach could have been to utilize the capabilities of Azure AD Connect
## Multi-tenant resource isolation
-A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](/azure/active-directory/external-identities/what-is-b2b). Similarly, organizations can implement [Azure Lighthouse](/azure/lighthouse/overview) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
+A new tenant provides the ability to have a separate set of administrators. Organizations can choose to use corporate identities through [Azure AD B2B collaboration](../external-identities/what-is-b2b.md). Similarly, organizations can implement [Azure Lighthouse](../../lighthouse/overview.md) for cross-tenant management of Azure resources so that non-production Azure subscriptions can be managed by identities in the production counterpart. Azure Lighthouse can't be used to manage services outside of Azure, such as Intune or Microsoft Endpoint Manager. For Managed Service Providers (MSPs), [Microsoft 365 Lighthouse](/microsoft-365/lighthouse/m365-lighthouse-overview?view=o365-worldwide) is an admin portal that helps secure and manage devices, data, and users at scale for small- and medium-sized business (SMB) customers who are using Microsoft 365 Business Premium, Microsoft 365 E3, or Windows 365 Business.
This will allow users to continue to use their corporate credentials, while achieving the benefits of separation as described above.
Devices: This tenant contains a reduced number of devices; only those that are n
* [Resource isolation in a single tenant](secure-with-azure-ad-single-tenant.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Secure With Azure Ad Resource Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-with-azure-ad-resource-management.md
Before any resource management request can be executed by Resource Manager, a se
* **Valid user check** - The user requesting to manage the resource must have an account in the Azure AD tenant associated with the subscription of the managed resource.
-* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](/azure/role-based-access-control/overview). An RBAC role specifies a set of permissions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to.
+* **User permission check** - Permissions are assigned to users using [role-based access control (RBAC)](../../role-based-access-control/overview.md). An RBAC role specifies a set of permissions a user may take on a specific resource. RBAC helps you manage who has access to Azure resources, what they can do with those resources, and what areas they have access to (see the sketch after this list).
* **Azure policy check** - [Azure policies](../../governance/policy/overview.md) specify the operations allowed or explicitly denied for a specific resource. For example, a policy can specify that users are only allowed (or not allowed) to deploy a specific type of virtual machine.
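For the user permission (RBAC) check above, here's a hypothetical sketch of a role assignment with the Az PowerShell module; the user and resource group names are placeholders, not values from this article.

```PowerShell
# Hypothetical example: grant a user the built-in Reader role,
# scoped to a single resource group (names are placeholders).
Connect-AzAccount
New-AzRoleAssignment -SignInName "jdoe@contoso.com" `
    -RoleDefinitionName "Reader" `
    -ResourceGroupName "rg-example-workload"
```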
Conditional Access: A key benefit of using Azure AD for signing into Azure virtu
**Challenges**: The list below highlights key challenges with using this option for identity isolation.
-* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](/azure/automation/update-management/overview) to manage patching and updates of these servers.
+* No central management or configuration of servers. For example, there's no Group Policy that can be applied to a group of servers. Organizations should consider deploying [Update Management in Azure](../../automation/update-management/overview.md) to manage patching and updates of these servers.
* Not suitable for multi-tiered applications that have requirements to authenticate with on-premises mechanisms such as Windows Integrated Authentication across these servers or services. If this is a requirement for the organization, then it's recommended that you explore the Standalone Active Directory Domain Services, or the Azure Active Directory Domain Services scenarios described in this section.
For this isolated model, it's assumed that there's no connectivity to the VNet t
* [Resource isolation with multiple tenants](secure-with-azure-ad-multiple-tenants.md)
-* [Best practices](secure-with-azure-ad-best-practices.md)
+* [Best practices](secure-with-azure-ad-best-practices.md)
active-directory Whats New Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-archive.md
For more information about how to better secure your organization by using autom
In September 2021, we added the following 44 new applications to our App gallery with Federation support:
-[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Facebook Work Accounts](../saas-apps/facebook-work-accounts-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://app.visult.io/), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), [Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/us/sign-up/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
+[Studybugs](https://studybugs.com/signin), [Yello](https://yello.co/yello-for-microsoft-teams/), [LawVu](../saas-apps/lawvu-tutorial.md), [Formate eVo Mail](https://www.document-genetics.co.uk/formate-evo-erp-output-management), [Revenue Grid](https://app.revenuegrid.com/login), [Orbit for Office 365](https://azuremarketplace.microsoft.com/marketplace/apps/aad.orbitforoffice365?tab=overview), [Upmarket](https://app.upmarket.ai/), [Alinto Protect](https://protect.alinto.net/), [Cloud Concinnity](https://cloudconcinnity.com/), [Matlantis](https://matlantis.com/), [ModelGen for Visio (MG4V)](https://crecy.com.au/model-gen/), [NetRef: Classroom Management](https://oauth.net-ref.com/microsoft/sso), [VergeSense](../saas-apps/vergesense-tutorial.md), [iAuditor](../saas-apps/iauditor-tutorial.md), [Secutraq](https://secutraq.net/login), [Active and Thriving](../saas-apps/active-and-thriving-tutorial.md), [Inova](https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize?client_id=1bacdba3-7a3b-410b-8753-5cc0b8125f81&response_type=code&redirect_uri=https:%2f%2fbroker.partneringplace.com%2fpartner-companion%2f&code_challenge_method=S256&code_challenge=YZabcdefghijklmanopqrstuvwxyz0123456789._-~&scope=1bacdba3-7a3b-410b-8753-5cc0b8125f81/.default), [TerraTrue](../saas-apps/terratrue-tutorial.md), [Facebook Work Accounts](../saas-apps/facebook-work-accounts-tutorial.md), [Beyond Identity Admin Console](../saas-apps/beyond-identity-admin-console-tutorial.md), [Visult](https://visult.app), [ENGAGE TAG](https://app.engagetag.com/), [Appaegis Isolation Access Cloud](../saas-apps/appaegis-isolation-access-cloud-tutorial.md), [CrowdStrike Falcon Platform](../saas-apps/crowdstrike-falcon-platform-tutorial.md), [MY Emergency Control](https://my-emergency.co.uk/app/auth/login), [AlexisHR](../saas-apps/alexishr-tutorial.md), [Teachme Biz](../saas-apps/teachme-biz-tutorial.md), [Zero Networks](../saas-apps/zero-networks-tutorial.md), [Mavim iMprove](https://improve.mavimcloud.com/), [Azumuta](https://app.azumuta.com/login?microsoft=true), [Frankli](https://beta.frankli.io/login), [Amazon Managed Grafana](../saas-apps/amazon-managed-grafana-tutorial.md), [Productive](../saas-apps/productive-tutorial.md), [Create!Webフロー](../saas-apps/createweb-tutorial.md), [Evercate](https://evercate.com/us/sign-up/), [Ezra Coaching](../saas-apps/ezra-coaching-tutorial.md), [Baldwin Safety and Compliance](../saas-apps/baldwin-safety-&-compliance-tutorial.md), [Nulab Pass (Backlog,Cacoo,Typetalk)](../saas-apps/nulab-pass-tutorial.md), [Metatask](../saas-apps/metatask-tutorial.md), [Contrast Security](../saas-apps/contrast-security-tutorial.md), [Animaker](../saas-apps/animaker-tutorial.md), [Traction Guest](../saas-apps/traction-guest-tutorial.md), [True Office Learning - LIO](../saas-apps/true-office-learning-lio-tutorial.md), [Qiita Team](../saas-apps/qiita-team-tutorial.md)
You can also find the documentation of all the applications here: https://aka.ms/AppsTutorial
active-directory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new.md
For more information, see:[Customize app SAML token claims - Microsoft Entra | M
You can now create trusts on both user and resource forests. On-premises AD DS users can't authenticate to resources in the Azure AD DS resource forest until you create an outbound trust to your on-premises AD DS. An outbound trust requires network connectivity to your on-premises virtual network on which you have installed Azure AD Domain Services. On a user forest, trusts can be created for on-premises AD forests that aren't synchronized to Azure AD DS.
-To learn more about trusts and how to deploy your own, visit [How trust relationships work for forests in Active Directory](/azure/active-directory-domain-services/concepts-forest-trust).
+To learn more about trusts and how to deploy your own, visit [How trust relationships work for forests in Active Directory](../../active-directory-domain-services/concepts-forest-trust.md).
Note that end users are encouraged to enable the optional telemetry setting in t
Previously, to set up and administer your Azure AD DS instance, you needed the top-level permissions of Azure Contributor and Azure AD Global Admin. Now, for both initial creation and ongoing administration, you can use more fine-grained permissions for enhanced security and control. The prerequisites now minimally require:
- You need [Application Administrator](../roles/permissions-reference.md#application-administrator) and [Groups Administrator](../roles/permissions-reference.md#groups-administrator) Azure AD roles in your tenant to enable Azure AD DS.
-- You need [Domain Services Contributor](/azure/role-based-access-control/built-in-roles#domain-services-contributor) Azure role to create the required Azure AD DS resources.
+- You need [Domain Services Contributor](../../role-based-access-control/built-in-roles.md#domain-services-contributor) Azure role to create the required Azure AD DS resources.
Check out these resources to learn more:
-- [Tutorial - Create an Azure Active Directory Domain Services managed domain | Microsoft Docs](/azure/active-directory-domain-services/tutorial-create-instance#prerequisites)
+- [Tutorial - Create an Azure Active Directory Domain Services managed domain | Microsoft Docs](../../active-directory-domain-services/tutorial-create-instance.md#prerequisites)
- [Least privileged roles by task - Azure Active Directory | Microsoft Docs](../roles/delegate-by-task.md#domain-services)
-- [Azure built-in roles - Azure RBAC | Microsoft Docs](/azure/role-based-access-control/built-in-roles#domain-services-contributor)
+- [Azure built-in roles - Azure RBAC | Microsoft Docs](../../role-based-access-control/built-in-roles.md#domain-services-contributor)
We've improved the Privileged Identity management (PIM) time to role activation
-
------
active-directory Four Steps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/four-steps.md
na Previously updated : 06/20/2019 Last updated : 08/17/2022
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
You can view the existing writeback settings on Microsoft 365 groups in the port
[![Screenshot of Microsoft 365 group properties.](media/how-to-connect-group-writeback/group-2.png)](media/how-to-connect-group-writeback/group-2.png#lightbox)
-You can also view the writeback state via MS Graph: [Get group](https://docs.microsoft.com/graph/api/group-get?view=graph-rest-beta&tabs=http)
+You can also view the writeback state via MS Graph: [Get group](/graph/api/group-get?tabs=http&view=graph-rest-beta)
Example: `GET https://graph.microsoft.com/beta/groups?$filter=groupTypes/any(c:c eq 'Unified')&$select=id,displayName,writebackConfiguration`
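If you prefer PowerShell, the same beta query can be issued with the Microsoft Graph PowerShell SDK; this is a minimal sketch that assumes the `Microsoft.Graph` module and `Group.Read.All` consent, and it isn't part of the article's own steps.

```PowerShell
# Minimal sketch: list Microsoft 365 groups and their writeback state via the beta endpoint.
Connect-MgGraph -Scopes "Group.Read.All"
$uri = "https://graph.microsoft.com/beta/groups?`$filter=groupTypes/any(c:c eq 'Unified')&`$select=id,displayName,writebackConfiguration"
(Invoke-MgGraphRequest -Method GET -Uri $uri -OutputType PSObject).value |
    ForEach-Object { "$($_.displayName): writeback enabled = $($_.writebackConfiguration.isEnabled)" }
```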
Finally, you can also view the writeback state via PowerShell using the [Micros
For groups that haven't been created yet, you can view whether or not they're going to be automatically written back.
-To see the default behavior in your environment for newly created groups use MS Graph: [directorySetting](https://docs.microsoft.com/graph/api/resources/directorysetting?view=graph-rest-beta)
+To see the default behavior in your environment for newly created groups use MS Graph: [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta)
Example: `GET https://graph.microsoft.com/beta/Settings`
To see the default behavior in your environment for newly created groups use MS
If a `directorySetting` named **Group.Unified** exists with a `NewUnifiedGroupWritebackDefault` value of **false**, Microsoft 365 groups **won't automatically** be enabled for write-back when they're created. If the value is not specified or it is set to true, newly created Microsoft 365 groups **will automatically** be written back.
-You can also use the PowerShell cmdlet [AzureADDirectorySetting](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-settings-cmdlets)
+You can also use the PowerShell cmdlet [AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md)
Example: `(Get-AzureADDirectorySetting | ? { $_.DisplayName -eq "Group.Unified"} | FL *).values`
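Building on that example, here's a minimal sketch that pulls out `NewUnifiedGroupWritebackDefault` and reports the effective default; it assumes the AzureAD module and an existing `Connect-AzureAD` session.

```PowerShell
# Minimal sketch: report whether newly created Microsoft 365 groups default to writeback.
$values  = (Get-AzureADDirectorySetting | Where-Object { $_.DisplayName -eq "Group.Unified" }).Values
$default = ($values | Where-Object { $_.Name -eq "NewUnifiedGroupWritebackDefault" }).Value
if ([string]::IsNullOrEmpty($default) -or $default -eq "true") {
    "New Microsoft 365 groups WILL be written back by default."
} else {
    "New Microsoft 365 groups will NOT be written back by default."
}
```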
You can also use the PowerShell cmdlet [AzureADDirectorySetting](https://docs.mi
If a `directorySetting` is returned with a `NewUnifiedGroupWritebackDefault` value of **false**, Microsoft 365 groups **won't automatically** be enabled for write-back when they're created. If the value is not specified or it is set to **true**, newly created Microsoft 365 groups **will automatically** be written back.
### Discover if AD has been prepared for Exchange
-To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server, Active Directory Exchange Server, Exchange Server Active Directory, Exchange 2019 Active Directory](https://docs.microsoft.com/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019#how-do-you-know-this-worked)
+To verify if Active Directory has been prepared for Exchange, see [Prepare Active Directory and domains for Exchange Server, Active Directory Exchange Server, Exchange Server Active Directory, Exchange 2019 Active Directory](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019#how-do-you-know-this-worked)
## Public preview prerequisites
The following are prerequisites for group writeback.
The following are prerequisites for group writeback.
- Azure AD Connect version 2.0.89.0 or later
- **Optional**: Exchange Server 2016 CU15 or later - Only needed for configuring cloud groups with Exchange Hybrid.
- - See [Configure Microsoft 365 Groups with on-premises Exchange hybrid](https://docs.microsoft.com/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites) for more information.
- - If you haven't [prepared AD for Exchange](https://docs.microsoft.com/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019), mail related attributes of groups won't be written back.
+ - See [Configure Microsoft 365 Groups with on-premises Exchange hybrid](/exchange/hybrid-deployment/set-up-microsoft-365-groups#prerequisites) for more information.
+ - If you haven't [prepared AD for Exchange](/Exchange/plan-and-deploy/prepare-ad-and-domains?view=exchserver-2019), mail related attributes of groups won't be written back.
## Choosing the right approach
Choosing the right deployment approach for your organization will depend on the current state of group writeback in your environment and the desired writeback behavior.
If you plan to make changes to the default behavior, we recommend that you do so
While this release has undergone extensive testing, you may still encounter issues. One of the goals of this public preview release is to find and fix any such issues before moving to General Availability. While support is provided for this public preview release, Microsoft may not always be able to fix all issues you may encounter immediately. For this reason, it's recommended that you use your best judgment before deploying this release in your production environment. Limitations and known issues specific to Group writeback:
-- Cloud [distribution list groups](https://docs.microsoft.com/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online cannot be written back to AD, only Microsoft 365 and Azure AD security groups are supported.
+- Cloud [distribution list groups](/exchange/recipients-in-exchange-online/manage-distribution-groups/manage-distribution-groups) created in Exchange Online cannot be written back to AD; only Microsoft 365 and Azure AD security groups are supported.
- To be backwards compatible with the current version of group writeback, when you enable group writeback, all existing Microsoft 365 groups are written back and created as distribution groups by default. This behavior can be modified by following the steps detailed in [Modifying group writeback](how-to-connect-modify-group-writeback.md).
- When you disable writeback for a group, the group won't automatically be removed from your on-premises Active Directory until it is hard deleted in Azure AD. This behavior can be modified by following the steps detailed in [Modifying group writeback](how-to-connect-modify-group-writeback.md).
- Group writeback does not support writeback of nested group members that have a scope of ‘Domain local’ in AD, since Azure AD security groups are written back with scope ‘Universal’. If you have a nested group like this, you'll see an export error in Azure AD Connect with the message “A universal group cannot have a local group as a member.” The resolution is to remove the member with scope ‘Domain local’ from the Azure AD group, or update the nested group's scope in AD to ‘Global’ or ‘Universal’ (see the sketch after this list).
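For the ‘Domain local’ limitation above, here's a minimal sketch that lists nested group members with that scope; it assumes the ActiveDirectory RSAT module, and the group name is a placeholder.

```PowerShell
# Minimal sketch: find nested members of a written-back group that have 'Domain local' scope.
Import-Module ActiveDirectory
Get-ADGroupMember -Identity "WrittenBackGroup" |
    Where-Object { $_.objectClass -eq 'group' } |
    ForEach-Object { Get-ADGroup -Identity $_.DistinguishedName } |
    Where-Object { $_.GroupScope -eq 'DomainLocal' }
```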
While this release has undergone extensive testing, you may still encounter issu
- [Modify Azure AD Connect group writeback default behavior](how-to-connect-modify-group-writeback.md)
- [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md)
-- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
+- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory How To Connect Install Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-automatic-upgrade.md
Azure AD Connect automatic upgrade is a feature that regularly checks for newer
Note that for security reasons the agent that performs the automatic upgrade validates the new build of Azure AD Connect based on the digital signature of the downloaded version.
>[!NOTE]
-> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](https://docs.microsoft.com/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
+> Azure Active Directory (AD) Connect follows the [Modern Lifecycle Policy](/lifecycle/policies/modern). Changes for products and services under the Modern Lifecycle Policy may be more frequent and require customers to be alert for forthcoming modifications to their product or service.
>
-> Product governed by the Modern Policy follow a [continuous support and servicing model](https://docs.microsoft.com/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported.
+> Products governed by the Modern Policy follow a [continuous support and servicing model](/lifecycle/overview/product-end-of-support-overview). Customers must take the latest update to remain supported.
>
> For products and services governed by the Modern Lifecycle Policy, Microsoft's policy is to provide a minimum 30 days' notification when customers are required to take action in order to avoid significant degradation to the normal use of the product or service.
Here is a list of the most common messages you find. It does not list all, but t
|UpgradeNotSupportedAADHealthUploadDisabled|Health data uploads have been disabled from the portal|
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
If the original version of group writeback is already enabled and in use in your
### Disable automatic writeback of all Microsoft 365 groups
1. To configure directory settings to disable automatic writeback of newly created Microsoft 365 groups, update the `NewUnifiedGroupWritebackDefault` setting to false.
- 2. To do this via PowerShell, use the: [New-AzureADDirectorySetting](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-settings-cmdlets) cmdlet.
+ 2. To do this via PowerShell, use the [New-AzureADDirectorySetting](../enterprise-users/groups-settings-cmdlets.md) cmdlet.
Example:
```PowerShell
$TemplateId = (Get-AzureADDirectorySettingTemplate | where {$_.DisplayName -eq "Group.Unified" }).Id
```
If the original version of group writeback is already enabled and in use in your
$Setting["NewUnifiedGroupWritebackDefault"] = "False" New-AzureADDirectorySetting -DirectorySetting $Setting ```
- 3. Via MS Graph: [directorySetting](https://docs.microsoft.com/graph/api/resources/directorysetting?view=graph-rest-beta)
+ 3. Via MS Graph: [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta)
### Disable writeback for each existing Microsoft 365 group.
-- Portal: [Entra admin portal](https://docs.microsoft.com/azure/active-directory/enterprise-users/groups-write-back-portal)
+- Portal: [Entra admin portal](../enterprise-users/groups-write-back-portal.md)
- PowerShell: [Microsoft Identity Tools PowerShell Module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16)
  Example: `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Update-MsIdGroupWritebackConfiguration -WriteBackEnabled $false`
-- MS Graph: [Update group](https://docs.microsoft.com/graph/api/group-update?view=graph-rest-beta&tabs=http)
+- MS Graph: [Update group](/graph/api/group-update?tabs=http&view=graph-rest-beta)
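For the MS Graph option, here's a minimal sketch of the update call; `<group-id>` is a placeholder for the group's object ID, and the sketch assumes a `Connect-MgGraph` session with `Group.ReadWrite.All` consent.

```PowerShell
# Minimal sketch: disable writeback for a single group via the beta Graph endpoint.
# <group-id> is a placeholder for the Azure AD group's object id.
$body = @{ writebackConfiguration = @{ isEnabled = $false } } | ConvertTo-Json
Invoke-MgGraphRequest -Method PATCH `
    -Uri "https://graph.microsoft.com/beta/groups/<group-id>" `
    -Body $body -ContentType "application/json"
```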
If the original version of group writeback is already enabled and in use in your
>After deletion in AD, written back groups are not automatically restored from the AD recycle bin, if they're re-enabled for writeback or restored from soft delete state. New groups will be created. Deleted groups restored from the AD recycle bin, prior to being re-enabled for writeback or restored from soft delete state in Azure AD, will be joined to their respective Azure AD group.
1. On your Azure AD Connect server, open a PowerShell prompt as administrator.
- 2. Disable [Azure AD Connect sync scheduler](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-feature-scheduler)
+ 2. Disable [Azure AD Connect sync scheduler](./how-to-connect-sync-feature-scheduler.md)
``` PowerShell
Set-ADSyncScheduler -SyncCycleEnabled $false
```
If the original version of group writeback is already enabled and in use in your
Since the default sync rule that limits the group size is created when group writeback is enabled, the following steps must be completed after group writeback is enabled.
1. On your Azure AD Connect server, open a PowerShell prompt as administrator.
-2. Disable [Azure AD Connect sync scheduler](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-feature-scheduler)
+2. Disable [Azure AD Connect sync scheduler](./how-to-connect-sync-feature-scheduler.md)
``` PowerShell
Set-ADSyncScheduler -SyncCycleEnabled $false
```
-3. Open the [synchronization rule editor](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-create-custom-sync-rule)
+3. Open the [synchronization rule editor](./how-to-connect-create-custom-sync-rule.md)
4. Set the Direction to Outbound
5. Locate and disable the ‘Out to AD – Group Writeback Member Limit’ synchronization rule
6. Enable Azure AD Connect sync scheduler (see the sketch after this list)
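For step 6, a minimal sketch that re-enables the scheduler and confirms its state; this mirrors the `Set-ADSyncScheduler` usage shown earlier and is not an additional requirement from the article.

```PowerShell
# Re-enable the Azure AD Connect sync scheduler and confirm its state.
Set-ADSyncScheduler -SyncCycleEnabled $true
Get-ADSyncScheduler | Select-Object SyncCycleEnabled, NextSyncCycleStartTimeInUTC
```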
Since the default sync rule, that limits the group size, is created when group w
## Restoring from AD Recycle Bin
-If you're updating the default behavior to delete groups when disabled for writeback or soft deleted, we recommend that you enable the [Active Directory Recycle Bin](https://docs.microsoft.com/azure/active-directory/hybrid/how-to-connect-sync-recycle-bin) feature for your on-premises instances of Active Directory. This feature will allow you to manually restore previously deleted AD groups, so that they can be rejoined to their respective Azure AD groups, if they were accidentally disabled for writeback or soft deleted.
+If you're updating the default behavior to delete groups when disabled for writeback or soft deleted, we recommend that you enable the [Active Directory Recycle Bin](./how-to-connect-sync-recycle-bin.md) feature for your on-premises instances of Active Directory. This feature will allow you to manually restore previously deleted AD groups, so that they can be rejoined to their respective Azure AD groups, if they were accidentally disabled for writeback or soft deleted.
Prior to re-enabling for writeback, or restoring from soft delete in Azure AD, the group will first need to be restored in AD.
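If you do need to perform that restore, here's a minimal sketch; it assumes the ActiveDirectory RSAT module, that the Recycle Bin feature is already enabled, and a placeholder group name.

```PowerShell
# Minimal sketch: locate a deleted group in the AD Recycle Bin and restore it.
# "ExampleGroup" is a placeholder name.
Get-ADObject -Filter 'isDeleted -eq $true -and Name -like "ExampleGroup*"' `
    -IncludeDeletedObjects |
    Restore-ADObject
```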
Prior to re-enabling for writeback, or restoring from soft delete in Azure AD, t
- [Azure AD Connect group writeback](how-to-connect-group-writeback-v2.md) - [Enable Azure AD Connect group writeback](how-to-connect-group-writeback-enable.md) - -- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
+- [Disable Azure AD Connect group writeback](how-to-connect-group-writeback-disable.md)
active-directory Create Service Principal Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/create-service-principal-cross-tenant.md
From the Microsoft Graph explorer window:
## Next steps
-- [Add RBAC role to the enterprise application](/azure/role-based-access-control/role-assignments-portal)
+- [Add RBAC role to the enterprise application](../../role-based-access-control/role-assignments-portal.md)
- [Assign users to your application](add-application-portal-assign-users.md)
active-directory Datawiza Azure Ad Sso Oracle Jde https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/datawiza-azure-ad-sso-oracle-jde.md
This tutorial shows how to enable Azure Active Directory (Azure AD) single sign-
Benefits of integrating applications with Azure AD using DAB include: -- [Proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) through [Azure AD SSO](https://azure.microsoft.com/solutions/active-directory-sso/OCID=AIDcmm5edswduu_SEM_e13a1a1787ce1700761a78c235ae5906:G:s&ef_id=e13a1a1787ce1700761a78c235ae5906:G:s&msclkid=e13a1a1787ce1700761a78c235ae5906#features), [Azure AD Multi-Factor Authentication](/azure/active-directory/authentication/concept-mfa-howitworks) and
- [Conditional Access](/azure/active-directory/conditional-access/overview).
+- [Proactive security with Zero Trust](https://www.microsoft.com/security/business/zero-trust) through [Azure AD SSO](https://azure.microsoft.com/solutions/active-directory-sso/OCID=AIDcmm5edswduu_SEM_e13a1a1787ce1700761a78c235ae5906:G:s&ef_id=e13a1a1787ce1700761a78c235ae5906:G:s&msclkid=e13a1a1787ce1700761a78c235ae5906#features), [Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md) and
+ [Conditional Access](../conditional-access/overview.md).
- [Easy authentication and authorization in Azure AD with no-code Datawiza](https://www.microsoft.com/security/blog/2022/05/17/easy-authentication-and-authorization-in-azure-active-directory-with-no-code-datawiza/). Use of web applications such as Oracle JDE, Oracle E-Business Suite, Oracle Siebel, Oracle PeopleSoft, and home-grown apps.
The scenario solution has the following components:
- **Datawiza Cloud Management Console (DCMC)**: A centralized console to manage DAB. DCMC has UI and RESTful APIs for administrators to configure Datawiza Access Broker and access control policies. Understand the SP initiated flow by following the steps mentioned in [Datawiza and Azure AD authentication
-architecture](/azure/active-directory/manage-apps/datawiza-with-azure-ad#datawiza-with-azure-ad-authentication-architecture).
+architecture](./datawiza-with-azure-ad.md#datawiza-with-azure-ad-authentication-architecture).
## Prerequisites
Ensure the following prerequisites are met.
- An Azure subscription. If you don't have one, you can get an [Azure free account](https://azure.microsoft.com/free) - An Azure AD tenant linked to the Azure subscription.
- - See, [Quickstart: Create a new tenant in Azure Active Directory.](/azure/active-directory/fundamentals/active-directory-access-create-new-tenant)
+ - See, [Quickstart: Create a new tenant in Azure Active Directory.](../fundamentals/active-directory-access-create-new-tenant.md)
- Docker and Docker Compose
Ensure the following prerequisites are met.
- User identities synchronized from an on-premises directory to Azure AD, or created in Azure AD and flowed back to an on-premises directory.
- - See, [Azure AD Connect sync: Understand and customize synchronization](/azure/active-directory/hybrid/how-to-connect-sync-whatis).
+ - See, [Azure AD Connect sync: Understand and customize synchronization](../hybrid/how-to-connect-sync-whatis.md).
- An account with Azure AD and the Application administrator role
- - See, [Azure AD built-in roles, all roles](/azure/active-directory/roles/permissions-reference#all-roles).
+ - See, [Azure AD built-in roles, all roles](../roles/permissions-reference.md#all-roles).
- An Oracle JDE environment
For the Oracle JDE application to recognize the user correctly, there's another
## Enable Azure AD Multi-Factor Authentication
-To provide an extra level of security for sign-ins, enforce multifactor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure portal](/azure/active-directory/authentication/tutorial-enable-azure-mfa).
+To provide an extra level of security for sign-ins, enforce multifactor authentication (MFA) for user sign-in. One way to achieve this is to [enable MFA on the Azure portal](../authentication/tutorial-enable-azure-mfa.md).
1. Sign in to the Azure portal as a **Global Administrator**.
To confirm Oracle JDE application access occurs correctly, a prompt appears to u
- [Watch the video - Enable SSO/MFA for Oracle JDE with Azure AD via Datawiza](https://www.youtube.com/watch?v=_gUGWHT5m90). -- [Configure Datawiza and Azure AD for secure hybrid access](/azure/active-directory/manage-apps/datawiza-with-azure-ad)
+- [Configure Datawiza and Azure AD for secure hybrid access](./datawiza-with-azure-ad.md)
-- [Configure Datawiza with Azure AD B2C](/azure/active-directory-b2c/partner-datawiza)
+- [Configure Datawiza with Azure AD B2C](../../active-directory-b2c/partner-datawiza.md)
-- [Datawiza documentation](https://docs.datawiza.com/)
+- [Datawiza documentation](https://docs.datawiza.com/)
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
Title: Manage user-assigned managed identities - Azure AD
description: Create user-assigned managed identities. -+ editor:
active-directory How Managed Identities Work Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-managed-identities-work-vm.md
description: Description of managed identities for Azure resources work with Azu
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
active-directory How To Managed Identity Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-managed-identity-regional-move.md
Title: Move managed identities to another region - Azure AD
description: Steps involved in getting a managed identity recreated in another region -+
active-directory How To Use Vm Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md
description: Code samples for using Azure SDKs with an Azure VM that has managed
documentationcenter: -+ editor:
active-directory How To Use Vm Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md
description: Step-by-step instructions and examples for using an Azure VM-manage
documentationcenter: -+ editor:
active-directory How To Use Vm Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-use-vm-token.md
description: Step-by-step instructions and examples for using managed identities
documentationcenter: -+ editor:
active-directory How To View Managed Identity Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-activity.md
description: Step-by-step instructions for viewing the activities made to manage
documentationcenter: '' -+ editor: ''
active-directory How To View Managed Identity Service Principal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
active-directory How To View Managed Identity Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-portal.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
active-directory How To View Managed Identity Service Principal Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-powershell.md
description: Step-by-step instructions for viewing the service principal of a ma
documentationcenter: '' -+ editor: ''
active-directory Howto Assign Access Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-cli.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Howto Assign Access Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-portal.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Howto Assign Access Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-powershell.md
description: Step-by-step instructions for assigning a managed identity on one r
documentationcenter: -+ editor:
active-directory Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/known-issues.md
description: Known issues with managed identities for Azure resources.
documentationcenter: -+ editor: ms.assetid: 2097381a-a7ec-4e3b-b4ff-5d2fb17403b6
active-directory Managed Identities Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-faq.md
description: Frequently asked questions about managed identities
documentationcenter: -+ editor:
active-directory Managed Identities Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identities-status.md
Last updated 01/10/2022
-+
active-directory Managed Identity Best Practice Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md
description: Recommendations on when to use user-assigned versus system-assigned
documentationcenter: -+ editor:
active-directory Msi Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/msi-tutorial-linux-vm-access-arm.md
description: A tutorial that walks you through the process of using a user-assig
documentationcenter: '' -+ editor: daveba
active-directory Overview For Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview-for-developers.md
description: An overview how developers can use managed identities for Azure res
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/overview.md
description: An overview of the managed identities for Azure resources.
documentationcenter: -+ editor: ms.assetid: 0232041d-b8f5-4bd2-8d11-27999ad69370
active-directory Qs Configure Cli Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md
Title: Configure managed identities on Azure VM using Azure CLI - Azure AD description: Step-by-step instructions for configuring system and user-assigned managed identities on an Azure VM using Azure CLI. -+
active-directory Qs Configure Cli Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md
description: Step-by-step instructions for configuring system and user-assigned
documentationcenter: -+ editor:
active-directory Qs Configure Portal Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Portal Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vmss.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Powershell Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vm.md
Title: Configure managed identities on an Azure VM using PowerShell - Azure AD
description: Step-by-step instructions for configuring managed identities for Azure resources on an Azure VM using PowerShell. -+
active-directory Qs Configure Powershell Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-powershell-windows-vmss.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor:
active-directory Qs Configure Rest Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vm.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor:
active-directory Qs Configure Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vmss.md
description: Step-by-step instructions for configuring a system and user-assigne
documentationcenter: -+ editor:
active-directory Qs Configure Sdk Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-sdk-windows-vm.md
description: Step-by-step instructions for configuring and using managed identit
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Template Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vm.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Qs Configure Template Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-template-windows-vmss.md
description: Step-by-step instructions for configuring managed identities for Az
documentationcenter: '' -+ editor: ''
active-directory Services Azure Active Directory Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/services-azure-active-directory-support.md
Last updated 02/01/2022
-+ # Azure services that support Azure AD authentication
active-directory Tutorial Linux Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-arm.md
description: A quickstart that walks you through the process of using a Linux VM
documentationcenter: '' -+ editor: bryanla
active-directory Tutorial Linux Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-cosmos-db.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: -+ editor:
active-directory Tutorial Linux Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-datalake.md
description: A tutorial that shows you how to use a Linux VM system-assigned man
documentationcenter: -+ editor:
active-directory Tutorial Linux Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-nonaad.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Linux Vm Access Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-access-key.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Linux Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage-sas.md
description: Tutorial showing how to use a Linux VM system-assigned managed iden
documentationcenter: '' -+
active-directory Tutorial Linux Vm Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-linux-vm-access-storage.md
description: A tutorial that walks you through the process of using a Linux VM s
documentationcenter: -+ editor:
active-directory Tutorial Vm Managed Identities Cosmos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-managed-identities-cosmos.md
Title: Use managed identities from a virtual machine to access Cosmos DB description: Learn how to use managed identities with Windows VMs using the Azure portal, CLI, PowerShell, Azure Resource Manager template -+
active-directory Tutorial Vm Windows Access Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-vm-windows-access-storage.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-arm.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db.md
description: A tutorial that walks you through the process of using a system-ass
documentationcenter: '' -+ editor:
active-directory Tutorial Windows Vm Access Datalake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-datalake.md
description: A tutorial that shows you how to use a Windows VM system-assigned m
documentationcenter: -+ editor:
active-directory Tutorial Windows Vm Access Nonaad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-nonaad.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Access Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-sql.md
description: A tutorial that walks you through the process of using a Windows VM
documentationcenter: '' -+
active-directory Tutorial Windows Vm Access Storage Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-storage-sas.md
description: A tutorial that shows you how to use a Windows VM system-assigned m
documentationcenter: '' -+ editor: daveba
active-directory Tutorial Windows Vm Ua Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/tutorial-windows-vm-ua-arm.md
description: A tutorial that walks you through the process of using a user-assig
documentationcenter: '' -+ editor:
active-directory Amazon Web Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/amazon-web-service-tutorial.md
To configure the integration of AWS Single-Account Access into Azure AD, you nee
1. In the **Add from the gallery** section, type **AWS Single-Account Access** in the search box. 1. Select **AWS Single-Account Access** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for AWS Single-Account Access
Configure and test Azure AD SSO with AWS Single-Account Access using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS Single-Account Access.
active-directory Atlassian Cloud Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/atlassian-cloud-tutorial.md
To configure the integration of Atlassian Cloud into Azure AD, you need to add A
1. In the **Add from the gallery** section, type **Atlassian Cloud** in the search box. 1. Select **Atlassian Cloud** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO
Configure and test Azure AD SSO with Atlassian Cloud using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Atlassian Cloud.
active-directory Aws Single Sign On Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/aws-single-sign-on-tutorial.md
To configure the integration of AWS IAM Identity Center into Azure AD, you need
1. In the **Add from the gallery** section, type **AWS IAM Identity Center** in the search box. 1. Select **AWS IAM Identity Center** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for AWS IAM Identity Center Configure and test Azure AD SSO with AWS IAM Identity Center using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in AWS IAM Identity Center.
active-directory Cisco Anyconnect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/cisco-anyconnect.md
To configure the integration of Cisco AnyConnect into Azure AD, you need to add
1. In the **Add from the gallery** section, type **Cisco AnyConnect** in the search box. 1. Select **Cisco AnyConnect** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for Cisco AnyConnect Configure and test Azure AD SSO with Cisco AnyConnect using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Cisco AnyConnect.
active-directory Docusign Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/docusign-tutorial.md
To configure the integration of DocuSign into Azure AD, you must add DocuSign fr
1. In the **Add from the gallery** section, type **DocuSign** in the search box. 1. Select **DocuSign** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for DocuSign Configure and test Azure AD SSO with DocuSign by using a test user named **B.Simon**. For SSO to work, you must establish a link relationship between an Azure AD user and the corresponding user in DocuSign.
active-directory Fortigate Ssl Vpn Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/fortigate-ssl-vpn-tutorial.md
To configure the integration of FortiGate SSL VPN into Azure AD, you need to add
1. In the **Add from the gallery** section, enter **FortiGate SSL VPN** in the search box. 1. Select **FortiGate SSL VPN** in the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for FortiGate SSL VPN You'll configure and test Azure AD SSO with FortiGate SSL VPN by using a test user named B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the corresponding SAML SSO user group in FortiGate SSL VPN.
active-directory Google Apps Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/google-apps-tutorial.md
To configure the integration of Google Cloud / G Suite Connector by Microsoft in
1. In the **Add from the gallery** section, type **Google Cloud / G Suite Connector by Microsoft** in the search box. 1. Select **Google Cloud / G Suite Connector by Microsoft** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD single sign-on for Google Cloud / G Suite Connector by Microsoft Configure and test Azure AD SSO with Google Cloud / G Suite Connector by Microsoft using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Google Cloud / G Suite Connector by Microsoft.
active-directory Lms And Education Management System Leaf Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/lms-and-education-management-system-leaf-tutorial.md
Previously updated : 06/27/2022 Last updated : 08/16/2022
For more information, see [Azure built-in roles](../roles/permissions-reference.
In this tutorial, you configure and test Azure AD SSO in a test environment. * LMS and Education Management System Leaf supports **SP** initiated SSO.
-* LMS and Education Management System Leaf supports **Just In Time** user provisioning.
## Add LMS and Education Management System Leaf from the gallery
Follow these steps to enable Azure AD SSO in the Azure portal.
`https://<SUBDOMAIN>.leaf-hrm.jp/loginusers/acs` c. In the **Sign on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.leaf-hrm.jp/`
+ `https://<SUBDOMAIN>.leaf-hrm.jp/loginusers/sso/1`
> [!Note] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign on URL. Contact [LMS and Education Management System Leaf support team](mailto:leaf-jimukyoku@insource.co.jp) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+1. Your LMS and Education Management System Leaf application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows an example. The default value of **Unique User Identifier** is **user.userprincipalname**, but LMS and Education Management System Leaf expects it to be mapped to the user's email address. For that, you can use the **user.mail** attribute from the list, or use the appropriate attribute value based on your organization's configuration.
+
+ ![image](common/default-attributes.png)
+ 1. On the **Set-up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer. ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
To configure single sign-on on **LMS and Education Management System Leaf** side
### Create LMS and Education Management System Leaf test user
-In this section, a user called B.Simon is created in LMS and Education Management System Leaf. LMS and Education Management System Leaf supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in LMS and Education Management System Leaf, a new one is created after authentication.
+1. Log in as the Leaf system administrator user. From the **User tab** of **Master Maintenance**, create a user with a login ID of `leaftest`.
+2. From the User tab of Master Maintenance, click the **SSO Information Bulk Registration** button.
+3. Click the **Registration CSV** button to download the registration CSV.
+4. Open the downloaded CSV, enter the (Leaf) login ID, NameID format, and authentication server, and then save the file.
+
+ ![Screenshot for Registration CSV.](./media/lms-and-education-management-system-leaf-tutorial/create-test-user.png)
+
+ ![Screenshot for Name ID.](./media/lms-and-education-management-system-leaf-tutorial/name-identifier.png)
+
+   a. Enter `leaftest` in the **(Leaf) Login ID** column.
+
+ b. In the Authentication Server column, enter the value corresponding to the Authentication Server in the above figure.
+
+ c. In the NameID format column, enter the value corresponding to **NameID format**.
+
+   d. Enter **leaftest@company.extension** in the **NameID** column.
+
+5. Click the **Select File** button and select the CSV you edited earlier.
+6. Click the **Upload** button.
+
+> [!NOTE]
+> To associate accounts with Leaf, the Leaf login ID (user) is linked to the NameID (user) and NameID format (format) specified by the IdP (authentication server).
+ ## Test SSO
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-Once you configure LMS and Education Management System Leaf you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
+Once you configure LMS and Education Management System Leaf you can enforce session control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Salesforce Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/salesforce-tutorial.md
To configure the integration of Salesforce into Azure AD, you need to add Salesf
1. In the **Add from the gallery** section, type **Salesforce** in the search box. 1. Select **Salesforce** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide)
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
## Configure and test Azure AD SSO for Salesforce
In this section, you test your Azure AD single sign-on configuration with follow
## Next steps
-After you configure Salesforce, you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
+After you configure Salesforce, you can enforce Session Control, which protects exfiltration and infiltration of your organization's sensitive data in real time. Session Control extends from Conditional Access. [Learn how to enforce session control with Microsoft Defender for Cloud Apps](/cloud-app-security/proxy-deployment-aad).
active-directory Saml Toolkit Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/saml-toolkit-tutorial.md
To configure the integration of Azure AD SAML Toolkit into Azure AD, you need to
1. In the **Add from the gallery** section, type **Azure AD SAML Toolkit** in the search box. 1. Select **Azure AD SAML Toolkit** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for Azure AD SAML Toolkit Configure and test Azure AD SSO with Azure AD SAML Toolkit using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Azure AD SAML Toolkit.
active-directory Servicenow Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/servicenow-tutorial.md
To configure the integration of ServiceNow into Azure AD, you need to add Servic
1. In the **Add from the gallery** section, enter **ServiceNow** in the search box. 1. Select **ServiceNow** from results panel, and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for ServiceNow Configure and test Azure AD SSO with ServiceNow by using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in ServiceNow.
active-directory Slack Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/slack-tutorial.md
To configure the integration of Slack into Azure AD, you need to add Slack from
1. In the **Add from the gallery** section, type **Slack** in the search box. 1. Select **Slack** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+Alternatively, you can use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration. You can learn more about O365 wizards [here](https://docs.microsoft.com/microsoft-365/admin/misc/azure-ad-setup-guides?view=o365-worldwide).
+ ## Configure and test Azure AD SSO for Slack Configure and test Azure AD SSO with Slack using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Slack.
active-directory Verifiable Credentials Configure Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/verifiable-credentials/verifiable-credentials-configure-tenant.md
The following diagram illustrates the Verified ID architecture and the component
[Azure Key Vault](../../key-vault/general/basic-concepts.md) is a cloud service that enables the secure storage and access of secrets and keys. The Verified ID service stores public and private keys in Azure Key Vault. These keys are used to sign and verify credentials.
-If you don't have an Azure Key Vault instance available, follow [these steps](/azure/key-vault/general/quick-create-portal) to create a key vault using the Azure portal.
+If you don't have an Azure Key Vault instance available, follow [these steps](../../key-vault/general/quick-create-portal.md) to create a key vault using the Azure portal.
>[!NOTE] >By default, the account that creates a vault is the only one with access. The Verified ID service needs access to the key vault. You must configure your key vault with access policies allowing the account used during configuration to create and delete keys. The account used during configuration also requires permissions to sign so that it can create the domain binding for Verified ID. If you use the same account while testing, modify the default policy to grant the account sign permission, in addition to the default permissions granted to vault creators.
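For example, here's a minimal Azure CLI sketch for granting those key permissions to the configuring account; the vault name and user principal name are placeholders for your environment:

```azurecli
# Grant the account used during Verified ID configuration the key permissions it needs
# (vault name and UPN are placeholders).
az keyvault set-policy --name myvault \
  --upn admin@contoso.com \
  --key-permissions get list create delete sign
```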
Once you have successfully completed the verification steps, you are ready
## Next steps - [Learn how to issue Microsoft Entra Verified ID credentials from a web application](verifiable-credentials-configure-issuer.md).-- [Learn how to verify Microsoft Entra Verified ID credentials](verifiable-credentials-configure-verifier.md).
+- [Learn how to verify Microsoft Entra Verified ID credentials](verifiable-credentials-configure-verifier.md).
aks Command Invoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/command-invoke.md
az aks command invoke \
The above example runs three `helm` commands on the *myAKSCluster* cluster in *myResourceGroup*.
-## Use `command invoke` to run commands an with attached file or directory
+## Use `command invoke` to run commands with an attached file or directory
Use `az aks command invoke --command` to run commands on your cluster and `--file` to attach a file or directory for use by those commands. For example:
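A minimal sketch, using the cluster and resource group names from the earlier example and a placeholder manifest file:

```azurecli
# Attach a local manifest to the command invocation and apply it inside the cluster
# (deployment.yaml is a placeholder file name).
az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl apply -f deployment.yaml -n default" \
  --file deployment.yaml
```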
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
Nightly updates apply security updates to the OS on the node, but the node image
For Windows Server nodes, Windows Update doesn't automatically run and apply the latest updates. Schedule Windows Server node pool upgrades in your AKS cluster around the regular Windows Update release cycle and your own validation process. This upgrade process creates nodes that run the latest Windows Server image and patches, then removes the older nodes. For more information on this process, see [Upgrade a node pool in AKS][nodepool-upgrade].
+### Node authorization
+Node authorization is a special-purpose authorization mode that specifically authorizes API requests made by kubelets to protect against East-West attacks. Node authorization is enabled by default on AKS 1.24+ clusters.
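To confirm whether a cluster is running a version where node authorization is enabled by default, you can check its Kubernetes version. A minimal sketch with placeholder names:

```azurecli
# Show the Kubernetes version of the cluster (resource group and cluster names are placeholders).
az aks show --resource-group myResourceGroup --name myAKSCluster \
  --query kubernetesVersion --output tsv
```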
+ ### Node deployment Nodes are deployed into a private virtual network subnet, with no public IP addresses assigned. For troubleshooting and management purposes, SSH is enabled by default and only accessible using the internal IP address.
api-management Devops Api Development Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/devops-api-development-templates.md
Title: CI/CD for Azure API Management using ARM templates
-description: Introduction to API DevOps with Azure API Management, using Azure Resource Manager templates to manage API deployments in a CI/CD pipeline
+ Title: Use DevOps and CI/CD to publish APIs
+description: Introduction to API DevOps with Azure API Management
-+ Previously updated : 10/09/2020- Last updated : 08/15/2022+
-# CI/CD for API Management using Azure Resource Manager templates
+# Use DevOps and CI/CD to publish APIs
-This article shows you how to use API DevOps with Azure API Management, through Azure Resource Manager templates. With the strategic value of APIs, a continuous integration and continuous deployment (CI/CD) pipeline has become an important aspect of API development. It allows organizations to automate deployment of API changes without error-prone manual steps, detect issues earlier, and ultimately deliver value to users faster.
+With the strategic value of APIs in the enterprise, adopting DevOps continuous integration (CI) and deployment (CD) techniques has become an important aspect of API development. This article discusses the decisions you'll need to make to adopt DevOps principles for the management of APIs.
-For details, tools, and code samples to implement the DevOps approach described in this article, see the open-source [Azure API Management DevOps Resource Kit](https://github.com/Azure/azure-api-management-devops-resource-kit) in GitHub. Because customers bring a wide range of engineering cultures and existing automation solutions, the approach isn't a one-size-fits-all solution.
+API DevOps consists of three parts:
-For architectural guidance, see:
-* **API Management landing zone accelerator**: [Reference architecture](/azure/architecture/example-scenario/integration/app-gateway-internal-api-management-function?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json) and [design guidance](/azure/cloud-adoption-framework/scenarios/app-platform/api-management/landing-zone-accelerator?toc=%2Fazure%2Fapi-management%2Ftoc.json&bc=/azure/api-management/breadcrumb/toc.json)
+Each part of the API DevOps pipeline is discussed below.
-## The problem
+## API definition
-Organizations today normally have multiple deployment environments (such as development, testing, and production) and use separate API Management instances for each environment. Some instances are shared by multiple development teams, who are responsible for different APIs with different release cadences.
+An API developer writes an API definition by providing a specification, settings (such as logging, diagnostics, and backend settings), and policies to be applied to the API. The API definition provides the information required to provision the API on an Azure API Management service. The specification may be based on a standards-based API specification (such as [WSDL][1], [OpenAPI][2], or [GraphQL][3]), or it may be defined using the Azure Resource Manager (ARM) APIs (for example, an ARM template describing the API and operations). The API definition will change over time and should be considered "source code". Ensure that the API definition is stored under source code control and has appropriate review before adoption.
-As a result, customers face the following challenges:
+There are several tools to assist producing the API definition:
-* How to automate deployment of APIs into API Management
-* How to migrate configurations from one environment to another
-* How to avoid interference between different development teams that share the same API Management instance
+* The [Azure API Management DevOps Resource Toolkit][4] includes two tools that provide an Azure Resource Manager (ARM) template. The _extractor_ creates an ARM template by extracting an API definition from an API Management service. The _creator_ produces the ARM template from a YAML specification. The DevOps Resource Toolkit supports SOAP, REST, and GraphQL APIs.
+* The [Azure API Ops Toolkit][5] provides a workflow built on top of a [git][21] source code control system (such as [GitHub][22] or [Azure Repos][23]). It uses an _extractor_ similar to the DevOps Resource Toolkit to produce an API definition that is then applied to a target API Management service. API Ops supports REST only at this time.
+* The [dotnet-apim][6] tool converts a well-formed YAML definition into an ARM template for later deployment. The tool is focused on REST APIs.
+* [Terraform][7] is an alternative to Azure Resource Manager to configure resources in Azure. You can create a Terraform configuration (together with policies) to implement the API in the same way that an ARM template is created.
-## Manage configurations in Resource Manager templates
+You can also use IDE-based tools for editors such as [Visual Studio Code][8] to produce the artifacts necessary to define the API. For instance, there are [over 30 plugins for editing OpenAPI specification files][9] on the Visual Studio Code Marketplace. You can also use code generators to create the artifacts. The [CADL language][10] lets you easily create high-level building blocks and then compile them into a standard API definition format such as OpenAPI.
-The following image illustrates the proposed approach.
+## API approval
+Once the API definition has been produced, the developer will submit the API definition for review and approval. If using a git-based source code control system (such as [GitHub][22] or [Azure Repos][23]), the submission can be done via [Pull Request][11]. A pull request informs others of changes that have been proposed to the API definition. Once the approval gates have been confirmed, an approver will merge the pull request into the main repository to signify that the API definition can be deployed to production. The pull request process empowers the developer to remediate any issues found during the approval process.
-In this example, there are two deployment environments: *Development* and *Production*. Each has its own API Management instance.
+Both GitHub and Azure Repos allow approval pipelines to be configured that run when a pull request is submitted. You can configure the approval pipelines to run tools such as:
-* API developers have access to the Development instance and can use it for developing and testing their APIs.
-* A designated team called the *API publishers* manages the Production instance.
+* API specification linters such as [Spectral][12] to ensure that the definition meets API standards required by the organization.
+* Breaking change detection using tools such as [openapi-diff][13].
+* Security audit and assessment tools. [OWASP maintains a list of tools][14] for security scanning.
+* Automated API test frameworks such as [Newman][15], a test runner for [Postman collections][16].
-The key in this proposed approach is to keep all API Management configurations in [Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). The organization should keep these templates in a source control system such as Git. As illustrated in the image, a Publisher repository contains all configurations of the Production API Management instance in a collection of templates:
+> [!NOTE]
+> Azure APIs must conform to a [strict set of guidelines][26] that you can use as a starting point for your own API guidelines. There is a [Spectral configuration][27] for enforcing the guidelines.
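As an illustration, a minimal sketch of running a lint step with the Spectral CLI against an OpenAPI file (the file paths are placeholders):

```bash
# Install the Spectral CLI and lint an OpenAPI definition against a local ruleset
# (file paths are placeholders).
npm install -g @stoplight/spectral-cli
spectral lint apis/orders/openapi.yaml --ruleset .spectral.yaml
```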
-|Template |Description |
-|||
-|Service template | Service-level configurations of the API Management instance, such as pricing tier and custom domains. |
-|Shared templates | Shared resources throughout an API Management instance, such as groups, products, and loggers. |
-|API templates | Configurations of APIs and their subresources: operations, policies, diagnostic settings. |
-|Master (main) template | Ties everything together by [linking](../azure-resource-manager/templates/linked-templates.md) to all templates and deploying them in order. To deploy all configurations to an API Management instance, deploy the main template. You can also deploy each template individually. |
+Once the automated tools have been run, the API definition is reviewed by the human eye. Tools won't catch all problems. A human reviewer ensures that the API definition meets the organizational criteria for APIs, including adherence to security, privacy, and consistency guidelines.
-API developers will fork the Publisher repository to a Developer repository and work on the changes for their APIs. In most cases, they focus on the API templates for their APIs and don't need to change the shared or service templates.
+## API publication
-## Migrate configurations to templates
-API developers face challenges when working with Resource Manager templates:
+The API definition will be published to an API Management service through a release pipeline. The tools used to publish the API definition depend on the tool used to produce the API definition:
-* API developers often work with the [OpenAPI Specification](https://github.com/OAI/OpenAPI-Specification) and might not be familiar with Resource Manager schemas. Authoring templates manually might be error-prone.
+* If using the [Azure API Management DevOps Resource Toolkit][4] or [dotnet-apim][6], the API definition is represented as an ARM template. Tasks are available for [Azure Pipelines][17] and [GitHub Actions][18] to deploy an ARM template (see the example after this list).
+* If using the [Azure API Ops Toolkit][5], the toolkit includes a publisher that writes the API definition to the service.
+* If using [Terraform][7], CLI tools will deploy the API definition on your service. There are tasks available for [Azure Pipelines][19] and [GitHub Actions][20].
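As a minimal sketch of the ARM-template path, deploying an extracted template with the Azure CLI (the resource group, template file, and parameter names are placeholders):

```azurecli
# Deploy an API Management ARM template produced by the extractor or creator
# (resource group, template file, and parameter names are placeholders).
az deployment group create \
  --resource-group myResourceGroup \
  --template-file templates/orders-api.template.json \
  --parameters ApimServiceName=my-apim-service
```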
- A tool called [Creator](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#creator) in the resource kit can help automate the creation of API templates based on an Open API Specification file. Additionally, developers can supply API Management policies for an API in XML format.
+> **Can I use other source code control and CI/CD systems?**
+>
+> Yes. The process described works with any source code control system (although API Ops does require that the source code control system is [git][21] based). Similarly, you can use any CI/CD platform as long as it can be triggered by a check-in and run command line tools that communicate with Azure.
-* For customers who are already using API Management, another challenge is to extract existing configurations into Resource Manager templates. For those customers, a tool called [Extractor](https://github.com/Azure/azure-api-management-devops-resource-kit/blob/main/src/README.md#Extractor) in the resource kit can help generate templates by extracting configurations from their API Management instances.
+## Best practices
-## Workflow
+There's no industry standard for setting up a DevOps pipeline for publishing APIs, and none of the tools mentioned will work in all situations. However, we see that most situations are covered by using a combination of the following tools and
-* After API developers have finished developing and testing an API, and have generated the API templates, they can submit a pull request to merge the changes to the publisher repository.
+* [Azure Repos][23] stores the API definitions in a [git][21] repository.
+* [Azure Pipelines][17] runs the automated API approval and API publication processes.
+* [Azure API Ops Toolkit][5] provides tools and workflows for publishing APIs.
-* API publishers can validate the pull request and make sure the changes are safe and compliant. For example, they can check if only HTTPS is allowed to communicate with the API. Most validations can be automated as a step in the CI/CD pipeline.
+Based on what we've seen work best in customer deployments, we recommend the following practices:
-* Once the changes are approved and merged successfully, API publishers can choose to deploy them to the Production instance either on schedule or on demand. The deployment of the templates can be automated using [GitHub Actions](https://docs.github.com/en/actions), [Azure Pipelines](/azure/devops/pipelines), [Azure PowerShell](../azure-resource-manager/templates/deploy-powershell.md), [Azure CLI](../azure-resource-manager/templates/deploy-cli.md), or other tools.
+* Set up either [GitHub][22] or [Azure Repos][23] for your source code control system. This choice also determines which pipeline runner you can use: GitHub can use [Azure Pipelines][17] or [GitHub Actions][18], whereas Azure Repos must use Azure Pipelines.
+* Set up an Azure API Management service for each API developer so that they can develop API definitions along with the API service. Use the consumption or developer SKU when creating the service.
+* Use [policy fragments][24] to reduce the new policy that developers need to write for each API.
+* Use the [Azure API Ops Toolkit][5] to extract a working API definition from the developer service.
+* Set up an API approval process that runs on each pull request. The API approval process should include breaking change detection, linting, and automated API testing.
+* Use the [Azure API Ops Toolkit][5] publisher to publish the API to your production API Management service.
+Review [Automated API deployments with API Ops][28] in the Azure Architecture Center for more details on how to configure and run a CI/CD deployment pipeline with API Ops.
-With this approach, an organization can automate the deployment of API changes into API Management instances, and it's easy to promote changes from one environment to another. Because different API development teams will be working on different sets of API templates and files, it prevents interference between different teams.
+## References
-## Video
+* [Azure DevOps Services][25] includes [Azure Repos][23] and [Azure Pipelines][17].
+* [Azure API Ops Toolkit][5] provides a workflow for API Management DevOps.
+* [Spectral][12] provides a linter for OpenAPI specifications.
+* [openapi-diff][13] provides a breaking change detector for OpenAPI v3 definitions.
+* [Newman][15] provides an automated test runner for Postman collections.
-> [!VIDEO https://www.youtube.com/embed/4Sp2Qvmg6j8]
-
-## Next steps
--- See the open-source [Azure API Management DevOps Resource Kit](https://github.com/Azure/azure-api-management-devops-resource-kit) for additional information, tools, and sample templates.
+<!-- Links -->
+[1]: https://www.w3.org/TR/wsdl20/
+[2]: https://www.openapis.org/
+[3]: https://graphql.org/learn/schema/
+[4]: https://github.com/Azure/azure-api-management-devops-resource-kit
+[5]: https://github.com/Azure/APIOps
+[6]: https://github.com/mirsaeedi/dotnet-apim
+[7]: https://www.terraform.io/
+[8]: https://code.visualstudio.com/
+[9]: https://marketplace.visualstudio.com/search?term=OpenAPI&target=VSCode&category=All%20categories&sortBy=Relevance
+[10]: https://github.com/microsoft/cadl
+[11]: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/about-pull-requests
+[12]: https://stoplight.io/open-source/spectral
+[13]: https://github.com/Azure/openapi-diff
+[14]: https://owasp.org/www-community/api_security_tools
+[15]: https://github.com/postmanlabs/newman
+[16]: https://learning.postman.com/docs/getting-started/creating-the-first-collection/
+[17]: /azure/azure-resource-manager/templates/deployment-tutorial-pipeline
+[18]: https://github.com/marketplace/actions/deploy-azure-resource-manager-arm-template
+[19]: https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
+[20]: https://learn.hashicorp.com/tutorials/terraform/github-actions
+[21]: https://git-scm.com/
+[22]: https://github.com/
+[23]: /azure/devops/repos/get-started/what-is-repos
+[24]: ./policy-fragments.md
+[25]: https://azure.microsoft.com/services/devops/
+[26]: https://github.com/microsoft/api-guidelines/blob/vNext/azure/Guidelines.md
+[27]: https://github.com/Azure/azure-api-style-guide
+[28]: /azure/architecture/example-scenario/devops/automated-api-deployments-apiops
app-service Configure Authentication Provider Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-aad.md
At present, this allows _any_ client application in your Azure AD tenant to requ
You have now configured a daemon client application that can access your App Service app using its own identity. > [!NOTE]
-> The access tokens provided to your app via EasyAuth do not have scopes for other APIs, such as Graph, even if your application has permissions to access those APIs. To use these APIs, you will need to use Azure Resource Manager to configure the token returned so it can be used to authenticate to other services. For more information, see [Tutorial: Access Microsoft Graph from a secured .NET app as the user](/azure/app-service/scenario-secure-app-access-microsoft-graph-as-user?tabs=azure-resource-explorer) .
+> The access tokens provided to your app via EasyAuth do not have scopes for other APIs, such as Graph, even if your application has permissions to access those APIs. To use these APIs, you will need to use Azure Resource Manager to configure the token returned so it can be used to authenticate to other services. For more information, see [Tutorial: Access Microsoft Graph from a secured .NET app as the user](./scenario-secure-app-access-microsoft-graph-as-user.md?tabs=azure-resource-explorer) .
## Best practices
Regardless of the configuration you use to set up authentication, the following
* [Tutorial: Authenticate and authorize users end-to-end in Azure App Service](tutorial-auth-aad.md) <!-- URLs. -->
-[Azure portal]: https://portal.azure.com/
+[Azure portal]: https://portal.azure.com/
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
The following features are supported for Linux containers:
- Mapping `/mounts`, `mounts/foo/bar`, `/`, and `/mounts/foo.bar/` to custom-mounted storage is not supported (you can only use /mounts/pathname for mounting custom storage to your web app.) - Storage mounts cannot be used together with clone settings option during [deployment slot](deploy-staging-slots.md) creation. - Storage mounts are not backed up when you [back up your app](manage-backup.md). Be sure to follow best practices to back up the Azure Storage accounts. -- Only Azure Files [SMB](/azure/storage/files/files-smb-protocol) are supported. Azure Files [NFS](/azure/storage/files/files-nfs-protocol) is not currently supported for Linux App Services.
+- Only Azure Files [SMB](../storage/files/files-smb-protocol.md) shares are supported. Azure Files [NFS](../storage/files/files-nfs-protocol.md) is not currently supported for Linux App Services.
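For reference, a minimal Azure CLI sketch that mounts an Azure Files (SMB) share under `/mounts`, consistent with the limitations above; all names and the access key are placeholders:

```azurecli
# Mount an Azure Files (SMB) share into the app at /mounts/myshare (all values are placeholders).
az webapp config storage-account add \
  --resource-group myResourceGroup \
  --name my-web-app \
  --custom-id myshare \
  --storage-type AzureFiles \
  --account-name mystorageaccount \
  --share-name myshare \
  --access-key "<storage-account-access-key>" \
  --mount-path /mounts/myshare
```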
::: zone-end
To validate that the Azure Storage is mounted successfully for the app:
- [Configure a custom container](configure-custom-container.md?pivots=platform-linux). - [Video: How to mount Azure Storage as a local share](https://www.youtube.com/watch?v=OJkvpWYr57Y).
app-service Configure Language Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-php.md
az webapp config show --resource-group <resource-group-name> --name <app-name> -
To show all supported PHP versions, run the following command in the [Cloud Shell](https://shell.azure.com): ```azurecli-interactive
-az webapp list-runtimes --os windows | grep php
+az webapp list-runtimes --os windows | grep PHP
``` ::: zone-end
app-service Configure Language Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-python.md
App Service ignores any errors that occur when processing a custom startup comma
gunicorn --bind=0.0.0.0 --timeout 600 --workers=4 --chdir <module_path> <module>.wsgi ```
- For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you are using scale rules to scale your web app up and down, you can dynamically set the number of workers using the `NUM_CORES` environment variable in our startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers)
+ For more information, see [Running Gunicorn](https://docs.gunicorn.org/en/stable/run.html) (docs.gunicorn.org). If you are using auto-scale rules to scale your web app up and down, you should also dynamically set the number of gunicorn workers using the `NUM_CORES` environment variable in your startup command, for example: `--workers $((($NUM_CORES*2)+1))`. For more information on setting the recommended number of gunicorn workers, see [the Gunicorn FAQ](https://docs.gunicorn.org/en/stable/design.html#how-many-workers)
- **Enable production logging for Django**: Add the `--access-logfile '-'` and `--error-logfile '-'` arguments to the command line:
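Putting the worker-count and logging options together, a minimal sketch of a startup command for a Django app; the project module name `myproject` is a placeholder:

```bash
# Startup command combining a dynamic worker count and container log streaming
# ("myproject" is a placeholder Django module name).
gunicorn --bind=0.0.0.0 --timeout 600 \
  --workers $((($NUM_CORES*2)+1)) \
  --access-logfile '-' --error-logfile '-' \
  --chdir myproject myproject.wsgi
```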
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
App Service Environment v3 is available in the following regions:
### Azure Public:
-| Region | Normal and dedicated host | Availability zone support |
-| -- | :-: | :-: |
-| Australia East | x | x |
-| Australia Southeast | x | |
-| Brazil South | x | x |
-| Canada Central | x | x |
-| Canada East | x | |
-| Central India | x | x |
-| Central US | x | x |
-| East Asia | x | x |
-| East US | x | x |
-| East US 2 | x | x |
-| France Central | x | x |
-| Germany West Central | x | x |
-| Japan East | x | x |
-| Korea Central | x | x |
-| North Central US | x | |
-| North Europe | x | x |
-| Norway East | x | x |
-| South Africa North | x | x |
-| South Central US | x | x |
-| Southeast Asia | x | x |
-| Sweden Central | x | x |
-| Switzerland North | x | x |
-| UAE North | x | |
-| UK South | x | x |
-| UK West | x | |
-| West Central US | x | |
-| West Europe | x | x |
-| West US | x | |
-| West US 2 | x | x |
-| West US 3 | x | x |
+| Region | Normal and dedicated host | Availability zone support |
+| -- | :--: | :-: |
+| Australia East | ✅ | ✅ |
+| Australia Southeast | ✅ | |
+| Brazil South | ✅ | ✅ |
+| Canada Central | ✅ | ✅ |
+| Canada East | ✅ | |
+| Central India | ✅ | ✅ |
+| Central US | ✅ | ✅ |
+| East Asia | ✅ | ✅ |
+| East US | ✅ | ✅ |
+| East US 2 | ✅ | ✅ |
+| France Central | ✅ | ✅ |
+| Germany West Central | ✅ | ✅ |
+| Japan East | ✅ | ✅ |
+| Korea Central | ✅ | ✅ |
+| North Central US | ✅ | |
+| North Europe | ✅ | ✅ |
+| Norway East | ✅ | ✅ |
+| South Africa North | ✅ | ✅ |
+| South Central US | ✅ | ✅ |
+| Southeast Asia | ✅ | ✅ |
+| Sweden Central | ✅ | ✅ |
+| Switzerland North | ✅ | ✅ |
+| UAE North | ✅ | |
+| UK South | ✅ | ✅ |
+| UK West | ✅ | |
+| West Central US | ✅ | |
+| West Europe | ✅ | ✅ |
+| West US | ✅ | |
+| West US 2 | ✅ | ✅ |
+| West US 3 | ✅ | ✅ |
### Azure Government: | Region | Normal and dedicated host | Availability zone support | | -- | :-: | :-: |
-| US Gov Texas | x | |
-| US Gov Arizona | x | |
-| US Gov Virginia | x | |
+| US Gov Texas | ✅ | |
+| US Gov Arizona | ✅ | |
+| US Gov Virginia | ✅ | |
## App Service Environment v2
app-service Quickstart Wordpress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-wordpress.md
When no longer needed, you can delete the resource group, App service, and all r
:::image type="content" source="./media/quickstart-wordpress/delete-resource-group.png" alt-text="Delete resource group."::: ## Change MySQL password
-The WordPress configuration is modified to use [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. To change the MySQL database password, see [update admin password](/azure/mysql/single-server/how-to-create-manage-server-portal#update-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
+The WordPress configuration is modified to use [Application Settings](reference-app-settings.md#wordpress) to connect to the MySQL database. To change the MySQL database password, see [update admin password](../mysql/single-server/how-to-create-manage-server-portal.md#update-admin-password). Whenever the MySQL database credentials are changed, the [Application Settings](reference-app-settings.md#wordpress) also need to be updated. The [Application Settings for MySQL database](reference-app-settings.md#wordpress) begin with the **`DATABASE_`** prefix. For more information on updating MySQL passwords, see [WordPress on App Service](https://azure.github.io/AppService/2022/02/23/WordPress-on-App-Service-Public-Preview.html#known-limitations).
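For example, after rotating the MySQL password you can update the corresponding application setting with the Azure CLI; the app and resource group names are placeholders:

```azurecli
# Update the WordPress database password app setting after changing the MySQL password
# (app name and resource group are placeholders).
az webapp config appsettings set \
  --resource-group myResourceGroup \
  --name my-wordpress-app \
  --settings DATABASE_PASSWORD="<new-mysql-password>"
```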
## Change WordPress admin password
Congratulations, you've successfully completed this quickstart!
> [Tutorial: PHP app with MySQL](tutorial-php-mysql-app.md) > [!div class="nextstepaction"]
-> [Configure PHP app](configure-language-php.md)
+> [Configure PHP app](configure-language-php.md)
app-service Reference App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/reference-app-settings.md
APACHE_RUN_GROUP | RUN sed -i 's!User ${APACHE_RUN_GROUP}!Group www-data!g' /etc
> |DATABASE_HOST|Database|-|-|Database host used to connect to WordPress.| > |DATABASE_NAME|Database|-|-|Database name used to connect to WordPress.| > |DATABASE_USERNAME|Database|-|-|Database username used to connect to WordPress.|
-> |DATABASE_PASSWORD|Database|-|-|Database password used to connect to the MySQL database. To change the MySQL database password, see [update admin password](/azure/mysql/single-server/how-to-create-manage-server-portal#update-admin-password). Whenever the MySQL database password is changed, the Application Settings also need to be updated. |
+> |DATABASE_PASSWORD|Database|-|-|Database password used to connect to the MySQL database. To change the MySQL database password, see [update admin password](../mysql/single-server/how-to-create-manage-server-portal.md#update-admin-password). Whenever the MySQL database password is changed, the Application Settings also need to be updated. |
> |WORDPRESS_ADMIN_EMAIL|Deployment only|-|-|WordPress admin email.| > |WORDPRESS_ADMIN_PASSWORD|Deployment only|-|-|WordPress admin password. This is only for deployment purposes. Modifying this value has no effect on the WordPress installation. To change the WordPress admin password, see [resetting your password](https://wordpress.org/support/article/resetting-your-password/#to-change-your-password).| > |WORDPRESS_ADMIN_USER|Deployment only|-|-|WordPress admin username|
HTTPSCALE_FORWARD_REQUEST
IS_VALID_STAMP_TOKEN NEEDS_SITE_RESTRICTED_TOKEN HTTP_X_MS_PRIVATELINK_ID
- -->
+ -->
app-service Tutorial Java Quarkus Postgresql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-java-quarkus-postgresql-app.md
# Tutorial: Build a Quarkus web app with Azure App Service on Linux and PostgreSQL This tutorial walks you through the process of building, configuring, deploying, and scaling Java web apps on Azure.
-When you are finished, you will have a [Quarkus](https://quarkus.io) application storing data in [PostgreSQL](/azure/postgresql) database running on [Azure App Service on Linux](overview.md).
+When you are finished, you will have a [Quarkus](https://quarkus.io) application storing data in a [PostgreSQL](../postgresql/index.yml) database running on [Azure App Service on Linux](overview.md).
![Screenshot of Quarkus application storing data in PostgreSQL.](./media/tutorial-java-quarkus-postgresql/quarkus-crud-running-locally.png)
In this tutorial, you learn how to:
## Clone the sample app and prepare the repo
-This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus REST API backed by [Azure Database for PostgreSQL](/azure/postgresql). The code for the app is available [on GitHub](https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-orm-panache-quickstart). To learn more about writing Java apps using Quarkus and PostgreSQL, see the [Quarkus Hibernate ORM with Panache Guide](https://quarkus.io/guides/hibernate-orm-panache) and the [Quarkus Datasource Guide](https://quarkus.io/guides/datasource).
+This tutorial uses a sample Fruits list app with a web UI that calls a Quarkus REST API backed by [Azure Database for PostgreSQL](../postgresql/index.yml). The code for the app is available [on GitHub](https://github.com/quarkusio/quarkus-quickstarts/tree/main/hibernate-orm-panache-quickstart). To learn more about writing Java apps using Quarkus and PostgreSQL, see the [Quarkus Hibernate ORM with Panache Guide](https://quarkus.io/guides/hibernate-orm-panache) and the [Quarkus Datasource Guide](https://quarkus.io/guides/datasource).
Run the following commands in your terminal to clone the sample repo and set up the sample app environment.
and
Learn more about running Java apps on App Service on Linux in the developer guide. > [!div class="nextstepaction"]
-> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
+> [Java in App Service Linux dev guide](configure-language-java.md?pivots=platform-linux)
application-gateway Configuration Http Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-http-settings.md
This feature is useful when you want to keep a user session on the same server a
> Some vulnerability scans may flag the Application Gateway affinity cookie because the Secure or HttpOnly flags are not set. These scans do not take into account that the data in the cookie is generated using a one-way hash. The cookie does not contain any user information and is used purely for routing.
-The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://tools.ietf.org/id/draft-ietf-httpbis-rfc6265bis-03.html#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
+The [Chromium browser](https://www.chromium.org/Home) [v80 update](https://chromiumdash.appspot.com/schedule) brought a mandate where HTTP cookies without [SameSite](https://datatracker.ietf.org/doc/html/draft-ietf-httpbis-rfc6265bis-03#rfc.section.5.3.7) attribute have to be treated as SameSite=Lax. In the case of CORS (Cross-Origin Resource Sharing) requests, if the cookie has to be sent in a third-party context, it has to use *SameSite=None; Secure* attributes and it should be sent over HTTPS only. Otherwise, in an HTTP only scenario, the browser doesn't send the cookies in the third-party context. The goal of this update from Chrome is to enhance security and to avoid Cross-Site Request Forgery (CSRF) attacks.
To support this change, starting February 17 2020, Application Gateway (all the SKU types) will inject another cookie called *ApplicationGatewayAffinityCORS* in addition to the existing *ApplicationGatewayAffinity* cookie. The *ApplicationGatewayAffinityCORS* cookie has two more attributes added to it (*"SameSite=None; Secure"*) so that sticky sessions are maintained even for cross-origin requests.
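If you want to confirm the affinity cookies on a listener with cookie-based affinity enabled, one way is to inspect the response headers; the host name below is a placeholder:

```bash
# Inspect Set-Cookie headers returned through the gateway (host name is a placeholder).
curl -sI https://contoso-app.example.com/ | grep -i 'set-cookie'
```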
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the user-assigned managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant. -- To assign an Azure role, you must have ```Microsoft.Authorization/roleAssignments/write``` permissions, such as [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner).
+- To assign an Azure role, you must have ```Microsoft.Authorization/roleAssignments/write``` permissions, such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
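For example, a minimal Azure CLI sketch that grants the user-assigned identity a role on the target resource; the principal ID, role, and scope are placeholders:

```azurecli
# Grant the user-assigned managed identity a role on the resource the runbook will access
# (principal ID, role, and scope are placeholders).
az role assignment create \
  --assignee-object-id "<identity-principal-id>" \
  --assignee-principal-type ServicePrincipal \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/<provider>/<resource-type>/<resource-name>"
```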
## Add user-assigned managed identity for Azure Automation account
print(response.text)
- If you need to disable a managed identity, see [Disable your Azure Automation account managed identity](disable-managed-identity-for-automation.md). -- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
+- For an overview of Azure Automation account security, see [Automation account authentication overview](automation-security-overview.md).
automation Automation Create Standalone Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-standalone-account.md
The following table describes the fields on the **Basics** tab.
The following image shows a standard configuration for a new Automation account. ### Advanced
You can choose to enable managed identities later, and the Automation account is
The following image shows a standard configuration for a new Automation account.
-### Tags tab
+### Networking
+
+On the **Networking** tab, you can configure connectivity to the Automation account, either publicly via public IP addresses or privately by using an [Azure Automation Private Link](./how-to/private-link-security.md). Azure Automation Private Link connects one or more private endpoints (and therefore the virtual networks they are contained in) to your Automation account resource.
+
+The following image shows a standard configuration for a new Automation account.
++
+### Tags
On the **Tags** tab, you can specify Resource Manager tags to help organize your Azure resources. For more information, see [Tag resources, resource groups, and subscriptions for logical organization](../azure-resource-manager/management/tag-resources.md).
-### Review + create tab
+### Review + create
When you navigate to the **Review + create** tab, Azure runs validation on the Automation account settings that you have chosen. If validation passes, you can proceed to create the Automation account.
automation Automation Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-security-overview.md
For details on using managed identities, see [Enable managed identity for Azure
## Run As accounts Run As accounts in Azure Automation provide authentication for managing Azure Resource Manager resources or resources deployed on the classic deployment model. There are two types of Run As accounts in Azure Automation:
+- Azure Run As Account
+- Azure Classic Run As Account
To create or renew a Run As account, permissions are needed at three levels:
For runbooks that use Hybrid Runbook Workers on Azure VMs, you can use [runbook
* To create an Automation account from the Azure portal, see [Create a standalone Azure Automation account](automation-create-standalone-account.md). * If you prefer to create your account using a template, see [Create an Automation account using an Azure Resource Manager template](quickstart-create-automation-account-template.md). * For authentication using Amazon Web Services, see [Authenticate runbooks with Amazon Web Services](automation-config-aws-account.md).
-* For a list of Azure services that support the managed identities for Azure resources feature, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
+* For a list of Azure services that support the managed identities for Azure resources feature, see [Services that support managed identities for Azure resources](../active-directory/managed-identities-azure-resources/services-support-managed-identities.md).
automation Automation Solution Vm Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-solution-vm-management.md
The Start/Stop VMs during off-hours feature starts or stops enabled Azure VMs. It starts or stops machines on user-defined schedules, provides insights through Azure Monitor logs, and sends optional emails by using [action groups](../azure-monitor/alerts/action-groups.md). The feature can be enabled on both Azure Resource Manager and classic VMs for most scenarios.

> [!NOTE]
-> Before you install this version (v1), we would like you to know about the [next version](../azure-functions/start-stop-vms/overview.md), which is in preview right now. This new version (v2) offers all the same functionality as this one, but is designed to take advantage of newer technology in Azure. It adds some of the commonly requested features from customers, such as multi-subscription support from a single Start/Stop instance.
->
-> Start/Stop VMs during off-hours (v1) will be deprecated soon and the date will be announced once V2 moves to general availability (GA).
+> Before you install version 1 (v1), we recommend that you learn about [version 2](../azure-functions/start-stop-vms/overview.md), which is now generally available. Version 2 offers all the capabilities of v1, is designed to take advantage of newer technology in Azure, and adds commonly requested capabilities such as multi-subscription support from a single Start/Stop instance.
+
+> Start/Stop VMs during off-hours (v1) will be deprecated soon.
This feature uses the [Start-AzVm](/powershell/module/az.compute/start-azvm) cmdlet to start VMs and the [Stop-AzVM](/powershell/module/az.compute/stop-azvm) cmdlet to stop them.
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
To install and use Hybrid Worker extension using REST API, follow these steps. T
```
-1. Follow the steps [here](/azure/active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
+1. Follow the steps [here](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md#enable-system-assigned-managed-identity-on-an-existing-vm) to enable the System-assigned managed identity on the VM.
1. Get the automation account details using this API call.
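As an alternative to the portal steps linked in the first step above, the system-assigned managed identity can also be enabled on the VM with the Azure CLI. This is a minimal sketch with hypothetical resource and VM names:

```azurecli
# Enable the system-assigned managed identity on an existing Azure VM (names are hypothetical).
az vm identity assign \
  --resource-group myResourceGroup \
  --name myVM
```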
Using [VM insights](../azure-monitor/vm/vminsights-overview.md), you can monitor
- To learn about Azure VM extensions, see [Azure VM extensions and features for Windows](../virtual-machines/extensions/features-windows.md) and [Azure VM extensions and features for Linux](../virtual-machines/extensions/features-linux.md). -- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
+- To learn about VM extensions for Arc-enabled servers, see [VM extension management with Azure Arc-enabled servers](../azure-arc/servers/manage-vm-extensions.md).
automation Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/managed-identity.md
This issue occurs when you don't have the following permissions for the user-ass
> The above permissions are granted by default to Managed Identity Operator and Managed Identity Contributor.

### Resolution
-Ensure that you have [Identity Operator role permission](/azure/role-based-access-control/built-in-roles#managed-identity-operator) to add the user-assigned managed identity to your Automation account.
+Ensure that you have [Identity Operator role permission](../../role-based-access-control/built-in-roles.md#managed-identity-operator) to add the user-assigned managed identity to your Automation account.
## Scenario: Runbook fails with "this.Client.SubscriptionId cannot be null." error message
availability-zones Migrate App Gateway V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-app-gateway-v2.md
# Migrate Application Gateway and WAF deployments to availability zone support
-[Application Gateway Standard v2](/azure/application-gateway/overview-v2) and Application Gateway with [WAF v2](/azure/web-application-firewall/ag/ag-overview) supports zonal and zone redundant deployments. For more information about zone redundancy, see [Regions and availability zones](az-overview.md).
+[Application Gateway Standard v2](../application-gateway/overview-v2.md) and Application Gateway with [WAF v2](../web-application-firewall/ag/ag-overview.md) supports zonal and zone redundant deployments. For more information about zone redundancy, see [Regions and availability zones](az-overview.md).
If you previously deployed **Azure Application Gateway Standard v2** or **Azure Application Gateway Standard v2 + WAF v2** without zonal support, you must redeploy these services to enable zone redundancy. Two migration options to redeploy these services are described in this article.
Use this option to:
To create a separate Application Gateway, WAF (optional) and IP address:

1. Go to the [Azure portal](https://portal.azure.com).
-2. Follow the steps in [Create an application gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway) or [Create an application gateway with a Web Application Firewall](/azure/web-application-firewall/ag/application-gateway-web-application-firewall-portal) to create a new Application Gateway v2 or Application Gateway v2 + WAF v2, respectively. You can reuse your existing Virtual Network or create a new one, but you must create a new frontend Public IP address.
+2. Follow the steps in [Create an application gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway) or [Create an application gateway with a Web Application Firewall](../web-application-firewall/ag/application-gateway-web-application-firewall-portal.md) to create a new Application Gateway v2 or Application Gateway v2 + WAF v2, respectively. You can reuse your existing Virtual Network or create a new one, but you must create a new frontend Public IP address.
3. Verify that the application gateway and WAF are working as intended.
4. Migrate your DNS configuration to the new public IP address.
5. Delete the old Application gateway and WAF resources.
To delete the Application Gateway and WAF and redeploy:
1. Go to the [Azure portal](https://portal.azure.com).
2. Select **All resources**, and then select the resource group that contains the Application Gateway.
3. Select the Application Gateway resource and then select **Delete**. Type **yes** to confirm deletion, and then click **Delete**.
-4. Follow the steps in [Create an application gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway) or [Create an application gateway with a Web Application Firewall](/azure/web-application-firewall/ag/application-gateway-web-application-firewall-portal) to create a new Application Gateway v2 or Application Gateway v2 + WAF v2, respectively, using the same Virtual Network, subnets, and Public IP address that you used previously.
+4. Follow the steps in [Create an application gateway](../application-gateway/quick-create-portal.md#create-an-application-gateway) or [Create an application gateway with a Web Application Firewall](../web-application-firewall/ag/application-gateway-web-application-firewall-portal.md) to create a new Application Gateway v2 or Application Gateway v2 + WAF v2, respectively, using the same Virtual Network, subnets, and Public IP address that you used previously.
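For either option, the replacement gateway can also be created from the Azure CLI. The following is a minimal, hedged sketch of a zone-redundant Application Gateway v2 deployment; the names, region, capacity, and zone list are illustrative assumptions, and WAF policy association is omitted.

```azurecli
# Create a zone-redundant Application Gateway v2 across zones 1, 2, and 3 (names are hypothetical).
az network application-gateway create \
  --resource-group myResourceGroup \
  --name myAppGatewayV2 \
  --location eastus2 \
  --sku Standard_v2 \
  --capacity 2 \
  --zones 1 2 3 \
  --vnet-name myVNet \
  --subnet myAppGwSubnet \
  --public-ip-address myNewStandardPublicIP \
  --priority 100
```

For full zone redundancy, the frontend public IP address must be a Standard SKU, zone-redundant IP.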
## Next steps
availability-zones Migrate Search Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-search-service.md
++
+ Title: Migrate Azure Cognitive Search to availability zone support
+description: Learn how to migrate Azure Cognitive Search to availability zone support.
+ Last updated : 08/01/2022
+# Migrate Azure Cognitive Search to availability zone support
+
+This guide describes how to migrate Azure Cognitive Search from non-availability zone support to availability zone support.
+
+Azure Cognitive Search services can take advantage of availability zone support [in regions that support availability zones](../search/search-performance-optimization.md#availability-zones). Services created in these regions after availability zone support was enabled, and configured with [two or more replicas](../search/search-capacity-planning.md), automatically use availability zones. Each replica is placed in a different availability zone within the region. If you have more replicas than availability zones, the replicas are distributed across availability zones as evenly as possible.
+
+If a search service was created before availability zone support was enabled in its region, the search service must be recreated to take advantage of availability zone support.
+
+## Prerequisites
+
+The following are the current requirements/limitations for enabling availability zone support:
+
+- The search service must be in [a region that supports availability zones](../search/search-performance-optimization.md#availability-zones)
+- The search service must be created after availability zone support was enabled in its region.
+- The search service must have [at least two replicas](../search/search-performance-optimization.md#high-availability)
+
+## Downtime requirements
+
+Downtime depends on how you carry out the migration. The migration is a side-by-side deployment: you create a new availability zone enabled search service and then redirect traffic from the old service to the new one. For example, if you're using [Azure Front Door](../frontdoor/front-door-overview.md), downtime depends on the time it takes to update Azure Front Door with the new search service's information. Alternatively, you can route traffic to multiple search services at the same time using [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md).
+
+## Migration guidance: Recreate your search service
+
+### When to recreate your search service
+
+If you created your search service in a region that supports availability zones before this support was enabled, you'll need to recreate the search service.
+
+### How to recreate your search service
+
+1. [Create a new search service](../search/search-create-service-portal.md) in the same region as the old search service. This region should [support availability zones on or after the current date](../search/search-performance-optimization.md#availability-zones).
+
+ >[!IMPORTANT]
+ >The [free and basic tiers do not support availability zones](../search/search-sku-tier.md#feature-availability-by-tier), and so they should not be used.
+1. Add [at least two replicas to your new search service](../search/search-capacity-planning.md#add-or-reduce-replicas-and-partitions). Once the search service has at least two replicas, it automatically takes advantage of availability zone support. (A CLI sketch covering the first two steps follows these steps.)
+1. Migrate your data from your old search service to your new search service by rebuilding all of your search indexes from your old service.
+
+To rebuild all of your search indexes, choose one of the following two options:
+ - [Move individual indexes from your old search service to your new one](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/index-backup-restore)
+ - Rebuild indexes from an external data source if one is available.
+1. Redirect traffic from your old search service to your new search service. This may require updates to your application that uses the old search service.
+>[!TIP]
+>Services such as [Azure Front Door](../frontdoor/front-door-overview.md) and [Azure Traffic Manager](../traffic-manager/traffic-manager-overview.md) help simplify this process.
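As referenced above, the first two steps can also be performed together from the Azure CLI. This is a minimal sketch only; the service name, resource group, region, and tier are assumptions, and the tier must be one that supports availability zones (not free or basic).

```azurecli
# Create a new search service with two replicas so it can use availability zones (names are hypothetical).
az search service create \
  --name my-new-search-service \
  --resource-group myResourceGroup \
  --location eastus2 \
  --sku standard \
  --replica-count 2 \
  --partition-count 1
```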
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Learn how to create and deploy ARM templates](../azure-resource-manager/templates/quickstart-create-templates-use-visual-studio-code.md)
+
+> [!div class="nextstepaction"]
+> [ARM Quickstart Templates](https://azure.microsoft.com/resources/templates/)
+
+> [!div class="nextstepaction"]
+> [Learn about high availability in Azure Cognitive Search](../search/search-performance-optimization.md)
azure-app-configuration Enable Dynamic Configuration Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/enable-dynamic-configuration-azure-functions-csharp.md
In this tutorial, you learn how to:
## Reload data from App Configuration
-Azure Functions support running [in-process](/azure/azure-functions/functions-dotnet-class-library) or [isolated-process](/azure/azure-functions/dotnet-isolated-process-guide). The main difference in App Configuration usage between the two modes is how the configuration is refreshed. In the in-process mode, you must make a call in each function to refresh the configuration. In the isolated-process mode, there is support for middleware. The App Configuration middleware, `Microsoft.Azure.AppConfiguration.Functions.Worker`, enables the call to refresh configuration automatically before each function is executed.
+Azure Functions support running [in-process](../azure-functions/functions-dotnet-class-library.md) or [isolated-process](../azure-functions/dotnet-isolated-process-guide.md). The main difference in App Configuration usage between the two modes is how the configuration is refreshed. In the in-process mode, you must make a call in each function to refresh the configuration. In the isolated-process mode, there is support for middleware. The App Configuration middleware, `Microsoft.Azure.AppConfiguration.Functions.Worker`, enables the call to refresh configuration automatically before each function is executed.
1. Update the code that connects to App Configuration and add the data refreshing conditions.
Azure Functions support running [in-process](/azure/azure-functions/functions-do
In this tutorial, you enabled your Azure Functions app to dynamically refresh configuration settings from App Configuration. To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
+> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
azure-app-configuration Howto Set Up Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-set-up-private-access.md
# Set up private access in Azure App Configuration
-In this article, you'll learn how to set up private access for your Azure App Configuration store, by creating a [private endpoint](/azure/private-link/private-endpoint-overview) with Azure Private Link. Private endpoints allow access to your App Configuration store using a private IP address from a virtual network.
+In this article, you'll learn how to set up private access for your Azure App Configuration store, by creating a [private endpoint](../private-link/private-endpoint-overview.md) with Azure Private Link. Private endpoints allow access to your App Configuration store using a private IP address from a virtual network.
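For reference, a private endpoint for an App Configuration store can be created from the Azure CLI along the following lines. This is a minimal sketch only; the names are hypothetical, and `configurationStores` is assumed to be the store's private link sub-resource (group ID).

```azurecli
# Get the resource ID of the App Configuration store (names are hypothetical).
storeId=$(az appconfig show \
  --name myAppConfigStore \
  --resource-group myResourceGroup \
  --query id --output tsv)

# Create a private endpoint for the store in an existing subnet.
az network private-endpoint create \
  --resource-group myResourceGroup \
  --name myAppConfigPrivateEndpoint \
  --vnet-name myVNet \
  --subnet mySubnet \
  --private-connection-resource-id "$storeId" \
  --group-id configurationStores \
  --connection-name myAppConfigPlsConnection
```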
## Prerequisites
This command will prompt your web browser to launch and load an Azure sign-in pa
1. Leave the box **Enable network policies for all private endpoints in this subnet** checked.
- 1. Under **Private IP configuration**, select the option to allocate IP addresses dynamically. For more information, refer to [Private IP addresses](/azure/virtual-network/ip-services/private-ip-addresses#allocation-method).
+ 1. Under **Private IP configuration**, select the option to allocate IP addresses dynamically. For more information, refer to [Private IP addresses](../virtual-network/ip-services/private-ip-addresses.md#allocation-method).
1. Optionally, you can select or create an **Application security group**. Application security groups allow you to group virtual machines and define network security policies based on those groups.
Once deployment is complete, you'll get a notification that your endpoint has be
Go to **Networking** > **Private Access** in your App Configuration store to access the private endpoints linked to your App Configuration store.
-1. Check the connection state of your private link connection. When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](/azure/private-link/rbac-permissions), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request. For more information about the connection approval models, go to [Manage Azure Private Endpoints](/azure/private-link/manage-private-endpoint#private-endpoint-connections).
+1. Check the connection state of your private link connection. When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](../private-link/rbac-permissions.md), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request. For more information about the connection approval models, go to [Manage Azure Private Endpoints](../private-link/manage-private-endpoint.md#private-endpoint-connections).
1. To manually approve, reject or remove a connection, select the checkbox next to the endpoint you want to edit and select an action item from the top menu.
az network private-endpoint-connection show --resource-group <resource-group> --
#### Get connection approval
-When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](/azure/private-link/rbac-permissions), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request.
+When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory and you have [sufficient permissions](../private-link/rbac-permissions.md), the connection request will be auto-approved. Otherwise, you must wait for the owner of that resource to approve your connection request.
To approve a private endpoint connection, use the [az network private-endpoint-connection approve](/cli/azure/network/private-endpoint-connection#az-network-private-endpoint-connection-approve) command. Replace the placeholder texts `resource-group`, `private-endpoint`, and `<app-config-store-name>` with the name of the resource group, the name of the private endpoint and the name of the store.
To approve a private endpoint connection, use the [az network private-endpoint-c
az network private-endpoint-connection approve --resource-group <resource-group> --name <private-endpoint> --type Microsoft.AppConfiguration/configurationStores --resource-name <app-config-store-name> ```
-For more information about the connection approval models, go to [Manage Azure Private Endpoints](/azure/private-link/manage-private-endpoint#private-endpoint-connections).
+For more information about the connection approval models, go to [Manage Azure Private Endpoints](../private-link/manage-private-endpoint.md#private-endpoint-connections).
#### Delete a private endpoint connection
For more CLI commands, go to [az network private-endpoint-connection](/cli/azure
-If you have issues with a private endpoint, check the following guide: [Troubleshoot Azure Private Endpoint connectivity problems](/azure/private-link/troubleshoot-private-endpoint-connectivity).
+If you have issues with a private endpoint, check the following guide: [Troubleshoot Azure Private Endpoint connectivity problems](../private-link/troubleshoot-private-endpoint-connectivity.md).
## Next steps
If you have issues with a private endpoint, check the following guide: [Troubles
>[Use private endpoints for Azure App Configuration](concept-private-endpoint.md) > [!div class="nextstepaction"]
->[Disable public access in Azure App Configuration](howto-disable-public-access.md)
+>[Disable public access in Azure App Configuration](howto-disable-public-access.md)
azure-app-configuration Quickstart Azure Functions Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/quickstart-azure-functions-csharp.md
In this quickstart, you incorporate the Azure App Configuration service into an
[!INCLUDE [Create a project using the Azure Functions template](../../includes/functions-vstools-create.md)]

## Connect to an App Configuration store
-This project will use [dependency injection in .NET Azure Functions](/azure/azure-functions/functions-dotnet-dependency-injection) and add Azure App Configuration as an extra configuration source. Azure Functions support running [in-process](/azure/azure-functions/functions-dotnet-class-library) or [isolated-process](/azure/azure-functions/dotnet-isolated-process-guide). Pick the one that matches your requirements.
+This project will use [dependency injection in .NET Azure Functions](../azure-functions/functions-dotnet-dependency-injection.md) and add Azure App Configuration as an extra configuration source. Azure Functions support running [in-process](../azure-functions/functions-dotnet-class-library.md) or [isolated-process](../azure-functions/dotnet-isolated-process-guide.md). Pick the one that matches your requirements.
1. Right-click your project, and select **Manage NuGet Packages**. On the **Browse** tab, search for and add the following NuGet packages to your project.

### [In-process](#tab/in-process)
In this quickstart, you created a new App Configuration store and used it with a
To learn how to use an Azure managed identity to streamline the access to App Configuration, continue to the next tutorial. > [!div class="nextstepaction"]
-> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
+> [Access App Configuration using managed identity](./howto-integrate-azure-managed-service-identity.md)
azure-arc Deploy Active Directory Connector Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/deploy-active-directory-connector-cli.md
Previously updated : 05/05/2022 Last updated : 08/16/2022
To know further details about how to set up OU and AD account, go to [Deploy Azu
#### Create an AD connector instance

> [!NOTE]
-> Make sure the password of provided domain service AD account here doesn't contain `!` as special characters.
+> Make sure to wrap the password of the provided domain service AD account in single quotes `'` to avoid the expansion of special characters such as `!`.
> To view available options for create command for AD connector instance, use the following command:
az arcdata ad-connector create
--k8s-namespace < Kubernetes namespace > --realm < AD Domain name > --nameserver-addresses < DNS server IP addresses >account-provisioning < account provisioning mode : manual or auto >
+--account-provisioning < account provisioning mode : manual or automatic >
--prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup > --use-k8s ```
az arcdata ad-connector create
--use-k8s ```
+```azurecli
+# Setting environment variables needed for automatic account provisioning
+DOMAIN_SERVICE_ACCOUNT_USERNAME='sqlmi'
+DOMAIN_SERVICE_ACCOUNT_PASSWORD='arc@123!!'
+
+# Deploying active directory connector with automatic account provisioning
+az arcdata ad-connector create \
+--name arcadc \
+--k8s-namespace arc \
+--realm CONTOSO.LOCAL \
+--nameserver-addresses 10.10.10.11 \
+--account-provisioning automatic \
+--prefer-k8s-dns false \
+--use-k8s
+```
+
##### Directly connected mode

```azurecli
az arcdata ad-connector create
--dns-domain-name < The DNS name of AD domain > --realm < AD Domain name > --nameserver-addresses < DNS server IP addresses >account-provisioning < account provisioning mode : manual or auto >
+--account-provisioning < account provisioning mode : manual or automatic >
--prefer-k8s-dns < whether Kubernetes DNS or AD DNS Server for IP address lookup > --data-controller-name < Arc Data Controller Name > --resource-group < resource-group >
az arcdata ad-connector create
--resource-group arc-rg ```
+```azurecli
+# Setting environment variables needed for automatic account provisioning
+DOMAIN_SERVICE_ACCOUNT_USERNAME='sqlmi'
+DOMAIN_SERVICE_ACCOUNT_PASSWORD='arc@123!!'
+
+# Deploying active directory connector with automatic account provisioning
+az arcdata ad-connector create \
+--name arcadc \
+--realm CONTOSO.LOCAL \
+--dns-domain-name contoso.local \
+--nameserver-addresses 10.10.10.11 \
+--account-provisioning automatic \
+--prefer-k8s-dns false \
+--data-controller-name arcdc \
+--resource-group arc-rg
+```
+ ### Update an AD connector instance To view available options for update command for AD connector instance, use the following command:
azure-arc Agent Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/agent-upgrade.md
az connectedk8s upgrade -g AzureArcTest1 -n AzureArcTest --agent-version 1.1.0
## Version support policy
-When you [create support requests](/azure/azure-portal/supportability/how-to-create-azure-support-request) for Azure Arc-enabled Kubernetes, the following version support policy applies:
+When you [create support requests](../../azure-portal/supportability/how-to-create-azure-support-request.md) for Azure Arc-enabled Kubernetes, the following version support policy applies:
* Azure Arc-enabled Kubernetes agents have a support window of "N-2", where 'N' is the latest minor release of agents.
* For example, if Azure Arc-enabled Kubernetes introduces 0.28.a today, versions 0.28.a, 0.28.b, 0.27.c, 0.27.d, 0.26.e, and 0.26.f are supported.
If you create a support request and are using a version that is outside of the s
* Walk through our quickstart to [connect a Kubernetes cluster to Azure Arc](./quickstart-connect-cluster.md). * Already have a Kubernetes cluster connected Azure Arc? [Create configurations on your Azure Arc-enabled Kubernetes cluster](./tutorial-use-gitops-connected-cluster.md).
-* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
+* Learn how to [use Azure Policy to apply configurations at scale](./use-azure-policy.md).
azure-arc Agent Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes-archive.md
Title: Archive for What's new with Azure Arc-enabled servers agent description: The What's new release notes in the Overview section for Azure Arc-enabled servers agent contains six months of activity. Thereafter, the items are removed from the main article and put into this article. Previously updated : 07/06/2022 Last updated : 08/17/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. Thi
- Known issues - Bug fixes
+## Version 1.16 - March 2022
+
+### Known issues
+
+- `azcmagent logs` doesn't collect Guest Configuration logs in this release. You can locate the log directories in the [agent installation details](deployment-options.md#agent-installation-details).
+
+### New features
+
+- You can now granularly control which extensions are allowed to be deployed to your server and whether or not Guest Configuration should be enabled. See [local agent controls to enable or disable capabilities](security-overview.md#local-agent-security-controls) for more information.
+
+### Fixed
+
+- The "Arc" proxy bypass keyword no longer includes Azure Active Directory endpoints on Linux. Azure Storage endpoints for extension downloads are now included with the "Arc" keyword.
+ ## Version 1.15 - February 2022 ### Known issues
azure-arc Agent Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/agent-release-notes.md
Title: What's new with Azure Arc-enabled servers agent description: This article has release notes for Azure Arc-enabled servers agent. For many of the summarized issues, there are links to more details. Previously updated : 07/26/2022 Last updated : 08/17/2022
The Azure Connected Machine agent receives improvements on an ongoing basis. To
This page is updated monthly, so revisit it regularly. If you're looking for items older than six months, you can find them in [archive for What's new with Azure Arc-enabled servers agent](agent-release-notes-archive.md).
+## Version 1.21 - August 2022
+
+### New features
+
+- `azcmagent connect` usability improvements:
+ - The `--subscription-id (-s)` parameter now accepts friendly names in addition to subscription IDs
+ - Automatic registration of any missing resource providers for first-time users (additional user permissions required to register resource providers)
+ - A progress bar now appears while the resource is being created and connected
+ - The onboarding script now supports both the yum and dnf package managers on RPM-based Linux systems
+- You can now restrict which URLs can be used to download machine configuration (formerly Azure Policy guest configuration) packages by setting the `allowedGuestConfigPkgUrls` tag on the server resource and providing a comma-separated list of URL patterns to allow.
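The `azcmagent connect` improvements listed above apply to the standard onboarding flow run on the machine itself. The following is a minimal, hedged sketch of such a connect call; the resource group, tenant ID, region, and the friendly subscription name are illustrative assumptions.

```bash
# Connect the local machine to Azure Arc interactively.
# With agent 1.21, --subscription-id also accepts a friendly subscription name.
azcmagent connect \
  --resource-group myArcServers \
  --tenant-id 00000000-0000-0000-0000-000000000000 \
  --location eastus2 \
  --subscription-id "My Subscription Name"
```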
+
+### Fixed
+
+- Extension installation failures are now reported to Azure more reliably to prevent extensions from being stuck in the "creating" state
+- Metadata for Google Cloud Platform virtual machines can now be retrieved when the agent is configured to use a proxy server
+- Improved network connection retry logic and error handling
+ ## Version 1.20 - July 2022 ### Known issues
This page is updated monthly, so revisit it regularly. If you're looking for ite
- `azcmagent logs` collects only the 2 most recent logs for each service to reduce ZIP file size.
- `azcmagent logs` collects Guest Configuration logs again.
-## Version 1.16 - March 2022
-
-### Known issues
--- `azcmagent logs` doesn't collect Guest Configuration logs in this release. You can locate the log directories in the [agent installation details](deployment-options.md#agent-installation-details).-
-### New features
--- You can now granularly control which extensions are allowed to be deployed to your server and whether or not Guest Configuration should be enabled. See [local agent controls to enable or disable capabilities](security-overview.md#local-agent-security-controls) for more information.-
-### Fixed
--- The "Arc" proxy bypass keyword no longer includes Azure Active Directory endpoints on Linux. Azure Storage endpoints for extension downloads are now included with the "Arc" keyword.- ## Next steps - Before evaluating or enabling Azure Arc-enabled servers across multiple hybrid machines, review [Connected Machine agent overview](agent-overview.md) to understand requirements, technical details about the agent, and deployment methods.
azure-cache-for-redis Cache Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-configure.md
Select **Diagnose and solve problems** to be provided with common issues and str
Select **Events** to add event subscriptions to your cache. Use events to build reactive, event-driven apps with the fully managed event routing service that is built into Azure.
-The Event Grid helps you build automation into your cloud infrastructure, create serverless apps, and integrate across services and clouds. For more information, see [What is Azure Event Grid](/azure/event-grid/overview).
+The Event Grid helps you build automation into your cloud infrastructure, create serverless apps, and integrate across services and clouds. For more information, see [What is Azure Event Grid](../event-grid/overview.md).
## Redis console
The **Virtual Network** section allows you to configure the virtual network sett
The **Private Endpoint** section allows you to configure the private endpoint settings for your cache. Private endpoint is supported on all cache tiers Basic, Standard, Premium, and Enterprise. We recommend using private endpoint instead of VNets. Private endpoints are easy to set up or remove, are supported on all tiers, and can connect your cache to multiple different VNets at once.
-For more information, see [Azure Cache for Redis with Azure Private Link](/azure/azure-cache-for-redis/cache-private-link).
+For more information, see [Azure Cache for Redis with Azure Private Link](./cache-private-link.md).
### Firewall
Azure Automation delivers a cloud-based automation, operating system updates, an
Select **Tasks** to help you manage Azure Cache for Redis resources more easily. These tasks vary in number and availability, based on the resource type. Presently, you can only use the **Send monthly cost for resource** template to create a task while in preview.
-For more information, see [Manage Azure resources and monitor costs by creating automation tasks](/azure/logic-apps/create-automation-tasks-azure-resources).
+For more information, see [Manage Azure resources and monitor costs by creating automation tasks](../logic-apps/create-automation-tasks-azure-resources.md).
### Export template
For more information about Redis commands, see [https://redis.io/commands](https
## Next steps - [How can I run Redis commands?](cache-development-faq.yml#how-can-i-run-redis-commands-)-- [Monitor Azure Cache for Redis](cache-how-to-monitor.md)
+- [Monitor Azure Cache for Redis](cache-how-to-monitor.md)
azure-cache-for-redis Cache How To Premium Persistence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-persistence.md
The following list contains answers to commonly asked questions about Azure Cach
- [Can I use the same storage account for persistence across two different caches?](#can-i-use-the-same-storage-account-for-persistence-across-two-different-caches) - [Will I be charged for the storage being used in Data Persistence](#will-i-be-charged-for-the-storage-being-used-in-data-persistence) - [How frequently does RDB and AOF persistence write to my blobs, and should I enable soft delete?](#how-frequently-does-rdb-and-aof-persistence-write-to-my-blobs-and-should-i-enable-soft-delete)-- [Will having firewall exceptions on the storage account affect persistence](#Will having firewall exceptions on the storage account affect persistence)
+- [Will having firewall exceptions on the storage account affect persistence](#will-having-firewall-exceptions-on-the-storage-account-affect-persistence)
### RDB persistence
When clustering is enabled, each shard in the cache has its own set of page blob
After a rewrite, two sets of AOF files exist in storage. Rewrites occur in the background and append to the first set of files. Set operations, sent to the cache during the rewrite, append to the second set. A backup is temporarily stored during rewrites if there's a failure. The backup is promptly deleted after a rewrite finishes. If soft delete is turned on for your storage account, the soft delete setting applies and existing backups continue to stay in the soft delete state.

### Will having firewall exceptions on the storage account affect persistence

Using managed identity adds the cache instance to the [trusted services list](../storage/common/storage-network-security.md?tabs=azure-portal), making firewall exceptions easier to carry out. If you aren't using managed identity and instead authorize to a storage account using a key, having firewall exceptions on the storage account tends to break the persistence process.

## Next steps
azure-cache-for-redis Cache Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-ml.md
Azure Cache for Redis is performant and scalable. When paired with an Azure Mach
> * `model` - The registered model that will be deployed. > * `inference_config` - The inference configuration for the model. >
-> For more information on setting these variables, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+> For more information on setting these variables, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-managed-online-endpoints.md).
## Create an Azure Cache for Redis instance
def run(data):
return error ```
-For more information on entry script, see [Define scoring code.](/azure/machine-learning/how-to-deploy-managed-online-endpoints)
+For more information on the entry script, see [Define scoring code](../machine-learning/how-to-deploy-managed-online-endpoints.md).
* **Dependencies**, such as helper scripts or Python/Conda packages required to run the entry script or model
These entities are encapsulated into an **inference configuration**. The inferen
For more information on environments, see [Create and manage environments for training and deployment](../machine-learning/how-to-use-environments.md).
-For more information on inference configuration, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+For more information on inference configuration, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-managed-online-endpoints.md).
> [!IMPORTANT] > When deploying to Functions, you do not need to create a **deployment configuration**.
pip install azureml-contrib-functions
To create the Docker image that is deployed to Azure Functions, use [azureml.contrib.functions.package](/python/api/azureml-contrib-functions/azureml.contrib.functions) or the specific package function for the trigger you want to use. The following code snippet demonstrates how to create a new package with an HTTP trigger from the model and inference configuration: > [!NOTE]
-> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
+> The code snippet assumes that `model` contains a registered model, and that `inference_config` contains the configuration for the inference environment. For more information, see [Deploy models with Azure Machine Learning](../machine-learning/how-to-deploy-managed-online-endpoints.md).
```python from azureml.contrib.functions import package
After a few moments, the resource group and all of its resources are deleted.
* Learn more about [Azure Cache for Redis](./cache-overview.md)
* Learn to configure your function app in the [Functions](../azure-functions/functions-create-function-linux-custom-image.md) documentation.
* [API Reference](/python/api/azureml-contrib-functions/azureml.contrib.functions)
-* Create a [Python app that uses Azure Cache for Redis](./cache-python-get-started.md)
+* Create a [Python app that uses Azure Cache for Redis](./cache-python-get-started.md)
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
Azure Functions deployment slots have the following considerations:
- The number of slots available to an app depends on the plan. The Consumption plan is only allowed one deployment slot. Additional slots are available for apps running under other plans. For details, see [Service limits](functions-scale.md#service-limits).
- Swapping a slot resets keys for apps that have an `AzureWebJobsSecretStorageType` app setting equal to `files`.
- When slots are enabled, your function app is set to read-only mode in the portal.
+- Use function app names shorter than 32 characters. Names longer than 32 characters are at risk of causing [host ID collisions](storage-considerations.md#host-id-considerations); a mitigation sketch follows this list.
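If renaming the function app isn't practical, the host ID considerations article linked above also describes setting an explicit host ID through the `AzureFunctionsWebHost__hostid` application setting. The following is a minimal sketch only; the app name, slot name, and ID values are hypothetical, and the ID must be lowercase and 32 characters or fewer.

```azurecli
# Give the production app and its staging slot distinct, explicit host IDs (values are hypothetical).
az functionapp config appsettings set \
  --name myFunctionApp \
  --resource-group myResourceGroup \
  --settings AzureFunctionsWebHost__hostid=myfunctionapp-prod

az functionapp config appsettings set \
  --name myFunctionApp \
  --resource-group myResourceGroup \
  --slot staging \
  --settings AzureFunctionsWebHost__hostid=myfunctionapp-staging
```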
## Next steps
azure-functions Functions Infrastructure As Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-infrastructure-as-code.md
description: Learn how to build an Azure Resource Manager template that deploys
ms.assetid: d20743e3-aab6-442c-a836-9bcea09bfd32 Previously updated : 04/03/2019 Last updated : 08/18/2022
You can use an Azure Resource Manager template to deploy a function app. This ar
For more information about creating templates, see [Authoring Azure Resource Manager templates](../azure-resource-manager/templates/syntax.md). For sample templates, see:

- [ARM templates for function app deployment](https://github.com/Azure-Samples/function-app-arm-templates)
- [Function app on Consumption plan]
- [Function app on Azure App Service plan]
An Azure storage account is required for a function app. You need a general purp
```json {
- "type": "Microsoft.Storage/storageAccounts",
- "name": "[variables('storageAccountName')]",
- "apiVersion": "2019-06-01",
- "location": "[resourceGroup().location]",
- "kind": "StorageV2",
- "sku": {
- "name": "[parameters('storageAccountType')]"
- }
+ "type": "Microsoft.Storage/storageAccounts",
+ "name": "[variables('storageAccountName')]",
+ "apiVersion": "2019-06-01",
+ "location": "[resourceGroup().location]",
+ "kind": "StorageV2",
+ "sku": {
+ "name": "[parameters('storageAccountType')]"
+ }
} ```
These properties are specified in the `appSettings` collection in the `siteConfi
```json "appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
- {
- "name": "AzureWebJobsDashboard",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- }
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
+ },
+ {
+ "name": "AzureWebJobsDashboard",
+ "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
+ }
] ```
These properties are specified in the `appSettings` collection in the `siteConfi
Application Insights is recommended for monitoring your function apps. The Application Insights resource is defined with the type **Microsoft.Insights/components** and the kind **web**: ```json
- {
- "apiVersion": "2015-05-01",
- "name": "[variables('appInsightsName')]",
- "type": "Microsoft.Insights/components",
- "kind": "web",
- "location": "[resourceGroup().location]",
- "tags": {
- "[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/', variables('functionAppName'))]": "Resource"
- },
- "properties": {
- "Application_Type": "web",
- "ApplicationId": "[variables('appInsightsName')]"
- }
- },
+{
+ "apiVersion": "2015-05-01",
+ "name": "[variables('appInsightsName')]",
+ "type": "Microsoft.Insights/components",
+ "kind": "web",
+ "location": "[resourceGroup().location]",
+ "tags": {
+ "[concat('hidden-link:', resourceGroup().id, '/providers/Microsoft.Web/sites/', variables('functionAppName'))]": "Resource"
+ },
+ "properties": {
+ "Application_Type": "web",
+ "ApplicationId": "[variables('appInsightsName')]"
+ }
+},
```

In addition, the instrumentation key needs to be provided to the function app using the `APPINSIGHTS_INSTRUMENTATIONKEY` application setting. This property is specified in the `appSettings` collection in the `siteConfig` object:

```json
"appSettings": [
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
- }
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
+ }
]
```

### Hosting plan

The definition of the hosting plan varies, and can be one of the following:
-* [Consumption plan](#consumption) (default)
-* [Premium plan](#premium)
-* [App Service plan](#app-service-plan)
+
+- [Consumption plan](#consumption) (default)
+- [Premium plan](#premium)
+- [App Service plan](#app-service-plan)
### Function app
The function app resource is defined by using a resource of type **Microsoft.Web
```json {
- "apiVersion": "2015-08-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('functionAppName')]",
- "location": "[resourceGroup().location]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]"
- ]
+ "apiVersion": "2015-08-01",
+ "type": "Microsoft.Web/sites",
+ "name": "[variables('functionAppName')]",
+ "location": "[resourceGroup().location]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
+ "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]"
+ ]
} ```
These properties are specified in the `appSettings` collection in the `siteConfi
```json "properties": {
- "siteConfig": {
- "appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~4"
- }
- ]
- }
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~4"
+ }
+ ]
+ }
} ```
To run your app on Linux, you must also set the property `"reserved": true` for
} } ```+ ### Create a function app
For a sample Azure Resource Manager template, see [Azure Function App Hosted on
"properties": { "reserved": true, "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
+ "siteConfig": {
"linuxFxVersion": "node|14", "appSettings": [ {
For a sample Azure Resource Manager template, see [Azure Function App Hosted on
"name": "FUNCTIONS_WORKER_RUNTIME", "value": "node" }
- ]
+ ]
} } } ```+ <a name="premium"></a>
To run your app on Linux, you must also set property `"reserved": true` for the
"kind": "elastic" } ```+ ### Create a function app
The settings required by a function app running in Premium plan differ between W
} } ```+ > [!IMPORTANT] > You don't need to set the [`WEBSITE_CONTENTSHARE`](functions-app-settings.md#website_contentshare) setting because it's generated for you when the site is first created.
The function app must have set `"kind": "functionapp,linux"`, and it must have s
} } ```+ <a name="app-service-plan"></a>
To run your app on Linux, you must also set property `"reserved": true` for the
} } ```+ ### Create a function app
The function app must have set `"kind": "functionapp,linux"`, and it must have s
} } ```+ ### Custom Container Image
If you are [deploying a custom container image](./functions-create-function-linu
```json {
- "apiVersion": "2016-03-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('functionAppName')]",
- "location": "[resourceGroup().location]",
- "kind": "functionapp",
- "dependsOn": [
- "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ],
- "properties": {
- "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
- "siteConfig": {
- "appSettings": [
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
- },
- {
- "name": "FUNCTIONS_WORKER_RUNTIME",
- "value": "node"
- },
- {
- "name": "WEBSITE_NODE_DEFAULT_VERSION",
- "value": "~14"
- },
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_URL",
- "value": "[parameters('dockerRegistryUrl')]"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_USERNAME",
- "value": "[parameters('dockerRegistryUsername')]"
- },
- {
- "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
- "value": "[parameters('dockerRegistryPassword')]"
- },
- {
- "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
- "value": "false"
- }
- ],
- "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
+ "apiVersion": "2016-03-01",
+ "type": "Microsoft.Web/sites",
+ "name": "[variables('functionAppName')]",
+ "location": "[resourceGroup().location]",
+ "kind": "functionapp",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ],
+ "properties": {
+ "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', variables('hostingPlanName'))]",
+ "siteConfig": {
+ "appSettings": [
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]"
+ },
+ {
+ "name": "FUNCTIONS_WORKER_RUNTIME",
+ "value": "node"
+ },
+ {
+ "name": "WEBSITE_NODE_DEFAULT_VERSION",
+ "value": "~14"
+ },
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_URL",
+ "value": "[parameters('dockerRegistryUrl')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_USERNAME",
+ "value": "[parameters('dockerRegistryUsername')]"
+ },
+ {
+ "name": "DOCKER_REGISTRY_SERVER_PASSWORD",
+ "value": "[parameters('dockerRegistryPassword')]"
+ },
+ {
+ "name": "WEBSITES_ENABLE_APP_SERVICE_STORAGE",
+ "value": "false"
}
+ ],
+ "linuxFxVersion": "DOCKER|myacr.azurecr.io/myimage:mytag"
}
+ }
} ```
To create the app and plan resources, you must have already [created an App Serv
```json {
- "parameters": {
- "kubeEnvironmentId" : {
- "type": "string"
- },
- "customLocationId" : {
- "type": "string"
- }
+ "parameters": {
+ "kubeEnvironmentId" : {
+ "type": "string"
+ },
+ "customLocationId" : {
+ "type": "string"
}
+ }
} ```
Both sites and plans must reference the custom location through an `extendedLoca
```json {
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
+ "extendedLocation": {
+ "type": "customlocation",
+ "name": "[parameters('customLocationId')]"
+ },
} ```
The plan resource should use the Kubernetes (K1) SKU, and its `kind` field shoul
```json {
- "type": "Microsoft.Web/serverfarms",
+ "type": "Microsoft.Web/serverfarms",
+ "name": "[variables('hostingPlanName')]",
+ "location": "[parameters('location')]",
+ "apiVersion": "2020-12-01",
+ "kind": "linux,kubernetes",
+ "sku": {
+ "name": "K1",
+ "tier": "Kubernetes"
+ },
+ "extendedLocation": {
+ "type": "customlocation",
+ "name": "[parameters('customLocationId')]"
+ },
+ "properties": {
"name": "[variables('hostingPlanName')]", "location": "[parameters('location')]",
- "apiVersion": "2020-12-01",
- "kind": "linux,kubernetes",
- "sku": {
- "name": "K1",
- "tier": "Kubernetes"
- },
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
+ "workerSizeId": "0",
+ "numberOfWorkers": "1",
+ "kubeEnvironmentProfile": {
+ "id": "[parameters('kubeEnvironmentId')]"
},
- "properties": {
- "name": "[variables('hostingPlanName')]",
- "location": "[parameters('location')]",
- "workerSizeId": "0",
- "numberOfWorkers": "1",
- "kubeEnvironmentProfile": {
- "id": "[parameters('kubeEnvironmentId')]"
- },
- "reserved": true
- }
+ "reserved": true
+ }
} ```
The function app resource should have its `kind` field set to "functionapp,linux
```json {
- "apiVersion": "2018-11-01",
- "type": "Microsoft.Web/sites",
- "name": "[variables('appName')]",
- "kind": "kubernetes,functionapp,linux,container",
- "location": "[parameters('location')]",
- "extendedLocation": {
- "type": "customlocation",
- "name": "[parameters('customLocationId')]"
- },
- "dependsOn": [
- "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
- "[variables('hostingPlanId')]"
- ],
- "properties": {
- "serverFarmId": "[variables('hostingPlanId')]",
- "siteConfig": {
- "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
- "appSettings": [
- {
- "name": "FUNCTIONS_EXTENSION_VERSION",
- "value": "~3"
- },
- {
- "name": "AzureWebJobsStorage",
- "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-preview').key1)]"
-
- },
- {
- "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
- "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
- }
- ],
- "alwaysOn": true
+ "apiVersion": "2018-11-01",
+ "type": "Microsoft.Web/sites",
+ "name": "[variables('appName')]",
+ "kind": "kubernetes,functionapp,linux,container",
+ "location": "[parameters('location')]",
+ "extendedLocation": {
+ "type": "customlocation",
+ "name": "[parameters('customLocationId')]"
+ },
+ "dependsOn": [
+ "[resourceId('Microsoft.Insights/components', variables('appInsightsName'))]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]",
+ "[variables('hostingPlanId')]"
+ ],
+ "properties": {
+ "serverFarmId": "[variables('hostingPlanId')]",
+ "siteConfig": {
+ "linuxFxVersion": "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice-quickstart",
+ "appSettings": [
+ {
+ "name": "FUNCTIONS_EXTENSION_VERSION",
+ "value": "~3"
+ },
+ {
+ "name": "AzureWebJobsStorage",
+ "value": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2015-05-01-preview').key1)]"
+
+ },
+ {
+ "name": "APPINSIGHTS_INSTRUMENTATIONKEY",
+ "value": "[reference(resourceId('microsoft.insights/components/', variables('appInsightsName')), '2015-05-01').InstrumentationKey]"
}
+ ],
+ "alwaysOn": true
}
+ }
} ```
A function app has many child resources that you can use in your deployment, inc
} }, "resources": [
- {
- "apiVersion": "2015-08-01",
- "name": "appsettings",
- "type": "config",
- "dependsOn": [
- "[resourceId('Microsoft.Web/Sites', parameters('appName'))]",
- "[resourceId('Microsoft.Web/Sites/sourcecontrols', parameters('appName'), 'web')]",
- "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
- ],
- "properties": {
- "AzureWebJobsStorage": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
- "AzureWebJobsDashboard": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
- "FUNCTIONS_EXTENSION_VERSION": "~3",
- "FUNCTIONS_WORKER_RUNTIME": "dotnet",
- "Project": "src"
- }
- },
- {
- "apiVersion": "2015-08-01",
- "name": "web",
- "type": "sourcecontrols",
- "dependsOn": [
- "[resourceId('Microsoft.Web/sites/', parameters('appName'))]"
- ],
- "properties": {
- "RepoUrl": "[parameters('sourceCodeRepositoryURL')]",
- "branch": "[parameters('sourceCodeBranch')]",
- "IsManualIntegration": "[parameters('sourceCodeManualIntegration')]"
- }
- }
+ {
+ "apiVersion": "2015-08-01",
+ "name": "appsettings",
+ "type": "config",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/Sites', parameters('appName'))]",
+ "[resourceId('Microsoft.Web/Sites/sourcecontrols', parameters('appName'), 'web')]",
+ "[resourceId('Microsoft.Storage/storageAccounts', variables('storageAccountName'))]"
+ ],
+ "properties": {
+ "AzureWebJobsStorage": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
+ "AzureWebJobsDashboard": "[concat('DefaultEndpointsProtocol=https;AccountName=', variables('storageAccountName'), ';AccountKey=', listKeys(variables('storageAccountid'),'2019-06-01').keys[0].value)]",
+ "FUNCTIONS_EXTENSION_VERSION": "~3",
+ "FUNCTIONS_WORKER_RUNTIME": "dotnet",
+ "Project": "src"
+ }
+ },
+ {
+ "apiVersion": "2015-08-01",
+ "name": "web",
+ "type": "sourcecontrols",
+ "dependsOn": [
+ "[resourceId('Microsoft.Web/sites/', parameters('appName'))]"
+ ],
+ "properties": {
+ "RepoUrl": "[parameters('sourceCodeRepositoryURL')]",
+ "branch": "[parameters('sourceCodeBranch')]",
+ "IsManualIntegration": "[parameters('sourceCodeManualIntegration')]"
+ }
+ }
] } ```+ > [!TIP] > This template uses the [Project](https://github.com/projectkudu/kudu/wiki/Customizing-deployments#using-app-settings-instead-of-a-deployment-file) app settings value, which sets the base directory in which the Functions deployment engine (Kudu) looks for deployable code. In our repository, our functions are in a subfolder of the **src** folder. So, in the preceding example, we set the app settings value to `src`. If your functions are in the root of your repository, or if you are not deploying from source control, you can remove this app settings value.
A function app has many child resources that you can use in your deployment, inc
You can use any of the following ways to deploy your template:
-* [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
-* [Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
-* [Azure portal](../azure-resource-manager/templates/deploy-portal.md)
-* [REST API](../azure-resource-manager/templates/deploy-rest.md)
+- [PowerShell](../azure-resource-manager/templates/deploy-powershell.md)
+- [Azure CLI](../azure-resource-manager/templates/deploy-cli.md)
+- [Azure portal](../azure-resource-manager/templates/deploy-portal.md)
+- [REST API](../azure-resource-manager/templates/deploy-rest.md)
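For reference, a command-line deployment with the Azure CLI might look like the following sketch. The resource group, region, template file name, and parameter names are placeholders and depend on your template; they aren't taken from the example above.

```azurecli
# Sketch only: deploy a function app template with the Azure CLI.
# <RESOURCE_GROUP>, <REGION>, azuredeploy.json, and the parameter names are placeholders.
az group create --name <RESOURCE_GROUP> --location <REGION>

az deployment group create \
    --resource-group <RESOURCE_GROUP> \
    --template-file azuredeploy.json \
    --parameters location=<REGION>
```

Pass any parameters that your template defines, such as a custom location ID, by adding them to the `--parameters` argument.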
### Deploy to Azure button
To test out this deployment, you can use a [template like this one](https://raw.
Learn more about how to develop and configure Azure Functions.
-* [Azure Functions developer reference](functions-reference.md)
-* [How to configure Azure function app settings](functions-how-to-use-azure-function-app-settings.md)
-* [Create your first Azure function](./functions-get-started.md)
+- [Azure Functions developer reference](functions-reference.md)
+- [How to configure Azure function app settings](functions-how-to-use-azure-function-app-settings.md)
+- [Create your first Azure function](./functions-get-started.md)
<!-- LINKS -->
azure-functions Functions Recover Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-recover-storage-account.md
Your function app must be able to access the storage account. Common issues that
* The storage account firewall is enabled and not configured to allow traffic to and from functions. For more information, see [Configure Azure Storage firewalls and virtual networks](../storage/common/storage-network-security.md?toc=%2fazure%2fstorage%2ffiles%2ftoc.json).
-* Verify that the `allowSharedKeyAccess` setting is set to `true` which is its default value. For more information, see [Prevent Shared Key authorization for an Azure Storage account](../storage/common/shared-key-authorization-prevent.md?tabs=portal#verify-that-shared-key-access-is-not-allowed).
+* Verify that the `allowSharedKeyAccess` setting is set to `true`, which is its default value. For more information, see [Prevent Shared Key authorization for an Azure Storage account](../storage/common/shared-key-authorization-prevent.md?tabs=portal#verify-that-shared-key-access-is-not-allowed).
## Daily execution quota is full
You can also use the portal from a computer that's connected to the virtual netw
For more information about inbound rule configuration, see the "Network Security Groups" section of [Networking considerations for an App Service Environment](../app-service/environment/network-info.md#network-security-groups).
-## Container image unavailable (Linux)
+## Container errors on Linux
-For Linux function apps that run from a container, the "Azure Functions runtime is unreachable" error can occur when the container image being referenced is unavailable or fails to start correctly.
-
-To confirm that the error is caused for this reason:
+For function apps that run on Linux in a container, the `Azure Functions runtime is unreachable` error can occur as a result of problems with the container. Use the following procedure to review the container logs for errors:
1. Navigate to the Kudu endpoint for the function app, which is located at `https://scm.<FUNCTION_APP>.azurewebsites.net`, where `<FUNCTION_APP>` is the name of your app.
-1. Download the Docker logs ZIP file and review them locally, or review the docker logs from within Kudu.
+1. Download the Docker logs .zip file and review the contents on your local computer.
+
+1. Check for any logged errors that indicate that the container is unable to start successfully.
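If you prefer to retrieve the logs from a script instead of the Kudu site, the following sketch uses the Azure CLI and `curl`. It assumes basic authentication with the app's publishing credentials and the Kudu `/api/logs/docker/zip` endpoint; the app and resource group names are placeholders.

```azurecli
# Sketch: download the Docker log .zip from Kudu for local review.
# Assumes publishing credentials (basic auth) and the /api/logs/docker/zip endpoint.
user=$(az functionapp deployment list-publishing-profiles \
    --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> \
    --query "[?publishMethod=='MSDeploy'].userName | [0]" --output tsv)
password=$(az functionapp deployment list-publishing-profiles \
    --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> \
    --query "[?publishMethod=='MSDeploy'].userPWD | [0]" --output tsv)

curl -sS -u "$user:$password" -o docker-logs.zip \
    "https://<FUNCTION_APP>.scm.azurewebsites.net/api/logs/docker/zip"
```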
+
+### Container image unavailable
+
+Errors can occur when the container image being referenced is unavailable or fails to start correctly. Check the Docker logs for any errors that indicate that the container is unable to start.
+
+You need to correct any errors that prevent the container from starting for the function app to run correctly.
+
+When the container image can't be found, you'll see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli#manual-version-updates-on-linux) to change the container image being referenced. If you've deployed a [custom container image](functions-create-function-linux-custom-image.md), you need to fix the image and redeploy the updated version to the referenced registry.
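As a rough sketch, changing the referenced image with the Azure CLI updates the `linuxFxVersion` site setting. The image shown below is illustrative only; substitute the image and tag you actually need.

```azurecli
# Sketch: point the function app at a different container image.
# The DOCKER|... image reference is illustrative; use your own image and tag.
az functionapp config set \
    --name <FUNCTION_APP> \
    --resource-group <RESOURCE_GROUP> \
    --linux-fx-version "DOCKER|mcr.microsoft.com/azure-functions/dotnet:3.0-appservice"
```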
+
+### App container has conflicting ports
+
+Your function app might be in an unresponsive state due to conflicting port assignment upon startup. This can happen in the following cases:
+
+* Your container has separate services running where one or more services are trying to bind to the same port as the function app.
+* You've added an Azure Hybrid Connection that shares the same port value as the function app.
-1. Check for any errors in the logs that would indicate that the container is unable to start successfully.
+By default, the container in which your function app runs uses port `:80`. When other services in the same container are also trying to use port `:80`, the function app can fail to start. If your logs show port conflicts, change the default ports.
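To investigate, you can list the app's Hybrid Connections and, for a custom container that listens on a non-default port, tell the platform which port to use. The `WEBSITES_PORT` setting shown below is the general App Service custom-container mechanism and is an assumption here; confirm it applies to your hosting plan.

```azurecli
# Sketch: check for Hybrid Connections that might share the app's port.
az functionapp hybrid-connection list \
    --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> --output table

# Sketch: if your custom container listens on port 8080 instead of 80,
# tell the platform about it (WEBSITES_PORT is assumed to apply to your plan).
az functionapp config appsettings set \
    --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> \
    --settings WEBSITES_PORT=8080
```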
-Any such error would need to be remedied for the function to work correctly.
+## Host ID collision
-When the container image can't be found, you should see a `manifest unknown` error in the Docker logs. In this case, you can use the Azure CLI commands documented at [How to target Azure Functions runtime versions](set-runtime-version.md?tabs=azurecli) to change the container image being reference. If you've deployed a custom container image, you need to fix the image and redeploy the updated version to the referenced registry.
+Starting with version 3.x of the Functions runtime, [host ID collisions](storage-considerations.md#host-id-considerations) are detected and logged as a warning. In version 4.x, an error is logged and the host is stopped. If the runtime can't start for your function app, [review the logs](analyze-telemetry-data.md). If there's a warning or an error about host ID collisions, follow the mitigation steps in [Host ID considerations](storage-considerations.md#host-id-considerations).
## Next steps
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
The following table indicates which programming languages are currently supporte
## <a name="creating-1x-apps"></a>Run on a specific version
-By default, function apps created in the Azure portal and by the Azure CLI are set to version 4.x. You can modify this version if needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving to a later version is allowed even with apps that have existing functions. When your app has existing functions, be aware of any breaking changes between versions before moving to a later runtime version. The following sections detail changes between versions:
+By default, function apps created in the Azure portal and by the Azure CLI are set to version 4.x. You can modify this version if needed. You can only downgrade the runtime version to 1.x after you create your function app but before you add any functions. Moving to a later version is allowed even with apps that have existing functions. When your app has existing functions, be aware of any breaking changes between versions before moving to a later runtime version. The following sections detail breaking changes between versions, including language-specific breaking changes.
+ [Between 3.x and 4.x](#breaking-changes-between-3x-and-4x) + [Between 2.x and 3.x](#breaking-changes-between-2x-and-3x) + [Between 1.x and later versions](#migrating-from-1x-to-later-versions)
+If you don't see your programming language, select it from the [top of the page](#top).
+ Before making a change to the major version of the runtime, you should first test your existing code on the new runtime version. You can verify your app runs correctly after the upgrade by deploying to another function app running on the latest major version. You can also verify your code locally by using the runtime-specific version of the [Azure Functions Core Tools](functions-run-local.md), which includes the Functions runtime. Downgrades to v2.x aren't supported. When possible, you should always run your apps on the latest supported version of the Functions runtime.
To update your project to Azure Functions 4.x:
### Breaking changes between 3.x and 4.x
-The following are some changes to be aware of before upgrading a 3.x app to 4.x. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22). More changes are expected during the preview period. Subscribe to [App Service Announcements](https://github.com/Azure/app-service-announcements/issues) for updates.
+The following are key breaking changes to be aware of before upgrading a 3.x app to 4.x, including language-specific breaking changes. For a full list, see Azure Functions GitHub issues labeled [*Breaking Change: Approved*](https://github.com/Azure/azure-functions/issues?q=is%3Aissue+label%3A%22Breaking+Change%3A+Approved%22+is%3A%22closed+OR+open%22). More changes are expected during the preview period. Subscribe to [App Service Announcements](https://github.com/Azure/app-service-announcements/issues) for updates.
+
+If you don't see your programming language, select it from the [top of the page](#top).
#### Runtime
Azure Functions version 3.x is highly backwards compatible to version 2.x. Many
### Breaking changes between 2.x and 3.x
-The following are the language-specific changes to be aware of before upgrading a 2.x app to 3.x.
+The following are the language-specific changes to be aware of before upgrading a 2.x app to 3.x. If you don't see your programming language, select it from the [top of the page](#top).
::: zone pivot="programming-language-csharp" The main differences between versions when running .NET class library functions is the .NET Core runtime. Functions version 2.x is designed to run on .NET Core 2.2 and version 3.x is designed to run on .NET Core 3.1.
azure-functions Storage Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/storage-considerations.md
Azure Functions requires an Azure Storage account when you create a function app
## Storage account requirements
-When creating a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. This is because Functions relies on Azure Storage for operations such as managing triggers and logging function executions. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts and Azure Premium Storage.
+When creating a function app, you must create or link to a general-purpose Azure Storage account that supports Blob, Queue, and Table storage. This requirement exists because Functions relies on Azure Storage for operations such as managing triggers and logging function executions. Some storage accounts don't support queues and tables. These accounts include blob-only storage accounts and Azure Premium Storage.
To learn more about storage account types, see [Storage account overview](../storage/common/storage-account-overview.md).
-While you can use an existing storage account with your function app, you must make sure that it meets these requirements. Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to meet these storage account requirements. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. In this flow, you are only allowed to choose existing storage accounts in the same region as the function app you're creating. To learn more, see [Storage account location](#storage-account-location).
+While you can use an existing storage account with your function app, you must make sure that it meets these requirements. Storage accounts created as part of the function app create flow in the Azure portal are guaranteed to meet these storage account requirements. In the portal, unsupported accounts are filtered out when choosing an existing storage account while creating a function app. In this flow, you're only allowed to choose existing storage accounts in the same region as the function app you're creating. To learn more, see [Storage account location](#storage-account-location).
<!-- JH: Does using a Premium Storage account improve perf? --> ## Storage account guidance
-Every function app requires a storage account to operate. If that account is deleted your function app won't run. To troubleshoot storage-related issues, see [How to troubleshoot storage-related issues](functions-recover-storage-account.md). The following additional considerations apply to the Storage account used by function apps.
+Every function app requires a storage account to operate. When that account is deleted, your function app won't run. To troubleshoot storage-related issues, see [How to troubleshoot storage-related issues](functions-recover-storage-account.md). The following other considerations apply to the Storage account used by function apps.
### Storage account location
-For best performance, your function app should use a storage account in the same region, which reduces latency. The Azure portal enforces this best practice. If, for some reason, you need to use a storage account in a region different than your function app, you must create your function app outside of the portal.
+For best performance, your function app should use a storage account in the same region, which reduces latency. The Azure portal enforces this best practice. If for some reason you need to use a storage account in a region different than your function app, you must create your function app outside of the portal.
### Storage account connection setting
You may need to use separate store accounts to [avoid host ID collisions](#avoid
### Lifecycle management policy considerations
-Functions uses Blob storage to persist important information, such as [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). When you apply a [lifecycle management policy](../storage/blobs/lifecycle-management-overview.md) to your Blob Storage account, the policy may remove blobs needed by the Functions host. Because of this, you shouldn't apply such policies to the storage account used by Functions. If you do need to apply such a policy, remember to exclude containers used by Functions, which are usually prefixed with `azure-webjobs` or `scm`.
+Functions uses Blob storage to persist important information, such as [function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). When you apply a [lifecycle management policy](../storage/blobs/lifecycle-management-overview.md) to your Blob Storage account, the policy may remove blobs needed by the Functions host. Because of this fact, you shouldn't apply such policies to the storage account used by Functions. If you do need to apply such a policy, remember to exclude containers used by Functions, which are prefixed with `azure-webjobs` or `scm`.
### Optimize storage performance
When all customer data must remain within a single region, the storage account a
Other platform-managed customer data is only stored within the region when hosting in an internally load-balanced App Service Environment (ASE). To learn more, see [ASE zone redundancy](../app-service/environment/zone-redundancy.md#in-region-data-residency).
-## Host ID considerations
+## Host ID considerations
Functions uses a host ID value as a way to uniquely identify a particular function app in stored artifacts. By default, this ID is auto-generated from the name of the function app, truncated to the first 32 characters. This ID is then used when storing per-app correlation and tracking information in the linked storage account. When you have function apps with names longer than 32 characters and when the first 32 characters are identical, this truncation can result in duplicate host ID values. When two function apps with identical host IDs use the same storage account, you get a host ID collision because stored data can't be uniquely linked to the correct function app.
+>[!NOTE]
+>This same collision can occur between a function app in a production slot and the same function app in a staging slot, when both slots use the same storage account.
+ Starting with version 3.x of the Functions runtime, host ID collision is detected and a warning is logged. In version 4.x, an error is logged and the host is stopped, resulting in a hard failure. More details about host ID collision can be found in [this issue](https://github.com/Azure/azure-functions-host/issues/2015). ### Avoiding host ID collisions You can use the following strategies to avoid host ID collisions:
-+ Use a separated storage account for each function app involved in the collision.
-+ Rename one of your function apps to a value less than 32 characters in length, which changes the computed host ID for the app and removes the collision.
++ Use a separate storage account for each function app or slot involved in the collision.++ Rename one of your function apps to a value fewer than 32 characters in length, which changes the computed host ID for the app and removes the collision. + Set an explicit host ID for one or more of the colliding apps. To learn more, see [Host ID override](#override-the-host-id). > [!IMPORTANT]
You can use the following strategies to avoid host ID collisions:
You can explicitly set a specific host ID for your function app in the application settings by using the `AzureFunctionsWebHost__hostid` setting. For more information, see [AzureFunctionsWebHost__hostid](functions-app-settings.md#azurefunctionswebhost__hostid).
-To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
+When the collision occurs between slots, you may need to mark this setting as a slot setting. To learn how to create app settings, see [Work with application settings](functions-how-to-use-azure-function-app-settings.md#settings).
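The following sketch sets an explicit host ID with the Azure CLI. The ID values and slot name are placeholders; `--slot-settings` keeps the value sticky to the slot when a staging slot is involved.

```azurecli
# Sketch: set an explicit host ID on the production app.
az functionapp config appsettings set \
    --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> \
    --settings AzureFunctionsWebHost__hostid=<UNIQUE_HOST_ID>

# Sketch: give a staging slot its own ID and mark it as a slot setting.
az functionapp config appsettings set \
    --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> --slot <SLOT_NAME> \
    --slot-settings AzureFunctionsWebHost__hostid=<ANOTHER_UNIQUE_HOST_ID>
```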
## Create an app without Azure Files
-Azure Files is set up by default for Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system, so Azure Files can be omitted if desired. In such cases, a writeable file system is provided, but it is not guaranteed to be shared with all function app instances.
+Azure Files is set up by default for Premium and non-Linux Consumption plans to serve as a shared file system in high-scale scenarios. The file system is used by the platform for some features such as log streaming, but it primarily ensures consistency of the deployed function payload. When an app is [deployed using an external package URL](./run-functions-from-deployment-package.md), the app content is served from a separate read-only file system. This means that you can create your function app without Azure Files. If you create your function app with Azure Files, a writeable file system is still provided. However, this file system may not be available for all function app instances.
-When Azure Files isn't used, you must account for the following:
+When Azure Files isn't used, you must meet the following requirements:
* You must deploy from an external package URL. * Your app can't rely on a shared writeable file system.
-* The app can't use Functions runtime v1.
+* The app can't use version 1.x of the Functions runtime.
* Log streaming experiences in clients such as the Azure portal default to file system logs. You should instead rely on Application Insights logs.
-If the above are properly accounted for, you may create the app without Azure Files. Create the function app without specifying the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` application settings. You can do this by generating an ARM template for a standard deployment, removing these two settings, and then deploying the template.
+If the above are properly accounted for, you may create the app without Azure Files. Create the function app without specifying the `WEBSITE_CONTENTAZUREFILECONNECTIONSTRING` and `WEBSITE_CONTENTSHARE` application settings. You can omit these settings by generating an ARM template for a standard deployment, removing the two settings, and then deploying the template.
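After deployment, you can verify that the two settings aren't present. This is only a verification sketch, not the deployment itself; an empty result means the settings were omitted.

```azurecli
# Sketch: confirm the Azure Files content settings are absent from the app.
az functionapp config appsettings list \
    --name <FUNCTION_APP> --resource-group <RESOURCE_GROUP> \
    --query "[?name=='WEBSITE_CONTENTAZUREFILECONNECTIONSTRING' || name=='WEBSITE_CONTENTSHARE']" \
    --output table
```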
-Because Functions use Azure Files during parts of the the dynamic scale-out process, scaling could be limited when running without Azure Files on Consumption and Premium plans.
+Because Functions use Azure Files during parts of the dynamic scale-out process, scaling could be limited when running without Azure Files on Consumption and Premium plans.
## Mount file shares _This functionality is current only available when running on Linux._
-You can mount existing Azure Files shares to your Linux function apps. By mounting a share to your Linux function app, you can leverage existing machine learning models or other data in your functions. You can use the following command to mount an existing share to your Linux function app.
+You can mount existing Azure Files shares to your Linux function apps. By mounting a share to your Linux function app, you can use existing machine learning models or other data in your functions. You can use the following command to mount an existing share to your Linux function app.
# [Azure CLI](#tab/azure-cli)
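As a sketch of the kind of command involved, the following assumes the `az webapp config storage-account add` command applies to your function app; all values are placeholders.

```azurecli
# Sketch: mount an existing Azure Files share into a Linux function app.
# Assumes az webapp config storage-account add; all values are placeholders.
az webapp config storage-account add \
    --resource-group <RESOURCE_GROUP> \
    --name <FUNCTION_APP> \
    --custom-id <SHARE_IDENTIFIER> \
    --storage-type AzureFiles \
    --share-name <SHARE_NAME> \
    --account-name <STORAGE_ACCOUNT> \
    --access-key <STORAGE_ACCESS_KEY> \
    --mount-path /path/to/mount
```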
For a complete example, see the script in [Create a serverless Python function a
-Currently, only a `storage-type` of `AzureFiles` is supported. You can only mount five shares to a given function app. Mounting a file share may increase the cold start time by at least 200-300ms, or even more when the storage account is in a different region.
+Currently, only a `storage-type` of `AzureFiles` is supported. You can only mount five shares to a given function app. Mounting a file share may increase the cold start time by at least 200-300 ms, or even more when the storage account is in a different region.
The mounted share is available to your function code at the `mount-path` specified. For example, when `mount-path` is `/path/to/mount`, you can access the target directory by file system APIs, as in the following Python example:
azure-government Azure Secure Isolation Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/azure-secure-isolation-guidance.md
Storage accounts are encrypted regardless of their performance tier (standard or
Because data encryption is performed by the Storage service, server-side encryption with CMK enables you to use any operating system types and images for your VMs. For your Windows and Linux IaaS VMs, Azure also provides Azure Disk encryption that enables you to encrypt managed disks with CMK within the Guest VM, as described in the next section. Combining Azure Storage service encryption and Disk encryption effectively enables [double encryption of data at rest](../virtual-machines/disks-enable-double-encryption-at-rest-portal.md). #### Azure Disk encryption
-Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Moreover, you may optionally use [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
+Azure Storage service encryption encrypts the page blobs that store Azure Virtual Machine disks. Moreover, you may optionally use [Azure Disk encryption](../virtual-machines/disk-encryption-overview.md) to encrypt Azure [Windows](../virtual-machines/windows/disk-encryption-overview.md) and [Linux](../virtual-machines/linux/disk-encryption-overview.md) IaaS Virtual Machine disks to increase storage isolation and assure cryptographic certainty of your data stored in Azure. This encryption includes [managed disks](../virtual-machines/managed-disks-overview.md), as described later in this section. Azure disk encryption uses the industry standard [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows and the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide OS-based volume encryption that is integrated with Azure Key Vault.
Drive encryption through BitLocker and DM-Crypt is a data protection feature that integrates with the operating system and addresses the threats of data theft or exposure from lost, stolen, or inappropriately decommissioned computers. BitLocker and DM-Crypt provide the most protection when used with a Trusted Platform Module (TPM) version 1.2 or higher. The TPM is a microcontroller designed to secure hardware through integrated cryptographic keys, and it's commonly pre-installed on newer computers. BitLocker and DM-Crypt can use this technology to protect the keys used to encrypt disk volumes and provide integrity to the computer boot process.
For [Windows VMs](../virtual-machines/windows/disk-encryption-faq.yml), Azure Di
Customer-managed keys (CMK) enable you to have [full control](../virtual-machines/disk-encryption.md#full-control-of-your-keys) over your encryption keys. You can grant access to managed disks in your Azure Key Vault so that your keys can be used for encrypting and decrypting the DEK. You can also disable your keys or revoke access to managed disks at any time. Finally, you have full audit control over key usage with Azure Key Vault monitoring to ensure that only managed disks or other authorized resources are accessing your encryption keys. ##### *Encryption at host*
-Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). As mentioned previously, [Azure Disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) for virtual machines and virtual machine scale sets isn't supported by Managed HSM. However, encryption at host with CMK is supported by Managed HSM.
+Encryption at host ensures that data stored on the VM host is encrypted at rest and flows encrypted to the Storage service. Disks with encryption at host enabled aren't encrypted with Azure Storage encryption; instead, the server hosting your VM provides the encryption for your data, and that encrypted data flows into Azure Storage. For more information, see [Encryption at host - End-to-end encryption for your VM data](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data). As mentioned previously, [Azure Disk encryption](../virtual-machines/disk-encryption-overview.md) for virtual machines and virtual machine scale sets isn't supported by Managed HSM. However, encryption at host with CMK is supported by Managed HSM.
You're [always in control of your customer data](https://www.microsoft.com/trust-center/privacy/data-management) in Azure. You can access, extract, and delete your customer data stored in Azure at will. When you terminate your Azure subscription, Microsoft takes the necessary steps to ensure that you continue to own your customer data. A common concern upon data deletion or subscription termination is whether another customer or Azure administrator can access your deleted data. The following sections explain how data deletion, retention, and destruction work in Azure.
azure-monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-manage.md
Title: Managing the Azure Log Analytics agent
+ Title: Manage the Azure Log Analytics agent
description: This article describes the different management tasks that you will typically perform during the lifecycle of the Log Analytics Windows or Linux agent deployed on a machine. --++ Last updated 04/06/2022
-# Managing and maintaining the Log Analytics agent for Windows and Linux
+# Manage and maintain the Log Analytics agent for Windows and Linux
After initial deployment of the Log Analytics Windows or Linux agent in Azure Monitor, you may need to reconfigure the agent, upgrade it, or remove it from the computer if it has reached the retirement stage in its lifecycle. You can easily manage these routine maintenance tasks manually or through automation, which reduces both operational error and expenses. [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
-## Upgrading agent
+## Upgrade the agent
-The Log Analytics agent for Windows and Linux can be upgraded to the latest release manually or automatically depending on the deployment scenario and environment the VM is running in. The following methods can be used to upgrade the agent.
+Upgrade to the latest release of the Log Analytics agent for Windows and Linux manually or automatically based on your deployment scenario and the environment the VM is running in:
-| Environment | Installation Method | Upgrade method |
+| Environment | Installation method | Upgrade method |
|--|-|-| | Azure VM | Log Analytics agent VM extension for Windows/Linux | Agent is automatically upgraded [after the VM model changes](../../virtual-machines/extensions/features-linux.md#how-agents-and-extensions-are-updated), unless you configured your Azure Resource Manager template to opt out by setting the property _autoUpgradeMinorVersion_ to **false**. Once deployed, however, the extension will not upgrade minor versions unless redeployed, even with this property set to true. Major version upgrade is always manual. See [VirtualMachineExtensionInner.AutoUpgradeMinorVersion Property](https://docs.azure.cn/dotnet/api/microsoft.azure.management.compute.fluent.models.virtualmachineextensioninner.autoupgrademinorversion?view=azure-dotnet). | | Custom Azure VM images | Manual install of Log Analytics agent for Windows/Linux | Updating VMs to the newest version of the agent needs to be performed from the command line running the Windows installer package or Linux self-extracting and installable shell script bundle.|
You can download the latest version of the Windows agent from your Log Analytics
5. From the **Windows Servers** page, select the appropriate **Download Windows Agent** version to download depending on the processor architecture of the Windows operating system. >[!NOTE]
->During the upgrade of the Log Analytics agent for Windows, it does not support configuring or reconfiguring a workspace to report to. To configure the agent, you need to follow one of the supported methods listed under [Adding or removing a workspace](#adding-or-removing-a-workspace).
+>During the upgrade of the Log Analytics agent for Windows, it does not support configuring or reconfiguring a workspace to report to. To configure the agent, you need to follow one of the supported methods listed under [Add or remove a workspace](#add-or-remove-a-workspace).
> #### To upgrade using the Setup Wizard
Run the following command to upgrade the agent.
### Enable Auto-Update for the Linux Agent
-The **recommendation** is to enable automatic update of the agent by enabling the [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) feature, using the following PowerShell commands.
+We recommend enabling [Automatic Extension Upgrade](../../virtual-machines/automatic-extension-upgrade.md) using these commands to update the agent automatically:
+ # [Powershell](#tab/PowerShellLinux) ```powershell Set-AzVMExtension \
az vm extension set \
--version latestVersion \ --enable-auto-upgrade true ```+
-## Adding or removing a workspace
+## Add or remove a workspace
### Windows agent The steps in this section are necessary when you want to not only reconfigure the Windows agent to report to a different workspace or to remove a workspace from its configuration, but also when you want to configure the agent to report to more than one workspace (commonly referred to as multi-homing). Configuring the Windows agent to report to multiple workspaces can only be performed after initial setup of the agent and using the methods described below.
azure-monitor Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-windows.md
Title: Install Log Analytics agent on Windows computers
description: This article describes how to connect Windows computers hosted in other clouds or on-premises to Azure Monitor with the Log Analytics agent for Windows. Last updated 03/31/2022++
Regardless of the installation method used, you'll require the workspace ID and
[![Screenshot that shows workspace details.](media/log-analytics-agent/workspace-details.png)](media/log-analytics-agent/workspace-details.png#lightbox) > [!NOTE]
-> You can't configure the agent to report to more than one workspace during initial setup. [Add or remove a workspace](agent-manage.md#adding-or-removing-a-workspace) after installation by updating the settings from Control Panel or PowerShell.
+> You can't configure the agent to report to more than one workspace during initial setup. [Add or remove a workspace](agent-manage.md#add-or-remove-a-workspace) after installation by updating the settings from Control Panel or PowerShell.
## Install the agent
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 7/21/2022 Last updated : 8/17/2022
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
| Data source | Destinations | Description | |:|:|:| | Performance | Azure Monitor Metrics (Public preview)<sup>1</sup> - Insights.virtualmachine namespace<br>Log Analytics workspace - [Perf](/azure/azure-monitor/reference/tables/perf) table | Numerical values measuring performance of different aspects of operating system and workloads |
- | Windows event logs | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
+ | Windows event logs (including sysmon events) | Log Analytics workspace - [Event](/azure/azure-monitor/reference/tables/Event) table | Information sent to the Windows event logging system |
| Syslog | Log Analytics workspace - [Syslog](/azure/azure-monitor/reference/tables/syslog)<sup>2</sup> table | Information sent to the Linux event logging system | | Text logs | Log Analytics workspace - custom table | Events sent to log file on agent machine |
Azure Monitor Agent uses [data collection rules](../essentials/data-collection-r
## Supported services and features
-Azure Monitor Agent currently supports these Azure Monitor features:
+In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure Monitor features in preview:
| Azure Monitor feature | Current support | Other extensions installed | More information | | : | : | : | : | | Text logs and Windows IIS logs | Public preview | None | [Collect text logs with Azure Monitor Agent (Public preview)](data-collection-text-log.md) | | Windows client installer | Public preview | None | [Set up Azure Monitor Agent on Windows client devices](azure-monitor-agent-windows-client.md) |
-| [VM insights](../vm/vminsights-overview.md) | Preview | Dependency Agent extension, if youΓÇÖre using the Map Services feature | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+| [VM insights](../vm/vminsights-overview.md) | Public preview | Dependency Agent extension, if you're using the Map Services feature | [Enable VM Insights overview](../vm/vminsights-enable-overview.md) |
-Azure Monitor Agent currently supports these Azure
+In addition to the generally available data collection listed above, Azure Monitor Agent also supports these Azure services in preview:
| Azure service | Current support | Other extensions installed | More information | | : | : | : | : | | [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Sign-up link](https://aka.ms/AMAgent) | | [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows DNS logs: Preview</li><li>Linux Syslog CEF: Preview</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li></ul> | Sentinel DNS extension, if youΓÇÖre collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | <ul><li>[Sign-up link for Windows DNS logs](https://aka.ms/AMAgent)</li><li>[Sign-up link for Linux Syslog CEF](https://aka.ms/AMAgent)</li><li>No sign-up needed for Windows Forwarding Event (WEF) and Windows Security Events</li></ul> | | [Change Tracking](../../automation/change-tracking/overview.md) (part of Defender) | Supported as File Integrity Monitoring in the Microsoft Defender for Cloud: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/AMAgent) |
-| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](/azure/update-center/) |
+| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Preview | Azure NetworkWatcher extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) | ## Supported regions
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
description: Options for managing the Azure Monitor agent (AMA) on Azure virtual
Previously updated : 06/21/2022 Last updated : 08/18/2022
The following prerequisites must be met prior to installing the Azure Monitor ag
We recommend using `mi_res_id` as the `identifier-name`. The sample commands below only show usage with `mi_res_id` for the sake of brevity. For more details on `mi_res_id`, `object_id`, and `client_id`, see the [managed identity documentation](../../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md#get-a-token-using-http). - **System-assigned**: This is suited for initial testing or small deployments. When used at scale (for example, for all VMs in a subscription) it results in substantial number of identities created (and deleted) in Azure AD (Azure Active Directory). To avoid this churn of identities, it is recommended to use user-assigned managed identities instead. **For Arc-enabled servers, system-assigned managed identity is enabled automatically** (as soon as you install the Arc agent) as it's the only supported type for Arc-enabled servers. - This is not required for Azure Arc-enabled servers. The system identity will be enabled automatically if the agent is installed via [creating and assigning a data collection rule using the Azure portal](data-collection-rule-azure-monitor-agent.md#create-data-collection-rule-and-association).-- **Networking**: The [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. Additionally, the virtual machine must have access to the following HTTPS endpoints:
+- **Networking**: If using network firewalls, the [AzureResourceManager service tag](../../virtual-network/service-tags-overview.md) must be enabled on the virtual network for the virtual machine. Additionally, the virtual machine must have access to the following HTTPS endpoints:
- global.handler.control.monitor.azure.com - `<virtual-machine-region-name>`.handler.control.monitor.azure.com (example: westus.handler.control.monitor.azure.com) - `<log-analytics-workspace-id>`.ods.opinsights.azure.com (example: 12345a01-b1cd-1234-e1f2-1234567g8h99.ods.opinsights.azure.com)
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
Previously updated : 6/22/2022 Last updated : 8/18/2022 # Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
You can access the workbook [here](https://portal.azure.com/#view/AppInsightsExt
## Installing and using DCR Config Generator (preview) Azure Monitor Agent relies only on [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) for configuration, whereas Log Analytics Agent inherits its configuration from Log Analytics workspaces.
-Use the DCR Config Generator tool to parse Log Analytics Agent configuration from your workspaces and generate corresponding data collection rules automatically. You can then associate the rules to machines running the new agent using built-in association policies.
+Use the DCR Config Generator tool to parse Log Analytics Agent configuration from your workspaces and automatically generate and deploy corresponding data collection rules. You can then associate the rules with machines running the new agent by using built-in association policies.
> [!NOTE] > DCR Config Generator does not currently support additional configuration for [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features) dependent on Log Analytics Agent.
To install DCR Config Generator:
1. Run the script:
- Option 1:
+ Option 1: Outputs **ready-to-deploy ARM template files** only. When deployed, these templates create the generated DCR in the specified subscription and resource group.
```powershell .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath ```
- Option 2 (if you just want the DCR payload JSON file):
+ Option 2: Outputs **ready-to-deploy ARM template files** and **the DCR JSON files** separately for you to deploy via other means. You need to set the `GetDcrPayload` parameter.
```powershell
- $dcrJson = Get-DCRJson -ResourceGroupName $rgName -WorkspaceName $workspaceName -PlatformType $platformType $dcrJson | ConvertTo-Json -Depth 10 | Out-File "<filepath>\OutputFiles\dcr_output.json"
+ .\WorkspaceConfigToDCRMigrationTool.ps1 -SubscriptionId $subId -ResourceGroupName $rgName -WorkspaceName $workspaceName -DCRName $dcrName -Location $location -FolderPath $folderPath -GetDcrPayload
``` **Parameters**
To install DCR Config Generator:
| `WorkspaceName` | Yes | Name of the target workspace. | | `DCRName` | Yes | Name of the new DCR. | | `Location` | Yes | Region location for the new DCR. |
- | `FolderPath` | No | Path in which to save the new data collection rules. By default, Azure Monitor uses the current directory. |
+ | `GetDcrPayload` | No | When set, it generates additional DCR JSON files. |
+ | `FolderPath` | No | Path in which to save the ARM template files and JSON files (optional). By default, Azure Monitor uses the current directory. |
-1. Review the output data collection rules. The script can produce two types of ARM template files, depending on the agent configuration in the target workspace:
+1. Review the output ARM template files. The script can produce two types of ARM template files, depending on the agent configuration in the target workspace:
- Windows ARM template and parameter files - if the target workspace contains Windows performance counters or Windows events. - Linux ARM template and parameter files - if the target workspace contains Linux performance counters or Linux Syslog events.
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
To send data to Log Analytics, create the data collection rule in the **same reg
[ ![Screenshot showing the Resources tab of the Data Collection Rules screen.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-virtual-machines-with-endpoint.png#lightbox) + 1. On the **Collect and deliver** tab, select **Add data source** to add a data source and set a destination. 1. Select a **Data source type**. 1. Select which data you want to collect. For performance counters, you can select from a predefined set of objects and their sampling rate. For events, you can select from a set of logs and severity levels.
azure-monitor Alerts Manage Alert Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-manage-alert-rules.md
Manage your alert rules in the Azure portal, or using the CLI or PowerShell.
1. From the top command bar, select **Alert rules**. You'll see all of your alert rules across subscriptions. You can filter the list of rules using the available filters: **Resource group**, **Resource type**, **Resource** and **Signal type**. 1. Select the alert rule that you want to edit. You can select multiple alert rules and enable or disable them. Multi-selecting rules can be useful when you want to perform maintenance on specific resources. 1. Edit any of the fields in the following sections. You can't edit the **Alert Rule Name**, **Scope**, or **Signal type** of an existing alert rule.
- - **Condition**. Learn more about conditions for [metric alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=metric#tabpanel_1_metric), [log alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=log#tabpanel_1_log), and [activity log alert rules](/azure/azure-monitor/alerts/alerts-create-new-alert-rule?tabs=activity-log#tabpanel_1_activity-log)
+ - **Condition**. Learn more about conditions for [metric alert rules](./alerts-create-new-alert-rule.md?tabs=metric#tabpanel_1_metric), [log alert rules](./alerts-create-new-alert-rule.md?tabs=log#tabpanel_1_log), and [activity log alert rules](./alerts-create-new-alert-rule.md?tabs=activity-log#tabpanel_1_activity-log)
- **Actions** - **Alert rule details** 1. Select **Save** on the top command bar.
azure-monitor Auto Collect Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/auto-collect-dependencies.md
Below is the currently supported list of dependency calls that are automatically
| |-| | [HTTP](https://nodejs.org/api/http.html), [HTTPS](https://nodejs.org/api/https.html) | 0.10+ | | <b>Storage clients</b> | |
-| [Redis](https://www.npmjs.com/package/redis) | 2.x |
+| [Redis](https://www.npmjs.com/package/redis) | 2.x - 3.x |
| [MongoDb](https://www.npmjs.com/package/mongodb); [MongoDb Core](https://www.npmjs.com/package/mongodb-core) | 2.x - 3.x |
-| [MySQL](https://www.npmjs.com/package/mysql) | 2.0.0 - 2.16.x |
-| [PostgreSql](https://www.npmjs.com/package/pg); | 6.x - 7.x |
+| [MySQL](https://www.npmjs.com/package/mysql) | 2.x |
+| [PostgreSql](https://www.npmjs.com/package/pg); | 6.x - 8.x |
| [pg-pool](https://www.npmjs.com/package/pg-pool) | 1.x - 2.x | | <b>Logging libraries</b> | | | [console](https://nodejs.org/api/console.html) | 0.10+ |
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
To migrate to diagnostic settings export:
2. [Migrate application to workspace-based](convert-classic-resource.md). 3. [Enable diagnostic settings export](create-workspace-resource.md#export-telemetry). Select **Diagnostic settings > add diagnostic setting** from within your Application Insights resource.
+> [!CAUTION]
+> If you want to store diagnostic logs in a Log Analytics workspace, there are two things to consider to avoid seeing duplicate data in Application Insights:
+> * The destination can't be the same Log Analytics workspace that your Application Insights resource is based on.
+> * The Application Insights user can't have access to both the Application Insights resource and the workspace created for diagnostic logs. You can restrict this access by using [Azure role-based access control (Azure RBAC)](./resources-roles-access-control.md).
+ <!--Link references--> [exportasa]: ../../stream-analytics/app-insights-export-sql-stream-analytics.md
-[roles]: ./resources-roles-access-control.md
+[roles]: ./resources-roles-access-control.md
azure-monitor Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/autoscale/autoscale-overview.md
This article describes Microsoft Azure autoscale and its benefits.
Azure autoscale supports many resource types. For more information about supported resources, see [autoscale supported resources](#supported-services-for-autoscale). > [!NOTE]
-> [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](/azure/virtual-machine-scale-sets/overview) for faster and more reliable autoscale support.
+> [Availability sets](/archive/blogs/kaevans/autoscaling-azurevirtual-machines) are an older scaling feature for virtual machines with limited support. We recommend migrating to [virtual machine scale sets](../../virtual-machine-scale-sets/overview.md) for faster and more reliable autoscale support.
## What is autoscale
In contrast, scaling up and down, or vertical scaling, keeps the number of resou
### Predictive autoscale (preview)
-[Predictive autoscale](/azure/azure-monitor/autoscale/autoscale-predictive) uses machine learning to help manage and scale Azure virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
+[Predictive autoscale](./autoscale-predictive.md) uses machine learning to help manage and scale Azure virtual machine scale sets with cyclical workload patterns. It forecasts the overall CPU load on your virtual machine scale set, based on historical CPU usage patterns. The scale set can then be scaled out in time to meet the predicted demand.
## Autoscale setup
Some commonly used metrics include CPU usage, memory usage, thread counts, queue
### Custom metrics
-Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](/azure/azure-monitor/app/app-insights-overview) so you can use those metrics decide when to scale.
+Use your own custom metrics that your application generates. Configure your application to send metrics to [Application Insights](../app/app-insights-overview.md) so you can use those metrics to decide when to scale.
### Time
Rules can trigger one or more actions. Actions include:
+ Scale - Scale resources in or out. + Email - Send an email to the subscription admins, co-admins, and/or any other email address. + Webhooks - Call webhooks to trigger multiple complex actions inside or outside Azure. In Azure, you can:
- + Start an [Azure Automation runbook](/azure/automation/overview).
- + Call an [Azure Function](/azure/azure-functions/functions-overview).
- + Trigger an [Azure Logic App](/azure/logic-apps/logic-apps-overview).
+ + Start an [Azure Automation runbook](../../automation/overview.md).
+ + Call an [Azure Function](../../azure-functions/functions-overview.md).
+ + Trigger an [Azure Logic App](../../logic-apps/logic-apps-overview.md).
## Autoscale settings
The following services are supported by autoscale:
| Service | Schema & Documentation | | | |
-| Azure Virtual machines scale sets |[Overview of autoscale with Azure virtual machine scale sets](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview) |
+| Azure Virtual machines scale sets |[Overview of autoscale with Azure virtual machine scale sets](../../virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md) |
| Web apps |[Scaling Web Apps](autoscale-get-started.md) | | Azure API Management service|[Automatically scale an Azure API Management instance](../../api-management/api-management-howto-autoscale.md) | Azure Data Explorer Clusters|[Manage Azure Data Explorer clusters scaling to accommodate changing demand](/azure/data-explorer/manage-cluster-horizontal-scaling)|
The following services are supported by autoscale:
To learn more about autoscale, see the following resources: + [Azure Monitor autoscale common metrics](autoscale-common-metrics.md)
-+ [Scale virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)
-+ [Autoscale using Resource Manager templates for virtual machine scale sets](/azure/virtual-machine-scale-sets/tutorial-autoscale-powershell?toc=/azure/azure-monitor/toc.json)
++ [Scale virtual machine scale sets](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
++ [Autoscale using Resource Manager templates for virtual machine scale sets](../../virtual-machine-scale-sets/tutorial-autoscale-powershell.md?toc=%2fazure%2fazure-monitor%2ftoc.json)
+ [Best practices for Azure Monitor autoscale](autoscale-best-practices.md)
+ [Use autoscale actions to send email and webhook alert notifications](autoscale-webhook-email.md)
+ [Autoscale REST API](/rest/api/monitor/autoscalesettings)
+ [Troubleshooting virtual machine scale sets and autoscale](../../virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md)
-+ [Troubleshooting Azure Monitor autoscale](/azure/azure-monitor/autoscale/autoscale-troubleshoot)
++ [Troubleshooting Azure Monitor autoscale](./autoscale-troubleshoot.md)
azure-monitor Tutorial Outages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/tutorial-outages.md
In this tutorial, you will:
## Pre-requisites - Install [.NET 5.0 or above](https://dotnet.microsoft.com/download). -- Install [the Azure CLI](https://docs.microsoft.com/cli/azure/install-azure-cli).
+- Install [the Azure CLI](/cli/azure/install-azure-cli).
## Set up the test application
Now that you've discovered the web app in-guest change and understand next steps
## Next steps
-Learn more about [Change Analysis](./change-analysis.md).
+Learn more about [Change Analysis](./change-analysis.md).
azure-monitor Data Collection Transformations Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-transformations-structure.md
ms.reviewer: nikeist
# Structure of transformation in Azure Monitor (preview)
-[Transformations in Azure Monitor](/azure/azure-monitor/essentials/data-collection-transformations) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They are implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article provides details on how this query is structured and limitations on the KQL language allowed.
+[Transformations in Azure Monitor](./data-collection-transformations.md) allow you to filter or modify incoming data before it's stored in a Log Analytics workspace. They are implemented as a Kusto Query Language (KQL) statement in a [data collection rule (DCR)](data-collection-rule-overview.md). This article describes how this query is structured and the limitations on the KQL that's allowed.
## Transformation structure
Use [Identifier quoting](/azure/data-explorer/kusto/query/schema-entities/entity
## Next steps -- [Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
+- [Create a data collection rule](../agents/data-collection-rule-azure-monitor-agent.md) and an association to it from a virtual machine using the Azure Monitor agent.
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
Last updated 07/10/2022
# Migrate from diagnostic settings storage retention to Azure Storage lifecycle management
-This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal) for retention.
+This guide walks you through migrating from using Azure diagnostic settings storage retention to using [Azure Storage lifecycle management](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal) for retention.
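To make the target state concrete, the following sketch applies a lifecycle rule that deletes diagnostic log blobs 90 days after their last modification. The storage account name, rule name, prefix filter, and retention period are all illustrative placeholders.

```bash
# Write a lifecycle management policy (values below are illustrative).
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "delete-old-diagnostic-logs",
      "type": "Lifecycle",
      "definition": {
        "actions": {
          "baseBlob": { "delete": { "daysAfterModificationGreaterThan": 90 } }
        },
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "insights-logs-" ]
        }
      }
    }
  ]
}
EOF

# Apply the policy to the storage account that receives the diagnostic settings output.
az storage account management-policy create \
  --account-name mydiagstorageaccount \
  --resource-group myResourceGroup \
  --policy @policy.json
```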
## Prerequisites
To set the rule for a specific subscription, resource group, and function app na
## Next steps
-[Configure a lifecycle management policy](/azure/storage/blobs/lifecycle-management-policy-configure?tabs=azure-portal).
+[Configure a lifecycle management policy](../../storage/blobs/lifecycle-management-policy-configure.md?tabs=azure-portal).
azure-monitor Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/insights/azure-sql.md
While Azure SQL Analytics (preview) is free to use, consumption of diagnostics t
- [Create your own dashboards](../visualize/tutorial-logs-dashboards.md) showing Azure SQL data. - [Create alerts](../alerts/alerts-overview.md) when specific Azure SQL events occur. - [Monitor Azure SQL Database with Azure Monitor](/azure/azure-sql/database/monitoring-sql-database-azure-monitor)-- [Monitor Azure SQL Managed Instance with Azure Monitor](/azure/azure-sql/database/monitoring-sql-managed-instance-azure-monitor)
+- [Monitor Azure SQL Managed Instance with Azure Monitor](/azure/azure-sql/managed-instance/monitoring-sql-managed-instance-azure-monitor)
azure-monitor Analyze Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/analyze-usage.md
find where TimeGenerated > ago(24h) project _BilledSize, _IsBillable, Computer,
**Count of billable events by computer** ```kusto
-find where TimeGenerated > ago(24h) project _IsBillable, Computer
+find where TimeGenerated > ago(24h) project _IsBillable, Computer, Type
| where _IsBillable == true and Type != "Usage" | extend computerName = tolower(tostring(split(Computer, '.')[0])) | summarize eventCount = count() by computerName
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Last updated 01/27/2022
Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. The search job uses parallel processing and can run for hours across extremely large datasets. This article describes how to create a search job and how to query its resulting data. > [!NOTE]
-> The search job feature is currently in public preview and is not supported in workspaces with [customer-managed keys](customer-managed-keys.md).
+> The search job feature is currently in public preview and isn't supported in:
+> - Workspaces with [customer-managed keys](customer-managed-keys.md).
+> - The China East 2 region.
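If your Azure CLI version includes the Log Analytics search job commands, creating a search job might look roughly like the sketch below. The workspace, table name, query, and time range are placeholders, and the command group can differ between CLI versions, so treat this as an illustration rather than a definitive reference.

```bash
# Hypothetical example: create a search job whose results land in a *_SRCH table.
az monitor log-analytics workspace table search-job create \
  --resource-group myResourceGroup \
  --workspace-name myWorkspace \
  --name Syslog_SuspectedApp_SRCH \
  --search-query 'Syslog | where SyslogMessage has "suspected.exe"' \
  --limit 1000 \
  --start-search-time "2022-01-01T00:00:00Z" \
  --end-search-time "2022-01-08T00:00:00Z"
```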
## When to use search jobs
azure-monitor Workbooks View Designer Conversion Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-view-designer-conversion-overview.md
This is a workbook with a data types over time tab:
## Replicate the View Designer overview tile
-In View Designer, you can use the overview tile to represent and summarize the overall state. These are presented in seven tiles, ranging from numbers to charts. In workbooks, you can create similar visualizations and pin them to your [Azure portal Dashboard](/azure/azure-portal/azure-portal-dashboards). Just like the overview tiles in the Workspace summary, pinned workbook items will link directly to the workbook view.
+In View Designer, you can use the overview tile to represent and summarize the overall state. These are presented in seven tiles, ranging from numbers to charts. In workbooks, you can create similar visualizations and pin them to your [Azure portal Dashboard](../../azure-portal/azure-portal-dashboards.md). Just like the overview tiles in the Workspace summary, pinned workbook items will link directly to the workbook view.
You can also take advantage of the high level of customization features provided with Azure Dashboard, which allows auto refresh, moving, sizing, and more filtering for your pinned items and visualizations.
With workbooks, you can choose to query one or both sections of the view. Formul
## Next steps -- [Sample conversions](workbooks-view-designer-conversions.md)
+- [Sample conversions](workbooks-view-designer-conversions.md)
azure-monitor Vminsights Dependency Agent Maintenance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/vminsights-dependency-agent-maintenance.md
The Dependency Agent collects data about processes running on the virtual machin
- The Dependency Agent requires the Log Analytics Agent to be installed on the same machine. - On both the Windows and Linux versions, the Dependency Agent collects data using a user-space service and a kernel driver.
- - Dependency Agent supports the same [Windows versions Log Analytics Agent supports](/azure/azure-monitor/agents/agents-overview#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI.
+ - Dependency Agent supports the same [Windows versions Log Analytics Agent supports](../agents/agents-overview.md#supported-operating-systems), except Windows Server 2008 SP2 and Azure Stack HCI.
- For Linux, see [Dependency Agent Linux support](#dependency-agent-linux-support). ## Upgrade Dependency Agent
Since the Dependency agent works at the kernel level, support is also dependent
## Next steps
-If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
+If you want to stop monitoring your VMs for a while or remove VM insights entirely, see [Disable monitoring of your VMs in VM insights](../vm/vminsights-optout.md).
azure-netapp-files Azacsnap Cmd Ref Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-configure.md
When adding a *SAP HANA database* to the configuration, the following values are
### Backint coexistence
-[Azure Backup](/azure/backup/) service provides an alternate backup tool for SAP HANA, where database and log backups are streamed into the
+[Azure Backup](../backup/index.yml) service provides an alternate backup tool for SAP HANA, where database and log backups are streamed into the
Azure Backup Service. Some customers would like to combine the streaming backint-based backups with regular snapshot-based backups. However, backint-based backups block other methods of backup, such as using a files-based backup or a storage snapshot-based backup (for example, AzAcSnap). Guidance is provided on
-the Azure Backup site on how to [Run SAP HANA native client backup to local disk on a database with Azure Backup enabled](/azure/backup/sap-hana-db-manage#run-sap-hana-native-client-backup-to-local-disk-on-a-database-with-azure-backup-enabled).
+the Azure Backup site on how to [Run SAP HANA native client backup to local disk on a database with Azure Backup enabled](../backup/sap-hana-db-manage.md).
The process described in the Azure Backup documentation has been implemented with AzAcSnap to automatically do the following steps:
azure-netapp-files Azacsnap Cmd Ref Runbefore Runafter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-runbefore-runafter.md
The following list of environment variables is generated by `azacsnap` and passe
### Example usage An example usage for this new feature is to upload a snapshot to Azure Blob for archival purposes using the azcopy tool
-([Copy or move data to Azure Storage by using AzCopy](/azure/storage/common/storage-use-azcopy-v10)).
+([Copy or move data to Azure Storage by using AzCopy](../storage/common/storage-use-azcopy-v10.md)).
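For example, the upload step that such a script performs might look like this hedged sketch, where the archive path, storage account, container, and SAS token are all placeholders:

```bash
# Upload a snapshot archive to Azure Blob Storage for archival (all values are placeholders).
azcopy copy "/hana/backup/hana_snapshot_20220817.tgz" \
  "https://<targetstorageaccount>.blob.core.windows.net/<container>/hana_snapshot_20220817.tgz?<SAS>"
```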
The following crontab entry is a single line and runs `azacsnap` at five past midnight. Note the call to `snapshot-to-blob.sh` passing the snapshot name and snapshot prefix:
PORTAL_GENERATED_SAS="https://<targetstorageaccount>.blob.core.windows.net/<blob
## Next steps - [Take a backup](azacsnap-cmd-ref-backup.md)-- [Get snapshot details](azacsnap-cmd-ref-details.md)
+- [Get snapshot details](azacsnap-cmd-ref-details.md)
azure-netapp-files Azacsnap Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-preview.md
Return to this document for details on using the preview features.
> This section's content supplements [Configure Azure Application Consistent Snapshot tool](azacsnap-cmd-ref-configure.md) website page. Microsoft provides many storage options for deploying databases such as SAP HANA. Many of these options are detailed on the
-[Azure Storage types for SAP workload](/azure/virtual-machines/workloads/sap/planning-guide-storage) web page. Additionally there's a
-[Cost conscious solution with Azure premium storage](/azure/virtual-machines/workloads/sap/hana-vm-operations-storage#cost-conscious-solution-with-azure-premium-storage).
+[Azure Storage types for SAP workload](../virtual-machines/workloads/sap/planning-guide-storage.md) web page. Additionally there's a
+[Cost conscious solution with Azure premium storage](../virtual-machines/workloads/sap/hana-vm-operations-storage.md#cost-conscious-solution-with-azure-premium-storage).
AzAcSnap is able to take application-consistent database snapshots when deployed on this type of architecture (that is, a VM with Managed Disks). However, the setup for this platform is slightly more complicated because in this scenario we need to block I/O to the mountpoint (using `xfs_freeze`) before taking a snapshot of the Managed
The steps to follow to set up Azure Key Vault and store the Service Principal in
- [Get started](azacsnap-get-started.md) - [Test AzAcSnap](azacsnap-cmd-ref-test.md)-- [Back up using AzAcSnap](azacsnap-cmd-ref-backup.md)
+- [Back up using AzAcSnap](azacsnap-cmd-ref-backup.md)
azure-percept Azure Percept For Deepstream Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/deepstream/azure-percept-for-deepstream-overview.md
+
+ Title: Azure Percept for DeepStream overview
+description: A description of Azure Percept for DeepStream developer tools that provide a custom developer experience.
+++++ Last updated : 08/10/2022++
+# Azure Percept for DeepStream overview
+
+Azure Percept for DeepStream includes developer tools that provide a custom developer experience. It enables you to create NVIDIA DeepStream containers using Microsoft-based images and guidance, supported models from NVIDIA out of the box, and/or bring your own models.
+
+DeepStream is NVIDIA's toolkit to develop and deploy Vision AI applications and services. It provides multi-platform, scalable, Transport Layer Security (TLS)-encrypted security that can be deployed on-premises, on the edge, and in the cloud.
+
+## Azure Percept for DeepStream offers:
+
+- **Simplifying your development process**
+
+   Auto selection of AI model execution and inference provider: One of several execution providers, such as ONNX Runtime (ORT), CUDA, and TensorRT, is automatically selected to simplify your development process.
+
+- **Customizing Region of Interest (ROI) to enable your business scenario**
+
+ Region of Interest (ROI) configuration widget: Percept Player, a web app widget, is included for customizing ROIs to enable event detection for your business scenario.
+
+- **Simplifying the configuration for pre/post processing**
+
+ You can add a Python-based model and parser using a configuration file, instead of hardcoding it into the pipeline.
+
+- **Offering a broad Pre-built AI model framework**
+
+ This solution supports many of the most common CV models in use today, for example NVIDIA TAO, ONNX, CAFFE, UFF (TensorFlow), and Triton.
+
+- **Supporting bring your own model**
+
+ Support for model and container customization, USB or RTSP camera and pre-recorded video streams, event-based video snippet storage in Azure Storage and Alerts, and AI model deployment via Azure IoT Module Twin update.
+
+## Azure Percept for DeepStream key components
+
+The following table provides a list of Azure Percept for DeepStream's key components and a description of each one.
+
+| Components | Details |
+|-||
+| **Edge devices** | Azure Percept for DeepStream is available on the following devices:<br> - [Azure Stack HCI](/azure-stack/hci/overview): Requires a NVIDIA GPU (T4 or A2)<br> - [NVIDIA Jetson Orin](https://www.nvidia.com/autonomous-machines/embedded-systems/jetson-orin/)<br> - [NVIDIA Jetson Xavier](https://www.nvidia.com/autonomous-machines/embedded-systems/jetson-agx-xavier/)<br><br>**Note**<br>You can use any of the listed devices with any of the development paths. Some implementation steps may differ depending on the architecture of your device. Azure Stack HCI uses AMD64. Jetson devices use ARM64.<br><br> |
+| **Computer vision models** | Azure Percept for DeepStream can work with many different computer vision (CV) models as outlined:<br><br> - **NVIDIA Models** <br>For example: Body Pose Estimation and License Plate Recognition. License Plate Recognition includes three models: traffic cam net, license plate detection, and license plate reading, as well as other NVIDIA models.<br><br> - **ONNX Models** <br>For example: SSD-MobileNetV1, YOLOv4, Tiny YOLOv3, EfficientNet-Lite.<br><br> |
+| **Development Paths** | Azure Percept for DeepStream offers three development paths:<br><br> - **Getting started path** <br>This path uses pre-trained models and pre-recorded videos of a simulated manufacturing environment to demonstrate the steps required to create an Edge AI solution using Azure Percept for DeepStream.<br>If you're just getting started on your computer vision (CV) app journey or simply want to learn more about Azure Percept for DeepStream, we recommend this path.<br><br> - **Pre-built model path** <br>This path provides pre-built parsers in Python for the CV models outlined earlier. You can easily deploy one of these models and integrate your own video stream.<br>If you're familiar with Azure IoT Edge solutions and want to leverage one of the supported models with an existing video stream, we recommend this path. <br><br> - **Bring your own model (BYOM) path**<br>This path provides the steps to integrate your own custom model and parser into your Azure Percept for DeepStream Edge AI solution.<br>If you're an experienced developer who is familiar with cloud-based CV solutions and wants a simplified deployment experience with Azure Percept for DeepStream, we recommend this path.<br><br> |
+
+## Next steps
+
+Text to come.
+
+<!-- You're now ready to start using Azure Percept for DeepStream to create, manage, and deploy custom Edge AI solutions. We recommend the following resources to get started:
+
+- [Getting started checklist for Azure Percept for DeepStream](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EeWQwQ8T-LVDmTMqC62Gss0Bo_1Fbjj9I8mDSLYwlICd_Q?e=f9FajM)
+
+- [Tutorial: Deploy a supported model to your Azure Percept for DeepStream solution ](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EQ9Wux4CkO5Iss8s82lcZj4B9XCwagaVoUEKyK0q2y-A1w?e=YfOaWn) -->
azure-percept Azure Percept On Azure Stack Hci Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/hci/azure-percept-on-azure-stack-hci-overview.md
+
+ Title: Azure Percept on Azure Stack HCI overview
+description: A description of Azure Percept on Azure Stack HCI.
+++++ Last updated : 08/15/2022 ++
+# Azure Percept on Azure Stack HCI overview
+Azure Percept on Azure Stack HCI is a virtualized workload that enables you to extend the capabilities of your existing [Azure Stack HCI](https://azure.microsoft.com/products/azure-stack/hci/) deployments quickly and easily by adding sophisticated AI solutions at the Edge. It is available as a preconfigured virtual hard disk (VHDX) that functions as an Azure IoT Edge device with AI capabilities.
+
+## Azure Percept on Azure Stack HCI enables you to:
+
+### Maximize your investments easily
+Maximize your existing investments in the Azure Stack HCI computer infrastructure when you run Azure Percept on Azure Stack HCI. You can leverage [Windows Admin Center (WAC)](https://www.microsoft.com/windows-server/windows-admin-center) management expertise with the Azure Percept for Azure Stack HCI extension to ingest and analyze data streams from your existing IP camera infrastructure. Using WAC also enables you to easily deploy, manage, scale, and secure your Azure Percept virtual machine (VM).
+
+### Bring data to storage and compute
+Use Azure Stack HCI's robust storage and compute options to pre-process raw data at the Edge before sending it to Azure for further processing and training. Since artificial intelligence/machine learning (AI/ML) solutions at the edge generate and process a significant amount of data, using Azure Stack HCI reduces the amount of data transfer or bandwidth consumed into Azure.
+
+### Maintain device security
+Azure Percept on Azure Stack HCI provides multiple layers of security. Leverage security mechanisms and processes built into the solution, including virtual trusted platform module (TPM), secure boot, secure provisioning, trusted software, secure update, and [Microsoft Defender for IoT](https://www.microsoft.com/security/blog/2021/11/02/how-microsoft-defender-for-iot-can-secure-your-iot-devices/#:~:text=Microsoft%20Defender%20for%20IoT%20is%20an%20open%20platform,to%20enrich%20the%20information%20coming%20from%20multiple%20sources).
+
+## Key components of Azure Percept on Azure Stack HCI
+Azure Percept on Azure Stack HCI integrates with Azure Percept Studio, Azure IoT Edge, IoT Hub, and Spatial Analysis from Azure Cognitive Services to create an end-to-end intelligent solution that leverages your existing IP camera devices.
+
+The following diagram provides a high-level view of the Azure Percept on Azure Stack HCI architecture.
+
+![Architecture diagram for Azure Percept on Azure Stack HCI.](./media/azure-percept-component-diagram.png)
+
+**Azure Percept on Azure Stack HCI includes the following key components:**
+
+### Azure Stack HCI
+[Azure Stack HCI](https://azure.microsoft.com/products/azure-stack/hci/) is a hyperconverged infrastructure (HCI) cluster solution that hosts virtualized Windows and Linux workloads and their storage in a hybrid environment that combines on-premises infrastructure with Azure cloud services. It requires a minimum of two clustered compute nodes, scales to as many as 16 clustered nodes, and enables data pre-processing at the edge by providing robust storage and compute options. Azure Percept on Azure Stack HCI runs as a pre-configured VM on Azure Stack HCI and has failover capability to ensure continuous operation. For information about customizable solutions that you can configure to meet your needs, see [certified Azure Stack HCI systems](https://azurestackhcisolutions.azure.microsoft.com/#/catalog).
+
+### Azure Percept virtual machine (VM)
+The Azure Percept VM leverages a virtual hard disk (VHDX) that runs on the Azure Stack HCI device. It enables you to host your own AI models, communicate with the cloud via IoT Hub, and update the Azure Percept virtual machine (VM) so you can update containers, download models, and manage devices remotely.
+
+The Percept VM leverages Azure IoT Edge to communicate with [Azure IoT Hub](https://www.bing.com/aclk?ld=e8d3D-tqxgHU7f2fug-xNf9TVUCUyRhu5fu58-tWHmwhmAtKIzkXCQETOv1QnKdXCr1kFm6NQ4SA4K5mukLPrpKC5z7nTlhrXnaiTqPPGu2a47SnDq-aKylUzhYQLxKs1yyOtnDuD1DDg4q04CZdFUFwPani9jnp6DLiQPMoYBkhhEJ3FV6SFro1VVB67p_n_4De1B7A&u=aHR0cHMlM2ElMmYlMmZhenVyZS5taWNyb3NvZnQuY29tJTJmZW4tdXMlMmZmcmVlJTJmaW90JTJmJTNmT0NJRCUzZEFJRDIyMDAyNzdfU0VNX2VhM2NkYWExN2Y5MzFkNDE2NTkwYjgyMjdlMjk0ZjdmJTNhRyUzYXMlMjZlZl9pZCUzZGVhM2NkYWExN2Y5MzFkNDE2NTkwYjgyMjdlMjk0ZjdmJTNhRyUzYXMlMjZtc2Nsa2lkJTNkZWEzY2RhYTE3ZjkzMWQ0MTY1OTBiODIyN2UyOTRmN2Y&rlid=ea3cdaa17f931d416590b8227e294f7f&ntb=1). It runs locally and securely, performs AI inferencing at the Edge, and communicates with Azure services for security and updates. It includes [Defender for IoT](https://www.bing.com/ck/a?!&&p=4b4f5983a77f5d870170a12cd507a8d967bd32e10eab125544ac7aad1691be23JmltdHM9MTY1Mjc1MzE3OCZpZ3VpZD1mZmQyZGJiNi1iOWFlLTRiYjgtOTQ1MC1iM2FlNmQ1ZTBlNmUmaW5zaWQ9NTQ1Mg&ptn=3&fclid=f087fcb3-d585-11ec-b34a-9f80cb12a098&u=a1aHR0cHM6Ly9henVyZS5taWNyb3NvZnQuY29tL2VuLXVzL3NlcnZpY2VzL2lvdC1kZWZlbmRlci8&ntb=1) to provide a lightweight security agent that proactively monitors for security threats like botnets, brute force attempts, crypto miners, malware, and chatbots, that you can also integrate into your Azure Monitor infrastructure.
+
+### Azure Percept Windows Admin Center Extension (WAC)
+[Windows Admin Center (WAC)](https://www.microsoft.com/windows-server/windows-admin-center) is a locally deployed application accessed via your browser for managing Azure Stack HCI clusters, Windows Server, and more. Azure Percept on Azure Stack HCI is installed through a WAC extension that guides the user through configuring and deploying the Percept VM and related services. It creates a secure and performant AI video inferencing solution usable from the edge to the cloud.
+
+### Azure Percept Solution Development Paths
+Whether you're a beginner, an expert, or anywhere in between, from zero to low code, to creating or bringing your own models, Azure Percept has a solution development path for you to build your Edge artificial intelligence (AI) solution. Azure Percept has three solution development paths that you can use to build Edge AI solutions: Azure Percept Studio, Azure Percept for DeepStream, and Azure Percept Open-Source Project. You aren't limited to one path; you can choose any or all of them depending on your business needs. For more information about the solution development paths, visit [Azure Percept solution development paths overview](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EU92ZnNynDBGuVn3P5Xr5gcBFKS5HQguZm7O5sEENPUvPA?e=33T6Vi).
+
+#### *Azure Percept Studio*
+[Azure Percept Studio](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EeyEj0dBcplEs9LSFaz95DsBApnmxRMdjZ9I3QinSgO0yA?e=cbIJkI) is a user-friendly portal for creating, deploying, and operating Edge artificial intelligence (AI) solutions. Using a low-code to no-code approach, you can discover and complete guided workflows and create an end-to-end Edge AI solution. This solution integrates Azure IoT and Azure AI cloud services like Azure IoT Hub, IoT Edge, Azure Storage, Log Analytics, and Spatial Analysis from Azure Cognitive Services.
+
+#### *Azure Percept for DeepStream*
+[Azure Percept for DeepStream](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/ETDSdi6ruptBkwMqvLPRL90Bzv3ORhpmAZ1YLeGt1LvtVA?e=lY2Q4f&CID=DDDB383F-4BFE-4C97-86A7-70766B16EB93&wdLOR=cDA23C19C-5685-46EC-BA28-7C9DEC460A5B&isSPOFile=1&clickparams=eyJBcHBOYW1lIjoiVGVhbXMtRGVza3RvcCIsIkFwcFZlcnNpb24iOiIyNy8yMjA3MzEwMTAwNSIsIkhhc0ZlZGVyYXRlZFVzZXIiOmZhbHNlfQ%3D%3D) includes developer tools that provide a custom developer experience. It enables you to create NVIDIA DeepStream containers using Microsoft-based images and guidance, supported models from NVIDIA out of the box, and/or bring your own models (BYOM). DeepStream is NVIDIA's toolkit to develop and deploy Vision AI applications and services. It provides multi-platform, scalable, Transport Layer Security (TLS)-encrypted security that can be deployed on-premises, on the edge, and in the cloud.
+
+#### *Azure Percept Open-Source Project*
+[Azure Percept Open-Source Project](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/Eeoh0pZk5g1MqwJZUAZFEvEBMYmfAqdibII6Znm-PnnDIQ?e=4ZDfUT) is a framework for creating, deploying, and operating Edge artificial intelligence (AI) solutions at scale with the control and flexibility of open-source natively on your environment. Azure Percept Open-Source Project is fully open-sourced and leverages the open-source software (OSS) community to deliver enhanced experiences. It's a self-managed solution where you host the environment in your own cluster.
+
+## Next steps
+
+Text to come.
+
+<!-- Before you start setting up your Azure Percept virtual machine (VM), we recommend the following articles:
+- [Getting started checklist for Azure Percept on Azure Stack HCI](https://github.com/microsoft/santa-cruz-workload/blob/main/articles/getting-started-checklist-for-azure-percept.md)
+- [Azure Percept solution development paths overview](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EU92ZnNynDBGuVn3P5Xr5gcBFKS5HQguZm7O5sEENPUvPA?e=DKZtr6)
+
+If you're ready to start setting up your Azure Percept virtual machine (VM), we recommend the following tutorial:
+- [Tutorial: Setting up Azure Percept on Azure Stack HCI using WAC extension (Cluster server)](https://github.com/microsoft/santa-cruz-workload/blob/main/articles/tutorial-setting-up-azure-percept-using-wac-extension-cluster.md) -->
azure-percept Azure Percept Open Source Project Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/open-source/azure-percept-open-source-project-overview.md
+
+ Title: Azure Percept Open-Source Project overview
+description: An overview of the Azure Percept Open-Source project
+++++ Last updated : 08/17/2022 ++
+# Azure Percept Open-Source Project overview
+
+Azure Percept Open-Source Project is a framework for creating, deploying, and operating Edge artificial intelligence (AI) solutions at scale with the control and flexibility of open-source natively on your environment. It's fully open-sourced and leverages the open-source software (OSS) community to deliver enhanced experiences. And, as a self-managed solution, you can host the experience on your own Kubernetes clusters.
+
+Azure Percept Open-Source Project has a no- to low-code portal experience as well as APIs that can be used to build custom Edge AI applications. It supports running Edge AI apps by utilizing cameras and Edge devices with different Edge runtimes and accelerators across multiple locations at scale. Since it's designed with machine learning operations (MLOps) in mind, it provides support for active learning, continuous training, and data gathering using your machine learning (ML) models running at the edge.
+
+## Azure Percept Open-Source Project offers
+
+- **An integrated developer experience**
+
+  You can easily build camera-based Edge AI apps using first- and third-party ML models. In one seamless flow, you can leverage pre-built models from our partner's Model Zoo and create your own ML models with Azure Custom Vision.
+
+- **Solution deployment and management experience at scale**
+
+ Azure Percept Open-Source Project is Kubernetes native, so you can run the experience wherever Kubernetes runs; on-premises, hybrid, cloud, or multicloud environments. You can manage your experience using Kubernetes native tools such as Kubectl, our unique command line interface (CLI), and/or our no- to low-code native web portal. Edge AI apps and assets you create are projected and managed as Kubernetes objects, which allows you to rely on the Kubernetes control plane to manage the state of your Edge AI assets across many environments at scale.
+
+- **Standard-based**
+
+ Azure Percept Open-Source Project is built on and supports popular industrial standards, protocols, and frameworks like Open Platform Communications Unified Architecture (OPC-UA), Open Network Video Interface Forum (ONVIF), OpenTelemetry, CloudEvents, Distributed Application Runtime (Dapr), Message Queuing Telemetry Transport (MQTT), Open Neural Network Exchange (ONNX), Akri, Kubectl, Helm, and many others.
+
+- **Zero-friction adoption**
+
+  Even without any Edge hardware, you can get started with a few commands, then seamlessly transition from prototype/pilot to production at scale. Azure Percept Open-Source Project has an easy-to-use no- to low-code portal experience that allows developers to create and manage Edge AI solutions in minutes instead of days or months.
+
+- **Azure powered and platform agnostic**
+
+  Azure Percept Open-Source Project natively uses and supports Azure Edge and AI Services like Azure IoT Hub, Azure IoT Edge, Azure Cognitive Services, Azure Storage, Azure ML, and so on. At the same time, it also allows you to modify the experience for use cases that require the use of other services (Azure or non-Azure) or other Open-Source Software (OSS) tools.
+
+## Next steps
+
+Text to come.
+
+<!-- You're now ready to start using Azure Percept Open-Source Project. We recommend the following resources to get started.
+
+ - TBD (getting started) How to get started and setup Azure Percept Open-Source Project
+
+- [Introduction to Azure Percept for Open-Source Project core concepts](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/EQwRE6w96T1OiO_kstWw1lMBs1yZFUow_ik3kx3rV12EVg?e=bactOi)
+
+- [Tutorial: Create an Edge AI solution with Azure Percept for Open-Source Project](https://microsoft.sharepoint-df.com/:w:/t/AzurePerceptHCIDocumentation/ERF8mxgtOqhIt2YJWFafuZoBC6kZ6hC-iRAMuCJeyZjD-w?e=BS4cN5)
+-->
azure-percept Overview Azure Percept Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/overview-azure-percept-studio.md
Title: Azure Percept Studio overview
+ Title: Azure Percept Studio overview v1
description: Learn more about Azure Percept Studio
Last updated 03/23/2021
-# Azure Percept Studio overview
+# Azure Percept Studio overview v1
[Azure Percept Studio](https://go.microsoft.com/fwlink/?linkid=2135819) is the single launch point for creating edge AI models and solutions. Azure Percept Studio allows you to discover and complete guided workflows that make it easy to integrate edge AI-capable hardware and powerful Azure AI and IoT cloud services.
azure-percept Azure Percept Studio Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/studio/azure-percept-studio-overview.md
+
+ Title: Azure Percept Studio overview
+description: Description of Azure Percept Studio.
+++++ Last updated : 08/08/2022++
+# Azure Percept Studio overview
+
+Azure Percept Studio is a user-friendly portal for creating, deploying, and operating Edge artificial intelligence (AI) solutions. Using a low-code to no-code approach, you can discover and complete guided workflows and create an end-to-end Edge AI solution. This solution integrates Azure IoT and Azure AI cloud services like Azure IoT Hub, IoT Edge, Azure Storage, Log Analytics, and Spatial Analysis from Azure Cognitive Services.
+
+With Azure Percept Studio, you can connect your Edge AI compute devices and cameras and then configure and apply the pre-built AI skills included with Azure Percept Studio to automate and transform your operations at the edge. For example, you can use your cameras to count people in an area, detect when people cross a line, or detect when people enter/exit a restricted or secured area. You can then use AI skills to help you analyze this data in real-time so you can manage queues, space utilization, and occupancy, like a store entrance or exit, a curbside pickup area, or intruders on secure premises.
+
+## Azure Percept Studio offers:
+
+- **No code, low code integrated flows**
+
+ Whether you're a beginner or an advanced developer working on a pilot solution, Azure Percept Studio offers access to well-integrated workflows that you can use to reduce friction around building Edge AI solutions. You can create a pilot Edge AI solution in 10 minutes.
+
+- **People understanding AI skills**
+
+  Azure Spatial Analysis is fully integrated in Azure Percept. Spatial Analysis detects the presence and movements of people in real-time video feeds from IP cameras. There are three skills available around people understanding: counting people in an area, detecting when people cross a line, and detecting when people enter/exit an area.
+
+- **Gain insights and act**
+
+ Once your solution is created, you can operate your devices and solutions remotely, monitor multiple video streams, and create live inference telemetry. To optimize your operations at the Edge, you can then aggregate inference data over time and derive insights and trends that you can use in real time to create alerts that help you be proactive instead of reactive.
+
+## Next steps
+
+Text to come.
+
+<!-- If you haven't set up your Azure Percept on Azure Stack HCI, we recommend the following tutorial to start setting up your VM using Azure Percept Windows Admin Center Extension (WAC):
+
+- [Set up Azure Percept on Azure Stack HCI using WAC extensions](set-up-azure-percept-using-wac-extension-cluster.md)
+
+If you have already set up your Azure Percept on Azure Stack HCI and are ready to start building your edge AI solution, we recommend the following tutorial:
+
+- [Create a no-code Edge AI solution using Azure Percept Studio](AzP%20Studio%20Guide.md).-->
azure-resource-manager Deploy Service Catalog Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/deploy-service-catalog-quickstart.md
Title: Use Azure portal to deploy service catalog app
-description: Shows consumers of Managed Applications how to deploy a service catalog app through the Azure portal.
+ Title: Use Azure portal to deploy service catalog managed application
+description: Shows consumers of Azure Managed Applications how to deploy a service catalog managed application from the Azure portal.
-- Previously updated : 10/04/2018 + Last updated : 08/17/2022
-# Quickstart: Deploy service catalog app through Azure portal
-In the [preceding quickstart](publish-service-catalog-app.md), you published a managed application definition. In this quickstart, you create a service catalog app from that definition.
+# Quickstart: Deploy service catalog managed application from Azure portal
+
+In the quickstart article to [publish the definition](publish-service-catalog-app.md), you published an Azure managed application definition. In this quickstart, you use that definition to deploy a service catalog managed application. The deployment creates two resource groups. One resource group contains the managed application and the other is a managed resource group for the deployed resource. In this article, the managed application definition deploys a managed storage account.
+
+## Prerequisites
-## Create service catalog app
+To complete this quickstart, you need an Azure account with an active subscription. If you completed the quickstart to publish a definition, you should already have an account. Otherwise, [create a free account](https://azure.microsoft.com/free/) before you begin.
+
+## Create service catalog managed application
In the Azure portal, use the following steps:
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Create a resource**.
- ![Create a resource](./media/deploy-service-catalog-quickstart/create-new.png)
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/create-resource.png" alt-text="Create a resource":::
+
+1. Search for _Service Catalog Managed Application_ and select it from the available options.
-1. Search for **Service Catalog Managed Application** and select it from the available options.
+1. **Service Catalog Managed Application** is displayed. Select **Create**.
- ![Search for service catalog application](./media/deploy-service-catalog-quickstart/select-service-catalog.png)
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/create-service-catalog-managed-application.png" alt-text="Select create":::
-1. You see a description of the Managed Application service. Select **Create**.
+1. The portal shows the managed application definitions that you can access. From the available definitions, select the one you want to deploy. In this quickstart, use the **Managed Storage Account** definition that you created in the preceding quickstart. Select **Create**.
- ![Select create](./media/deploy-service-catalog-quickstart/create-service-catalog.png)
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/select-service-catalog-managed-application.png" alt-text="Screenshot that shows managed application definitions that you can select and deploy.":::
-1. The portal shows the managed application definitions that you have access to. From the available definitions, select the one you wish to deploy. In this quickstart, use the **Managed Storage Account** definition that you created in the preceding quickstart. Select **Create**.
+1. Provide values for the **Basics** tab and select **Next: Storage settings**.
- ![Select definition to deploy](./media/deploy-service-catalog-quickstart/select-definition.png)
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/basics-info.png" alt-text="Screenshot that highlights the information needed on the basics tab.":::
-1. Provide values for the **Basics** tab. Select the Azure subscription to deploy your service catalog app to. Create a new resource group named **applicationGroup**. Select a location for your app. When finished, select **OK**.
+ - **Subscription**: Select the subscription where you want to deploy the managed application.
+ - **Resource group**: Select the resource group. For this example, create a resource group named _applicationGroup_.
+ - **Region**: Select the location where you want to deploy the resource.
+ - **Application Name**: Enter a name for your application. For this example, use _demoManagedApplication_.
+ - **Managed Resource Group**: Uses a default name in the format `mrg-{definitionName}-{dateTime}` like the example _mrg-ManagedStorage-20220817085240_. You can change the name.
- ![Provide values for basic](./media/deploy-service-catalog-quickstart/provide-basics.png)
+1. Enter a prefix for the storage account name and select the storage account type. Select **Next: Review + create**.
-1. Provide a prefix for the storage account name. Select the type of storage account to create. When finished, select **OK**.
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/storage-info.png" alt-text="Screenshot that shows the information needed to create a storage account.":::
- ![Provide values for storage](./media/deploy-service-catalog-quickstart/provide-storage.png)
+ - **Storage account name prefix**: Use only lowercase letters and numbers and a maximum of 11 characters. During deployment, the prefix is concatenated with a unique string to create the storage account name.
+ - **Storage account type**: Select **Change type** to choose a storage account type. The default is Standard LRS.
-1. Review the summary. After validation succeeds, select **OK** to begin deployment.
+1. Review the summary of the values you selected and verify **Validation Passed** is displayed. Select **Create** to begin the deployment.
- ![View summary](./media/deploy-service-catalog-quickstart/view-summary.png)
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/summary-validation.png" alt-text="Screenshot that summarizes the values you selected and shows the validation status.":::
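The portal steps above can also be scripted. The following Azure CLI sketch assumes the definition from the preceding quickstart; the definition resource group name, location, managed resource group ID, and parameter names are assumptions from this example rather than required values.

```bash
# Look up the managed application definition published in the preceding quickstart
# (definition and resource group names are assumptions from this example).
appDefinitionId=$(az managedapp definition show \
  --name ManagedStorage \
  --resource-group appDefinitionGroup \
  --query id --output tsv)

# Deploy the service catalog managed application into applicationGroup.
az managedapp create \
  --name demoManagedApplication \
  --resource-group applicationGroup \
  --location eastus \
  --kind ServiceCatalog \
  --managedapp-definition-id "$appDefinitionId" \
  --managed-rg-id "/subscriptions/<subscription-id>/resourceGroups/mrg-ManagedStorage-demo" \
  --parameters '{ "storageAccountNamePrefix": { "value": "demoappstg" }, "storageAccountType": { "value": "Standard_LRS" } }'
```

Either way, the deployment produces the two resource groups described in the next section.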
## View results
-After the service catalog app has been deployed, you have two new resource groups. One resource group holds the service catalog app. The other resource group holds the resources for the service catalog app.
+After the service catalog managed application is deployed, you have two new resource groups. One resource group contains the managed application. The other resource group contains the managed resource that was deployed. In this example, a managed storage account.
+
+### Managed application
+
+Go to the resource group named **applicationGroup**. The resource group contains your managed application named _demoManagedApplication_.
+
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/view-application-group.png" alt-text="Screenshot that shows the resource group that contains the managed application.":::
+
+### Managed resource
+
+Go to the managed resource group with the name prefix **mrg-ManagedStorage** to see the resource that was deployed. The resource group contains the managed storage account that uses the prefix you specified. In this example, the storage account prefix is _demoappstg_.
+
+ :::image type="content" source="./media/deploy-service-catalog-quickstart/view-managed-resource-group.png" alt-text="Screenshot that shows the managed resource group that contains the resource deployed by the managed application.":::
+
+The storage account that's created by the managed application has a role assignment. In the [publish the definition](publish-service-catalog-app.md#create-an-azure-active-directory-user-group-or-application) article, you created an Azure Active Directory group. That group was used in the managed application definition. When you deployed the managed application, a role assignment for that group was added to the managed storage account.
+
+To see the role assignment from the Azure portal:
+
+1. Go to the **mrg-ManagedStorage** resource group.
+1. Select **Access Control (IAM)** > **Role assignments**.
+
+ You can also view the resource's **Deny assignments**.
+
+The role assignment gives the application's publisher access to manage the storage account. In this example, the publisher might be your IT department. The _Deny assignment_ prevents customers from making changes to a managed resource's configuration. Managed apps are designed so that customers don't need to maintain the resources. The _Deny assignment_ excludes the Azure Active Directory group that was assigned in **Role assignments**.
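You can check the same assignments from the command line. Here's a small sketch, using the managed resource group name from this example (yours will differ):

```bash
# List role assignments scoped to the managed resource group (name is an example).
az role assignment list \
  --resource-group mrg-ManagedStorage-20220817085240 \
  --output table
```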
+
+## Clean up resources
-1. View the resource group named **applicationGroup** to see the service catalog app.
+When you're finished with the managed application, you can delete the resource groups, which removes all the resources you created. For example, in this quickstart you created the resource group _applicationGroup_ and a managed resource group with the prefix _mrg-ManagedStorage_.
- ![View application](./media/deploy-service-catalog-quickstart/view-managed-application.png)
+1. From Azure portal **Home**, in the search field, enter _resource groups_.
+1. Select **Resource groups**.
+1. Select **applicationGroup** and **Delete resource group**.
+1. To confirm the deletion, enter the resource group name and select **Delete**.
-1. View the resource group named **applicationGroup{hash-characters}** to see the resources for the service catalog app.
+When the resource group that contains the managed application is deleted, the managed resource group is also deleted. In this example, when _applicationGroup_ is deleted the _mrg-ManagedStorage_ resource group is also deleted.
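If you prefer to clean up from the command line, a minimal sketch using the resource group name from this example:

```bash
# Deleting the application resource group also removes the associated managed resource group.
az group delete --name applicationGroup --yes
```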
- ![View resources](./media/deploy-service-catalog-quickstart/view-resources.png)
+If you want to delete the managed application definition, you can delete the resource groups you created in the quickstart to [publish the definition](publish-service-catalog-app.md).
## Next steps
-* To learn how to create the definition files for a managed application, see [Create and publish a managed application definition](publish-service-catalog-app.md).
-* For Azure CLI, see [Deploy service catalog app with Azure CLI](./scripts/managed-application-cli-sample-create-application.md).
-* For PowerShell, see [Deploy service catalog app with PowerShell](./scripts/managed-application-poweshell-sample-create-application.md).
+- To learn how to create the definition files for a managed application, see [Quickstart: Create and publish an Azure Managed Application definition](publish-service-catalog-app.md).
+- For Azure CLI, see [Deploy managed application with Azure CLI](./scripts/managed-application-cli-sample-create-application.md).
+- For PowerShell, see [Deploy managed application with PowerShell](./scripts/managed-application-poweshell-sample-create-application.md).
azure-resource-manager Publish Service Catalog App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/publish-service-catalog-app.md
To publish a managed application to your service catalog, do the following tasks
To complete this quickstart, you need the following items: -- If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- An Azure account with an active subscription. If you don't have an account, [create a free account](https://azure.microsoft.com/free/) before you begin.
- [Visual Studio Code](https://code.visualstudio.com/) with the latest [Azure Resource Manager Tools extension](https://marketplace.visualstudio.com/items?itemName=msazurermtools.azurerm-vscode-tools). - Install the latest version of [Azure PowerShell](/powershell/azure/install-az-ps) or [Azure CLI](/cli/azure/install-azure-cli).
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/best-practices.md
The following information can be helpful when you work with [resources](./syntax
] ```
- For more details about comments and metadata see [Understand the structure and syntax of ARM templates](/azure/azure-resource-manager/templates/syntax#comments-and-metadata).
+ For more details about comments and metadata see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata).
* If you use a *public endpoint* in your template (such as an Azure Blob storage public endpoint), *don't hard-code* the namespace. Use the `reference` function to dynamically retrieve the namespace. You can use this approach to deploy the template to different public namespace environments without manually changing the endpoint in the template. Set the API version to the same version that you're using for the storage account in your template.
The following information can be helpful when you work with [resources](./syntax
## Comments
-In addition to the `comments` property, comments using the `//` syntax are supported. For more details about comments and metadata see [Understand the structure and syntax of ARM templates](/azure/azure-resource-manager/templates/syntax#comments-and-metadata). You may choose to save JSON files that contain `//` comments using the `.jsonc` file extension, to indicate the JSON file contains comments. The ARM service will also accept comments in any JSON file including parameters files.
+In addition to the `comments` property, comments using the `//` syntax are supported. For more details about comments and metadata see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata). You may choose to save JSON files that contain `//` comments using the `.jsonc` file extension, to indicate the JSON file contains comments. The ARM service will also accept comments in any JSON file including parameters files.
## Visual Studio Code ARM Tools
After you've completed your template, run the test toolkit to see if there are w
## Next steps * For information about the structure of the template file, see [Understand the structure and syntax of ARM templates](./syntax.md).
-* For recommendations about how to build templates that work in all Azure cloud environments, see [Develop ARM templates for cloud consistency](./template-cloud-consistency.md).
+* For recommendations about how to build templates that work in all Azure cloud environments, see [Develop ARM templates for cloud consistency](./template-cloud-consistency.md).
azure-resource-manager Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/deploy-cli.md
az deployment group create \
--parameters storageAccountType=Standard_GRS ```
-The value of the `--template-file` parameter must be a Bicep file or a `.json` or `.jsonc` file. The `.jsonc` file extension indicates the file can contain `//` style comments. The ARM system accepts `//` comments in `.json` files. It does not care about the file extension. For more details about comments and metadata see [Understand the structure and syntax of ARM templates](/azure/azure-resource-manager/templates/syntax#comments-and-metadata).
+The value of the `--template-file` parameter must be a Bicep file or a `.json` or `.jsonc` file. The `.jsonc` file extension indicates the file can contain `//` style comments. The ARM system accepts `//` comments in `.json` files. It does not care about the file extension. For more details about comments and metadata see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata).
The Azure deployment template can take a few minutes to complete. When it finishes, you see a message that includes the result:
az deployment group create \
--template-file storage.json \ --parameters '@storage.parameters.jsonc' ```
-For more details about comments and metadata see [Understand the structure and syntax of ARM templates](/azure/azure-resource-manager/templates/syntax#comments-and-metadata).
+For more details about comments and metadata see [Understand the structure and syntax of ARM templates](./syntax.md#comments-and-metadata).
If you are using Azure CLI with version 2.3.0 or older, you can deploy a template with multi-line strings or comments using the `--handle-extended-json-format` switch. For example:
If you are using Azure CLI with version 2.3.0 or older, you can deploy a templat
* To roll back to a successful deployment when you get an error, see [Rollback on error to successful deployment](rollback-on-error.md). * To specify how to handle resources that exist in the resource group but aren't defined in the template, see [Azure Resource Manager deployment modes](deployment-modes.md). * To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](./syntax.md).
-* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
+* For tips on resolving common deployment errors, see [Troubleshoot common Azure deployment errors with Azure Resource Manager](common-deployment-errors.md).
azure-resource-manager Template Tutorial Export Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-tutorial-export-template.md
Title: Tutorial - Export template from the Azure portal description: Learn how to use an exported template to complete your template development. Previously updated : 09/09/2020 Last updated : 08/17/2022
# Tutorial: Use exported template from the Azure portal
-In this tutorial series, you've created a template to deploy an Azure storage account. In the next two tutorials, you add an *App Service plan* and a *website*. Instead of creating templates from scratch, you learn how to export templates from the Azure portal and how to use sample templates from the [Azure Quickstart templates](https://azure.microsoft.com/resources/templates/). You customize those templates for your use. This tutorial focuses on exporting templates, and customizing the result for your template. It takes about **14 minutes** to complete.
+In this tutorial series, you create a template to deploy an Azure storage account. In the next two tutorials, you add an **App Service plan** and a **website**. Instead of creating templates from scratch, you learn how to export templates from the Azure portal and how to use sample templates from the [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/). You customize those templates for your use. This tutorial focuses on exporting templates and customizing the result for your template. It takes about **14 minutes** to complete.
## Prerequisites We recommend that you complete the [tutorial about outputs](template-tutorial-add-outputs.md), but it's not required.
-You must have Visual Studio Code with the Resource Manager Tools extension, and either Azure PowerShell or Azure CLI. For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
+You need to have Visual Studio Code with the Resource Manager Tools extension and either Azure PowerShell or Azure Command-Line Interface (CLI). For more information, see [template tools](template-tutorial-create-first-template.md#get-tools).
## Review template
-At the end of the previous tutorial, your template had the following JSON:
+At the end of the previous tutorial, your template had the following JSON file:
:::code language="json" source="~/resourcemanager-templates/get-started-with-templates/add-outputs/azuredeploy.json":::
This template works well for deploying storage accounts, but you might want to a
1. Sign in to the [Azure portal](https://portal.azure.com). 1. Select **Create a resource**.
-1. In **Search the Marketplace**, enter **App Service plan**, and then select **App Service plan**. Don't select **App Service plan (classic)**
+1. In **Search services and Marketplace**, enter **App Service Plan**, and then select **App Service Plan**.
1. Select **Create**.
-1. Enter:
+1. On the **Create App Service Plan** page, enter the following:
- - **Subscription**: select your Azure subscription.
- - **Resource Group**: Select **Create new** and then specify a name. Provide a different resource group name than the one you have been using in this tutorial series.
- - **Name**: enter a name for the App service plan.
- - **Operating System**: select **Linux**.
- - **Region**: select an Azure location. For example, **Central US**.
- - **Pricing tier**: to save costs, change the SKU to **Basic B1** (under Dev/Test).
+ - **Subscription**: Select your Azure subscription from the drop-down menu.
+ - **Resource Group**: Select **Create new** and then specify a name. Provide a different resource group name than the one you've been using in this tutorial series.
+ - **Name**: Enter a name for the App Service Plan.
+ - **Operating System**: Select **Linux**.
+ - **Region**: Select an Azure location from the drop-down menu, such as **Central US**.
+ - **Pricing Tier**: To save costs, select **Change size** and change the **SKU and size** to **Basic (B1)**, listed under **Dev / Test** for less demanding workloads.
![Resource Manager template export template portal](./media/template-tutorial-export-template/resource-manager-template-export.png) 1. Select **Review and create**.
This template works well for deploying storage accounts, but you might want to a
![Go to resource](./media/template-tutorial-export-template/resource-manager-template-export-go-to-resource.png)
-1. Select **Export template**.
+1. From the left menu, under **Automation**, select **Export template**.
![Resource Manager template export template](./media/template-tutorial-export-template/resource-manager-template-export-template.png)
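If you'd rather script this flow, the following Azure CLI sketch creates a comparable Linux App Service plan and then exports the resource group's template to a local file. The resource group name, plan name, region, and output file name are placeholders rather than values from the tutorial.

```azurecli
# Placeholder names; substitute your own resource group, plan, and region.
az group create --name myExportDemoGroup --location centralus

# Create a Linux App Service plan on the Basic B1 tier, mirroring the portal steps above.
az appservice plan create \
  --resource-group myExportDemoGroup \
  --name myAppServicePlan \
  --is-linux \
  --sku B1

# Export the resource group's template so you can reuse it in your own template.
az group export --name myExportDemoGroup > exported-template.json
```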
This template works well for deploying storage accounts, but you might want to a
![Resource Manager template export template exported template](./media/template-tutorial-export-template/resource-manager-template-exported-template.png) > [!IMPORTANT]
-> Typically, the exported template is more verbose than you might want when creating a template. For example, the SKU object in the exported template has five properties. This template works, but you could just use the `name` property. You can start with the exported template, and then modify it as you like to fit your requirements.
+> Typically, the exported template is more verbose than you might want when creating a template. For example, the SKU object in the exported template has five properties. This template works, but you could just use the `name` property. You can start with the exported template and then modify it as you like to fit your requirements.
## Revise existing template
-The exported template gives you most of the JSON you need, but you need to customize it for your template. Pay particular attention to differences in parameters and variables between your template and the exported template. Obviously, the export process doesn't know the parameters and variables that you've already defined in your template.
+The exported template gives you most of the JSON you need, but you have to customize it for your template. Pay particular attention to differences in parameters and variables between your template and the exported template. Obviously, the export process doesn't know the parameters and variables that you've already defined in your template.
The following example highlights the additions to your template. It contains the exported code plus some changes. First, it changes the name of the parameter to match your naming convention. Second, it uses your location parameter for the location of the app service plan. Third, it removes some of the properties where the default value is fine.
New-AzResourceGroupDeployment `
# [Azure CLI](#tab/azure-cli)
-To run this deployment command, you must have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
+To run this deployment command, you need to have the [latest version](/cli/azure/install-azure-cli) of Azure CLI.
```azurecli az deployment group create \
az deployment group create \
> [!NOTE]
-> If the deployment failed, use the `verbose` switch to get information about the resources being created. Use the `debug` switch to get more information for debugging.
+> If the deployment fails, use the `verbose` switch to get information about the resources you're creating. Use the `debug` switch to get more information for debugging.
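As an illustration, a deployment run with extra output might look like the following sketch. The resource group name is a placeholder, and `azuredeploy.json` is assumed to be the template file you've been editing in this series.

```azurecli
# Deploy with verbose output; replace --verbose with --debug for full request and response details.
az deployment group create \
  --resource-group myResourceGroup \
  --template-file azuredeploy.json \
  --verbose
```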
## Verify deployment
You can verify the deployment by exploring the resource group from the Azure por
1. Sign in to the [Azure portal](https://portal.azure.com). 1. From the left menu, select **Resource groups**. 1. Select the resource group you deployed to.
-1. The resource group contains a storage account and an App Service plan.
+1. The resource group contains a storage account and an App Service Plan.
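You can also verify the deployment from the command line. The following sketch lists the resources in the resource group; the group name is a placeholder.

```azurecli
# List the deployed resources (storage account and App Service plan) in table form.
az resource list --resource-group myResourceGroup --output table
```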
## Clean up resources If you're moving on to the next tutorial, you don't need to delete the resource group.
-If you're stopping now, you might want to clean up the resources you deployed by deleting the resource group.
+If you're stopping now, you might want to delete the resource group.
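If you prefer the command line, the following sketch removes the resource group and everything in it; the group name is a placeholder. Otherwise, use the portal steps below.

```azurecli
# Delete the resource group and all of its resources.
az group delete --name myResourceGroup
```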
-1. From the Azure portal, select **Resource group** from the left menu.
-2. Enter the resource group name in the **Filter by name** field.
-3. Select the resource group name.
+1. From the Azure portal, select **Resource groups** from the left menu.
+2. Type the resource group name in the **Filter for any field...** text field.
+3. Check the box next to **myResourceGroup** and select **myResourceGroup** or your resource group name.
4. Select **Delete resource group** from the top menu. ## Next steps
-You learned how to export a template from the Azure portal, and how to use the exported template for your template development. You can also use the Azure Quickstart templates to simplify template development.
+You learned how to export a template from the Azure portal and how to use the exported template for your template development. You can also use the Azure Quickstart Templates to simplify template development.
> [!div class="nextstepaction"]
-> [Use Azure Quickstart templates](template-tutorial-quickstart-template.md)
+> [Use Azure Quickstart Templates](template-tutorial-quickstart-template.md)
azure-video-analyzer Detect Motion Record Video Edge Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-analyzer/video-analyzer-docs/edge/detect-motion-record-video-edge-devices.md
To play the MP4 clip:
1. Sign in by using the credentials that were generated when you set up your Azure resources. 1. At the command prompt, go to the relevant directory. The default location is /var/media. You should see the MP4 files in the directory.
-1. Use [Secure Copy (SCP)](../../../virtual-machines/linux/copy-files-to-linux-vm-using-scp.md) to copy the files to your local machine.
+1. Use [Secure Copy (SCP)](../../../virtual-machines/copy-files-to-vm-using-scp.md) to copy the files to your local machine.
1. Play the files by using [VLC media player](https://www.videolan.org/vlc/) or any other MP4 player. ## Clean up resources
azure-video-indexer Create Account Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/create-account-portal.md
Last updated 06/10/2022
[!INCLUDE [Gate notice](./includes/face-limited-access.md)] --
-This tutorial walks you through the steps of creating an Azure Video Indexer account and its accompanying resources by using the Azure portal. The account created is an Azure Resource Manager (ARM) based account which is enabled with all Video Indexer features and capabilities. For information about different Azure Video Indexer account types, see the [Overview of account types](accounts-overview.md) topic.
+This tutorial walks you through the steps of creating an Azure Video Indexer account and its accompanying resources by using the Azure portal. The created account is an Azure Resource Manager (ARM) based account. For information about different Azure Video Indexer account types, see the [Overview of account types](accounts-overview.md) topic.
## Prerequisites
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
HCX cloud manager in Azure VMware solutions can now be accessible over a public
HCX with a public IP is especially useful when on-premises sites aren't connected to Azure via ExpressRoute or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation if a VPN is used for on-premises to cloud connections.
-For more information, please see [Enable HCX over the internet](/azure/azure-vmware/enable-hcx-access-over-internet)
+For more information, please see [Enable HCX over the internet](./enable-hcx-access-over-internet.md)
## July 7, 2022
For more information on this vCenter version, see [VMware vCenter Server 6.7 Upd
>This is non-disruptive and should not impact Azure VMware Services or workloads. During maintenance, various VMware alerts, such as _Lost network connectivity on DVPorts_ and _Lost uplink redundancy on DVPorts_, appear in vCenter Server and clear automatically as the maintenance progresses. ## Post update
-Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
+Once complete, newer versions of VMware components appear. If you notice any issues or have any questions, contact our support team by opening a support ticket.
azure-vmware Disaster Recovery Using Vmware Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/disaster-recovery-using-vmware-site-recovery-manager.md
While Microsoft aims to simplify VMware SRM and vSphere Replication installation
## Scale limitations
-To learn about the limits for the VMware Site Recovery Manager Add-On with the Azure VMware Soltuion, check the [Azure subscription and service limits, quotas, and constraints.](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-vmware-solution-limits)
+To learn about the limits for the VMware Site Recovery Manager Add-On with Azure VMware Solution, check [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-vmware-solution-limits).
## SRM licenses
VMware and Microsoft support teams will engage each other as needed to troublesh
- [vSphere Replication administration](https://docs.vmware.com/en/vSphere-Replication/8.2/com.vmware.vsphere.replication-admin.doc/GUID-35C0A355-C57B-430B-876E-9D2E6BE4DDBA.html) - [Pre-requisites and Best Practices for SRM installation](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html) - [Network ports for SRM](https://docs.vmware.com/en/Site-Recovery-Manager/8.3/com.vmware.srm.install_config.doc/GUID-499D3C83-B8FD-4D4C-AE3D-19F518A13C98.html)-- [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769)
+- [Network ports for vSphere Replication](https://kb.vmware.com/s/article/2087769)
azure-vmware Enable Managed Snat For Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-managed-snat-for-workloads.md
With this capability, you:
- Are unable to view connection logs.
- Have a limit of 128,000 concurrent connections.
-## Prerequisites
-- Azure Solution VMware private cloud
-- DNS Server configured on the NSX-T Datacenter
-
## Reference architecture
-The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge.
+The architecture shows Internet access outbound from your Azure VMware Solution private cloud using an Azure VMware Solution Managed SNAT Service.
:::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip-snat.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the SNAT Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip-snat-expanded.png":::
azure-vmware Enable Public Ip Nsx Edge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/enable-public-ip-nsx-edge.md
With this capability, you have the following features:
- DDoS Security protection against network traffic in and out of the Internet. - HCX Migration support over the Public Internet.
-## Reference architecture
+## Prerequisites
+- Azure VMware Solution private cloud
+- DNS Server configured on the NSX-T Datacenter
+
+## Reference architecture
The architecture shows Internet access to and from your Azure VMware Solution private cloud using a Public IP directly to the NSX Edge. :::image type="content" source="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip.png" alt-text="Diagram that shows architecture of Internet access to and from your Azure VMware Solution Private Cloud using a Public IP directly to the NSX Edge." border="false" lightbox="media/public-ip-nsx-edge/architecture-internet-access-avs-public-ip-expanded.png":::
backup Azure Backup Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-backup-glossary.md
Refer to [Azure Resource Manager documentation](../azure-resource-manager/manage
## Azure Disk Encryption (ADE)
-Refer to [Azure Disk Encryption documentation](../security/fundamentals/azure-disk-encryption-vms-vmss.md).
+Refer to [Azure Disk Encryption documentation](../virtual-machines/disk-encryption-overview.md).
## Backend storage / Cloud storage / Backup storage
backup Backup Azure Vms Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-encryption.md
Azure Backup can back up and restore Azure VMs using ADE with and without the Az
**Unmanaged** | Yes | Yes
**Managed** | Yes | Yes

-- Learn more about [ADE](../security/fundamentals/azure-disk-encryption-vms-vmss.md), [Key Vault](../key-vault/general/overview.md), and [KEKs](../virtual-machine-scale-sets/disk-encryption-key-vault.md#set-up-a-key-encryption-key-kek).
-- Read the [FAQ](../security/fundamentals/azure-disk-encryption-vms-vmss.md) for Azure VM disk encryption.
+- Learn more about [ADE](../virtual-machines/disk-encryption-overview.md), [Key Vault](../key-vault/general/overview.md), and [KEKs](../virtual-machine-scale-sets/disk-encryption-key-vault.md#set-up-a-key-encryption-key-kek).
+- Read the [FAQ](../virtual-machines/disk-encryption-overview.md) for Azure VM disk encryption.
### Limitations
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk.md
This article discusses how to:
- This feature currently **doesn't support backup using MARS agent**, and you may not be able to use a CMK-encrypted vault for the same. The MARS agent uses a user passphrase-based encryption. This feature also doesn't support backup of classic VMs.

-- This feature isn't related to [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md), which uses guest-based encryption of a VM's disk using BitLocker (for Windows) and DM-Crypt (for Linux).
+- This feature isn't related to [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md), which uses guest-based encryption of a VM's disk using BitLocker (for Windows) and DM-Crypt (for Linux).
- The Recovery Services vault can be encrypted only with keys stored in Azure Key Vault, located in the **same region**. Also, keys must be **RSA keys** only and should be in **enabled** state.
bastion Bastion Connect Vm Ssh Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-linux.md
Title: 'Connect to a Linux VM using SSH' description: Learn how to use Azure Bastion to connect to Linux VM using SSH.- Previously updated : 10/12/2021 Last updated : 08/18/2022
This article shows you how to securely and seamlessly create an SSH connection to your Linux VMs located in an Azure virtual network directly through the Azure portal. When you use Azure Bastion, your VMs don't require a client, agent, or additional software. You can also connect to a Linux VM using RDP. For information, see [Create an RDP connection to a Linux VM](bastion-connect-vm-rdp-linux.md).
-Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it is provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md).
+Azure Bastion provides secure connectivity to all of the VMs in the virtual network in which it's provisioned. Using Azure Bastion protects your virtual machines from exposing RDP/SSH ports to the outside world, while still providing secure access using RDP/SSH. For more information, see the [What is Azure Bastion?](bastion-overview.md) overview article.
-When connecting to a Linux virtual machine using SSH, you can use both username/password and SSH keys for authentication. You can connect to your VM with SSH keys by using either:
-
-* A private key that you manually enter
-* A file that contains the private key information
+When connecting to a Linux virtual machine using SSH, you can use both username/password and SSH keys for authentication.
The SSH private key must be in a format that begins with `"--BEGIN RSA PRIVATE KEY--"` and ends with `"--END RSA PRIVATE KEY--"`.
In order to make a connection, the following roles are required:
In order to connect to the Linux VM via SSH, you must have the following ports open on your VM: * Inbound port: SSH (22) ***or***
-* Inbound port: Custom value (you will then need to specify this custom port when you connect to the VM via Azure Bastion)
+* Inbound port: Custom value (you'll then need to specify this custom port when you connect to the VM via Azure Bastion)
> [!NOTE] > If you want to specify a custom port value, Azure Bastion must be configured using the Standard SKU. The Basic SKU does not allow you to specify custom ports. >
-## <a name="username"></a>Connect: Using username and password
+## Bastion connection page
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. In the [Azure portal](https://portal.azure.com), go to the virtual machine that you want to connect to. On the **Overview** page, select **Connect**, then select **Bastion** from the dropdown to open the Bastion connection page. You can also select **Bastion** from the left pane.
:::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected" lightbox="./media/bastion-connect-vm-ssh-linux/connect.png":::
-1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, enter the **Username** and **Password**.
+1. On the **Bastion** connection page, click the **Connection Settings** arrow to expand all the available settings. If you're using the Bastion **Standard** SKU, you have more available settings than with the Basic SKU.
+
+ :::image type="content" source="./media/bastion-connect-vm-ssh-linux/connection-settings.png" alt-text="Screenshot shows connection settings.":::
+
+1. Authenticate and connect using one of the methods in the following sections.
+
+ * [Username and password](#username-and-password)
+ * [Private key from local file](#private-key-from-local-file)
+ * [Password - Azure Key Vault](#passwordazure-key-vault)
+ * [Private key - Azure Key Vault](#private-keyazure-key-vault)
+
+## Username and password
+
+Use the following steps to authenticate using username and password.
++
+1. To authenticate using a username and password, configure the following settings:
+
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **Password** from the dropdown.
+ * **Username**: Enter the username.
+ * **Password**: Enter the **Password**.
+
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
+
+1. Click **Connect** to connect to the VM.
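If you'd rather connect with a native SSH client than the browser-based session described above, the Azure CLI `bastion` extension offers a comparable connection. The following sketch assumes the extension is installed and that your Bastion deployment supports native client connections; all resource names and IDs are placeholders.

```azurecli
# Install the Bastion extension for Azure CLI if you haven't already.
az extension add --name bastion

# Open an SSH session to the target VM through Bastion using password authentication.
az network bastion ssh \
  --name myBastionHost \
  --resource-group myResourceGroup \
  --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachines/myVM" \
  --auth-type password \
  --username azureuser

# To authenticate with a key file instead, swap the last two arguments for:
#   --auth-type ssh-key --username azureuser --ssh-key ~/.ssh/id_rsa
```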
+
+## Private key from local file
+
+Use the following steps to authenticate using an SSH private key from a local file.
++
+1. To authenticate using a private key from a local file, configure the following settings:
+
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **SSH Private Key from Local File** from the dropdown.
+ * **Local File**: Select the local file.
+ * **SSH Passphrase**: Enter the SSH passphrase if necessary.
+
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
+
+1. Click **Connect** to connect to the VM.
+
+## Password - Azure Key Vault
+
+Use the following steps to authenticate using a password from Azure Key Vault.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/password.png" alt-text="Screenshot shows Password authentication.":::
-1. Select **Connect** to connect to the VM.
-## <a name="privatekey"></a>Connect: Manually enter a private key
+1. To authenticate using a password from Azure Key Vault, configure the following settings:
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **Password from Azure Key Vault** from the dropdown.
+ * **Username**: Enter the username.
+ * **Subscription**: Select the subscription.
+ * **Azure Key Vault**: Select the Key Vault.
+ * **Azure Key Vault Secret**: Select the Key Vault secret containing the value of your SSH private key.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot of the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-linux/connect.png":::
-1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, enter the **Username** and **SSH Private Key**.
+ * If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/secrets/quick-create-powershell.md) and store your SSH private key as the value of a new Key Vault secret.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/ssh-private-key.png" alt-text="Screenshot of SSH Private Key authentication.":::
-1. Enter your private key into the text area **SSH Private Key** (or paste it directly).
-1. Select **Connect** to connect to the VM.
+ * Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
-## <a name="ssh"></a>Connect: Using a private key file
+ > [!NOTE]
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ >
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot depicts the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-linux/connect.png":::
-1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, enter the **Username** and **SSH Private Key from Local File**.
+1. Click **Connect** to connect to the VM.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/private-key-file.png" alt-text="Screenshot depicts SSH Private Key file.":::
+## Private key - Azure Key Vault
-1. Browse for the file, then select **Open**.
-1. Select **Connect** to connect to the VM. Once you click Connect, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
+Use the following steps to authenticate using a private key stored in Azure Key Vault.
-## <a name="akv"></a>Connect: Using a private key stored in Azure Key Vault
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. To authenticate using a private key stored in Azure Key Vault, configure the following settings:
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/connect.png" alt-text="Screenshot showing the overview for a virtual machine in Azure portal with Connect selected" lightbox="./media/bastion-connect-vm-ssh-linux/connect.png":::
-1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, enter the **Username** and select **SSH Private Key from Azure Key Vault**.
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **SSH Private Key from Azure Key Vault** from the dropdown.
+ * **Username**: Enter the username.
+ * **Subscription**: Select the subscription.
+ * **Azure Key Vault**: Select the Key Vault.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/ssh-key-vault.png" alt-text="Screenshot showing SSH Private Key from Azure Key Vault.":::
-1. Select the **Azure Key Vault** dropdown and select the resource in which you stored your SSH private key.
+ * If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/secrets/quick-create-powershell.md) and store your SSH private key as the value of a new Key Vault secret.
- * If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/secrets/quick-create-powershell.md) and store your SSH private key as the value of a new Key Vault secret.
+ * Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
- * Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
+ > [!NOTE]
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair. A minimal Azure CLI sketch for storing the key this way follows these steps.
+ >
- > [!NOTE]
- > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
- >
+ * **Azure Key Vault Secret**: Select the Key Vault secret containing the value of your SSH private key.
- :::image type="content" source="./media/bastion-connect-vm-ssh-linux/private-key-stored.png" alt-text="Screenshot showing Azure Key Vault." lightbox="./media/bastion-connect-vm-ssh-linux/private-key-stored.png":::
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
-1. Select the **Azure Key Vault Secret** dropdown and select the Key Vault secret containing the value of your SSH private key.
-1. Select **Connect** to connect to the VM. Once you click **Connect**, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
+1. Click **Connect** to connect to the VM.
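As mentioned in the note earlier in this section, you can store the private key as a Key Vault secret from the command line instead of the portal. The following Azure CLI sketch uploads a key file and grants a user the **Get** and **List** secret permissions; the vault name, secret name, key path, and user principal name are placeholders.

```azurecli
# Store the SSH private key file as a Key Vault secret (avoids the portal formatting issue noted above).
az keyvault secret set \
  --vault-name myKeyVault \
  --name mySshPrivateKey \
  --file ~/.ssh/id_rsa

# Grant the signed-in user Get and List permissions on secrets so the secret can be selected on the Bastion connection page.
az keyvault set-policy \
  --name myKeyVault \
  --upn user@contoso.com \
  --secret-permissions get list
```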
## Next steps
bastion Bastion Connect Vm Ssh Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/bastion-connect-vm-ssh-windows.md
Title: 'Connect to a Windows VM using SSH' description: Learn how to use Azure Bastion to connect to Windows VM using SSH.- Previously updated : 09/20/2021 Last updated : 08/18/2022
Azure Bastion provides secure connectivity to all of the VMs in the virtual netw
> If you want to create an SSH connection to a Windows VM, Azure Bastion must be configured using the Standard SKU. >
-When connecting to a Windows virtual machine using SSH, you can use both username/password and SSH keys for authentication. You can connect to your VM with SSH keys by using either:
-
-* A private key that you manually enter
-* A file that contains the private key information
+When connecting to a Windows virtual machine using SSH, you can use both username/password and SSH keys for authentication.
The SSH private key must be in a format that begins with `"--BEGIN RSA PRIVATE KEY--"` and ends with `"--END RSA PRIVATE KEY--"`. ## Prerequisites
-Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network.
+Make sure that you have set up an Azure Bastion host for the virtual network in which the VM resides. For more information, see [Create an Azure Bastion host](tutorial-create-host-portal.md). Once the Bastion service is provisioned and deployed in your virtual network, you can use it to connect to any VM in this virtual network.
To SSH to a Windows virtual machine, you must also ensure that:
-* Your Windows virtual machine is running Windows Server 2019 or later
+* Your Windows virtual machine is running Windows Server 2019 or later.
* You have OpenSSH Server installed and running on your Windows virtual machine. To learn how to do this, see [Install OpenSSH](/windows-server/administration/openssh/openssh_install_firstuse). * Azure Bastion has been configured to use the Standard SKU.
In order to connect to the Windows VM via SSH, you must have the following ports
Currently, Azure Bastion only supports connecting to Windows VMs via SSH using **OpenSSH**.
-## <a name="username"></a>Connect: Using username and password
+## Bastion connection page
+
+1. In the [Azure portal](https://portal.azure.com), go to the virtual machine that you want to connect to. On the **Overview** page, select **Connect**, then select **Bastion** from the dropdown to open the Bastion connection page. You can also select **Bastion** from the left pane.
+
+ :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png":::
+
+1. On the **Bastion** connection page, click the **Connection Settings** arrow to expand all the available settings. If you're using the Bastion **Standard** SKU, you have more available settings than with the Basic SKU.
+
+ :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connection-settings.png" alt-text="Screenshot shows connection settings.":::
+
+1. Authenticate and connect using one of the methods in the following sections.
+
+ * [Username and password](#username-and-password)
+ * [Private key from local file](#private-key-from-local-file)
+ * [Password - Azure Key Vault](#passwordazure-key-vault)
+ * [Private key - Azure Key Vault](#private-keyazure-key-vault)
+
+## Username and password
+
+Use the following steps to authenticate using username and password.
++
+1. To authenticate using a username and password, configure the following settings:
+
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **Password** from the dropdown.
+ * **Username**: Enter the username.
+ * **Password**: Enter the **Password**.
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot of overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png":::
+1. Click **Connect** to connect to the VM.
-1. After you select Bastion, select **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, expand the **Connection Settings** section and select **SSH**. If you plan to use an inbound port different from the standard SSH port (22), enter the **Port**.
+## Private key from local file
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connection-settings.png" alt-text="Screenshot showing the Connection settings." lightbox="./media/bastion-connect-vm-ssh-windows/connection-settings.png":::
+Use the following steps to authenticate using an SSH private key from a local file.
-1. Enter the **Username** and **Password**, and then select **Connect** to connect to the VM.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/authentication.png" alt-text="Screenshot of Password authentication." lightbox="./media/bastion-connect-vm-ssh-windows/authentication.png":::
+1. To authenticate using a private key from a local file, configure the following settings:
-## <a name="privatekey"></a>Connect: Manually enter a private key
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **SSH Private Key from Local File** from the dropdown.
+ * **Local File**: Select the local file.
+ * **SSH Passphrase**: Enter the SSH passphrase if necessary.
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot shows the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png":::
-1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, expand the **Connection Settings** section and select **SSH**. If you plan to use an inbound port different from the standard SSH port (22), enter the **Port**.
+1. Click **Connect** to connect to the VM.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connection-settings-manual.png" alt-text="Screenshot of Connection settings." lightbox="./media/bastion-connect-vm-ssh-windows/connection-settings-manual.png":::
+## Password - Azure Key Vault
-1. Enter the **Username** and **SSH Private Key**. Enter your private key into the text area **SSH Private Key** (or paste it directly).
+Use the following steps to authenticate using a password from Azure Key Vault.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/authentication-manual.png" alt-text="Screenshot of SSH key authentication." lightbox="./media/bastion-connect-vm-ssh-windows/authentication-manual.png":::
-1. Select **Connect** to connect to the VM.
+1. To authenticate using a password from Azure Key Vault, configure the following settings:
-## <a name="ssh"></a>Connect: Using a private key file
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **Password from Azure Key Vault** from the dropdown.
+ * **Username**: Enter the username.
+ * **Subscription**: Select the subscription.
+ * **Azure Key Vault**: Select the Key Vault.
+ * **Azure Key Vault Secret**: Select the Key Vault secret containing the value of your SSH private key.
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
+ * If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/secrets/quick-create-powershell.md) and store your SSH private key as the value of a new Key Vault secret.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot depicts the overview for a virtual machine in Azure portal with Connect selected" lightbox="./media/bastion-connect-vm-ssh-windows/connect.png":::
-1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, expand the **Connection Settings** section and select **SSH**. If you plan to use an inbound port different from the standard SSH port (22), enter the **Port**.
+ * Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connection-settings-file.png" alt-text="Screenshot depicts Connection settings." lightbox="./media/bastion-connect-vm-ssh-windows/connection-settings-file.png":::
+ > [!NOTE]
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ >
-1. Enter the **Username** and **SSH Private Key from Local File**. Browse for the file, then select **Open**.
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/authentication-file.png" alt-text="Screenshot depicts SSH key file." lightbox="./media/bastion-connect-vm-ssh-windows/authentication-file.png":::
+1. Click **Connect** to connect to the VM.
-1. Select **Connect** to connect to the VM. Once you click Connect, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
+## Private key - Azure Key Vault
-## <a name="akv"></a>Connect: Using a private key stored in Azure Key Vault
+Use the following steps to authenticate using a private key stored in Azure Key Vault.
-1. Open the [Azure portal](https://portal.azure.com). Go to the virtual machine that you want to connect to, then click **Connect** and select **Bastion** from the dropdown.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connect.png" alt-text="Screenshot is the overview for a virtual machine in Azure portal with Connect selected." lightbox="./media/bastion-connect-vm-ssh-windows/connect.png":::
-1. After you select Bastion, click **Use Bastion**. If you didn't provision Bastion for the virtual network, see [Configure Bastion](./quickstart-host-portal.md).
-1. On the **Connect using Azure Bastion** page, expand the **Connection Settings** section and select **SSH**. If you plan to use an inbound port different from the standard SSH port (22), enter the **Port**.
+1. To authenticate using a private key stored in Azure Key Vault, configure the following settings:
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/connection-settings-akv.png" alt-text="Screenshot showing Connection settings." lightbox="./media/bastion-connect-vm-ssh-windows/connection-settings-akv.png":::
+ * **Protocol**: Select SSH.
+ * **Port**: Input the port number. Custom port connections are available for the Standard SKU only.
+ * **Authentication type**: Select **SSH Private Key from Azure Key Vault** from the dropdown.
+ * **Username**: Enter the username.
+ * **Subscription**: Select the subscription.
+ * **Azure Key Vault**: Select the Key Vault.
-1. Enter the **Username** and select **SSH Private Key from Azure Key Vault**.
+ * If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/secrets/quick-create-powershell.md) and store your SSH private key as the value of a new Key Vault secret.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/ssh-key-vault.png" alt-text="Screenshot showing SSH Private Key from Azure Key Vault.":::
-1. Select the **Azure Key Vault** dropdown and select the resource in which you stored your SSH private key.
+ * Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
- * If you didn't set up an Azure Key Vault resource, see [Create a key vault](../key-vault/secrets/quick-create-powershell.md) and store your SSH private key as the value of a new Key Vault secret.
- * Make sure you have **List** and **Get** access to the secrets stored in the Key Vault resource. To assign and modify access policies for your Key Vault resource, see [Assign a Key Vault access policy](../key-vault/general/assign-access-policy-portal.md).
+ > [!NOTE]
+ > Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
+ >
- >[!NOTE]
- >Please store your SSH private key as a secret in Azure Key Vault using the **PowerShell** or **Azure CLI** experience. Storing your private key via the Azure Key Vault portal experience will interfere with the formatting and result in unsuccessful login. If you did store your private key as a secret using the portal experience and no longer have access to the original private key file, see [Update SSH key](../virtual-machines/extensions/vmaccess.md#update-ssh-key) to update access to your target VM with a new SSH key pair.
- >
+ * **Azure Key Vault Secret**: Select the Key Vault secret containing the value of your SSH private key.
- :::image type="content" source="./media/bastion-connect-vm-ssh-windows/private-key-stored.png" alt-text="Screenshot showing Azure Key Vault." lightbox="./media/bastion-connect-vm-ssh-windows/private-key-stored.png":::
+1. To work with the VM in a new browser tab, select **Open in new browser tab**.
-1. Select the **Azure Key Vault Secret** dropdown and select the Key Vault secret containing the value of your SSH private key.
-1. Select **Connect** to connect to the VM. Once you click **Connect**, SSH to this virtual machine will directly open in the Azure portal. This connection is over HTML5 using port 443 on the Bastion service over the private IP of your virtual machine.
+1. Click **Connect** to connect to the VM.
## Next steps
bastion Tutorial Create Host Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-create-host-portal.md
your resources using the following steps:
In this tutorial, you deployed Bastion to a virtual network and connected to a VM. You then removed the public IP address from the VM. Next, learn about and configure additional Bastion features. > [!div class="nextstepaction"]
-> [Bastion features and configuration settings](configuration-settings.md)<br>
+> [Bastion features and configuration settings](configuration-settings.md)
+
+> [!div class="nextstepaction"]
> [Bastion - VM connections and features](vm-about.md)
batch Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/disk-encryption.md
- Title: Create a pool with disk encryption enabled
-description: Learn how to use disk encryption configuration to encrypt nodes with a platform-managed key.
- Previously updated : 04/16/2021---
-# Create a pool with disk encryption enabled
-
-When you create an Azure Batch pool using [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration), you can encrypt compute nodes in the pool with a platform-managed key by specifying the disk encryption configuration.
-
-This article explains how to create a Batch pool with disk encryption enabled.
-
-## Why use a pool with disk encryption configuration?
-
-With a Batch pool, you can access and store data on the OS and temporary disks of the compute node. Encrypting the server-side disk with a platform-managed key will safeguard this data with low overhead and convenience.
-
-Batch will apply one of these disk encryption technologies on compute nodes, based on pool configuration and regional supportability.
-- [Managed disk encryption at rest with platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys)
-- [Encryption at host using a platform-managed Key](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data)
-- [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md)
-
-You won't be able to specify which encryption method will be applied to the nodes in your pool. Instead, you provide the target disks you want to encrypt on their nodes, and Batch can choose the appropriate encryption method, ensuring the specified disks are encrypted on the compute node.
-
-> [!IMPORTANT]
-> If you are creating your pool with a Linux [custom image](batch-sig-images.md), you can only enable disk encryption only if your pool is using an [Encryption At Host Supported VM size](../virtual-machines/disk-encryption.md#supported-vm-sizes).
-> Encryption At Host is not currently supported on User Subscription Pools until the feature becomes [publicly available in Azure](../virtual-machines/disks-enable-host-based-encryption-portal.md#prerequisites).
-
-## Azure portal
-
-When creating a Batch pool in the the Azure portal, select either **TemporaryDisk** or **OsAndTemporaryDisk** under **Disk Encryption Configuration**.
--
-After the pool is created, you can see the disk encryption configuration targets in the pool's **Properties** section.
--
-## Examples
-
-The following examples show how to encrypt the OS and temporary disks on a Batch pool using the Batch .NET SDK, the Batch REST API, and the Azure CLI.
-
-### Batch .NET SDK
-
-```csharp
-pool.VirtualMachineConfiguration.DiskEncryptionConfiguration = new DiskEncryptionConfiguration(
- targets: new List<DiskEncryptionTarget> { DiskEncryptionTarget.OsDisk, DiskEncryptionTarget.TemporaryDisk }
- );
-```
-
-### Batch REST API
-
-REST API URL:
-
-```
-POST {batchURL}/pools?api-version=2020-03-01.11.0
-client-request-id: 00000000-0000-0000-0000-000000000000
-```
-
-Request body:
-
-```
-"pool": {
- "id": "pool2",
- "vmSize": "standard_a1",
- "virtualMachineConfiguration": {
- "imageReference": {
- "publisher": "Canonical",
- "offer": "UbuntuServer",
- "sku": "18.04-LTS"
- },
- "diskEncryptionConfiguration": {
- "targets": [
- "OsDisk",
- "TemporaryDisk"
- ]
- }
- "nodeAgentSKUId": "batch.node.ubuntu 18.04"
- },
- "resizeTimeout": "PT15M",
- "targetDedicatedNodes": 5,
- "targetLowPriorityNodes": 0,
- "taskSlotsPerNode": 3,
- "enableAutoScale": false,
- "enableInterNodeCommunication": false
-}
-```
-
-### Azure CLI
-
-```azurecli-interactive
-az batch pool create \
- --id diskencryptionPool \
- --vm-size Standard_DS1_V2 \
- --target-dedicated-nodes 2 \
- --image canonical:ubuntuserver:18.04-LTS \
- --node-agent-sku-id "batch.node.ubuntu 18.04" \
- --disk-encryption-targets OsDisk TemporaryDisk
-```
-
-## Next steps
-- Learn more about [server-side encryption of Azure Disk Storage](../virtual-machines/disk-encryption.md).
-- For an in-depth overview of Batch, see [Batch service workflow and resources](batch-service-workflow-features.md).
+
+ Title: Create a pool with disk encryption enabled
+description: Learn how to use disk encryption configuration to encrypt nodes with a platform-managed key.
+ Last updated : 04/16/2021
+ms.devlang: csharp
+++
+# Create a pool with disk encryption enabled
+
+When you create an Azure Batch pool using [Virtual Machine Configuration](nodes-and-pools.md#virtual-machine-configuration), you can encrypt compute nodes in the pool with a platform-managed key by specifying the disk encryption configuration.
+
+This article explains how to create a Batch pool with disk encryption enabled.
+
+## Why use a pool with disk encryption configuration?
+
+With a Batch pool, you can access and store data on the OS and temporary disks of the compute node. Encrypting the server-side disk with a platform-managed key will safeguard this data with low overhead and convenience.
+
+Batch will apply one of these disk encryption technologies on compute nodes, based on pool configuration and regional supportability.
+
+- [Managed disk encryption at rest with platform-managed keys](../virtual-machines/disk-encryption.md#platform-managed-keys)
+- [Encryption at host using a platform-managed Key](../virtual-machines/disk-encryption.md#encryption-at-hostend-to-end-encryption-for-your-vm-data)
+- [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md)
+
+You won't be able to specify which encryption method will be applied to the nodes in your pool. Instead, you provide the target disks you want to encrypt on their nodes, and Batch can choose the appropriate encryption method, ensuring the specified disks are encrypted on the compute node.
+
+> [!IMPORTANT]
+> If you are creating your pool with a Linux [custom image](batch-sig-images.md), you can only enable disk encryption if your pool is using an [Encryption At Host Supported VM size](../virtual-machines/disk-encryption.md#supported-vm-sizes).
+> Encryption At Host is not currently supported on User Subscription Pools until the feature becomes [publicly available in Azure](../virtual-machines/disks-enable-host-based-encryption-portal.md#prerequisites).
+
+## Azure portal
+
+When creating a Batch pool in the Azure portal, select either **TemporaryDisk** or **OsAndTemporaryDisk** under **Disk Encryption Configuration**.
++
+After the pool is created, you can see the disk encryption configuration targets in the pool's **Properties** section.
++
+## Examples
+
+The following examples show how to encrypt the OS and temporary disks on a Batch pool using the Batch .NET SDK, the Batch REST API, and the Azure CLI.
+
+### Batch .NET SDK
+
+```csharp
+pool.VirtualMachineConfiguration.DiskEncryptionConfiguration = new DiskEncryptionConfiguration(
+ targets: new List<DiskEncryptionTarget> { DiskEncryptionTarget.OsDisk, DiskEncryptionTarget.TemporaryDisk }
+ );
+```
+
+### Batch REST API
+
+REST API URL:
+
+```
+POST {batchURL}/pools?api-version=2020-03-01.11.0
+client-request-id: 00000000-0000-0000-0000-000000000000
+```
+
+Request body:
+
+```
+"pool": {
+ "id": "pool2",
+ "vmSize": "standard_a1",
+ "virtualMachineConfiguration": {
+ "imageReference": {
+ "publisher": "Canonical",
+ "offer": "UbuntuServer",
+ "sku": "18.04-LTS"
+ },
+ "diskEncryptionConfiguration": {
+ "targets": [
+ "OsDisk",
+ "TemporaryDisk"
+ ]
+ }
+ "nodeAgentSKUId": "batch.node.ubuntu 18.04"
+ },
+ "resizeTimeout": "PT15M",
+ "targetDedicatedNodes": 5,
+ "targetLowPriorityNodes": 0,
+ "taskSlotsPerNode": 3,
+ "enableAutoScale": false,
+ "enableInterNodeCommunication": false
+}
+```
+
+### Azure CLI
+
+```azurecli-interactive
+az batch pool create \
+ --id diskencryptionPool \
+ --vm-size Standard_DS1_V2 \
+ --target-dedicated-nodes 2 \
+ --image canonical:ubuntuserver:18.04-LTS \
+ --node-agent-sku-id "batch.node.ubuntu 18.04" \
+ --disk-encryption-targets OsDisk TemporaryDisk
+```
+
+## Next steps
+
+- Learn more about [server-side encryption of Azure Disk Storage](../virtual-machines/disk-encryption.md).
+- For an in-depth overview of Batch, see [Batch service workflow and resources](batch-service-workflow-features.md).
center-sap-solutions Register Existing System https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/register-existing-system.md
In this how-to guide, you'll learn how to register an existing SAP system with *
You can register SAP systems with ACSS that run on the following configurations:

- SAP NetWeaver or ABAP stacks
-- SUSE and RHEL Linux operating systems
+- Windows, SUSE and RHEL Linux operating systems
- HANA, DB2, SQL Server, Oracle, Max DB, and SAP ASE databases

The following SAP system configurations aren't supported in ACSS:

-- Windows operating system
- HANA Large Instance (HLI)
- Systems with HANA Scale-out configuration
- Java stack
center-sap-solutions Start Stop Sap Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/center-sap-solutions/start-stop-sap-systems.md
Through the Azure portal, you can start and stop:
- Single-Server
- High Availability (HA)
- Distributed Non-HA
-- SAP systems that run on Linux operating systems (OS).
-- SAP HA systems that use Pacemaker clustering software. Other certified cluster software isn't currently supported.
+- SAP systems that run on Windows and Linux operating systems (OS).
+- SAP HA systems that use Linux Pacemaker clustering software and Windows Server Failover Clustering (WSFC). Other certified cluster software isn't currently supported.
## Prerequisites
cognitive-services Speech Ssml Phonetic Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-ssml-phonetic-sets.md
Phonetic alphabets are used with the [Speech Synthesis Markup Language (SSML)](s
Speech service supports the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet) suprasegmentals that are listed here. You set `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation).
-|`ipa` | Symbol |
-|-|-|
-| `ˈ` | Primary stress |
-| `ˌ` | Secondary stress |
-| `.` | Syllable boundary |
-| `ː` | Long |
-| `‿` | Linking |
-> [!NOTE]
-> long suprasegmental symbol is `ː` not the punctuation colon `:`.
+|`ipa` | Symbol | Note|
+|-|-|-|
+| `ˈ` | Primary stress | Don’t use a single quote ( ‘ or ' ) even though it looks similar. |
+| `ˌ` | Secondary stress | Don’t use a comma ( , ) even though it looks similar. |
+| `.` | Syllable boundary | |
+| `ː` | Long | Don’t use a punctuation colon ( : ) even though it looks similar. |
+| `‿` | Linking | |
+
+> [!TIP]
+> You can use [the international phonetic alphabet keyboard](https://www.internationalphoneticalphabet.org/html-ipa-keyboard-v1/keyboard/) to create the correct `ipa` suprasegmentals.
For some locales, Speech service defines its own phonetic alphabets, which ordinarily map to the [International Phonetic Alphabet (IPA)](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet). The eight locales that support the Microsoft Speech API (SAPI, or `sapi`) are en-US, fr-FR, de-DE, es-ES, ja-JP, zh-CN, zh-HK, and zh-TW. For those eight locales, you set `sapi` or `ipa` as the `alphabet` in [SSML](speech-synthesis-markup.md#use-phonemes-to-improve-pronunciation). See the sections in this article for the phonemes that are specific to each locale.
+> [!NOTE]
+> The following tables list viseme IDs corresponding to phonemes for different locales. When viseme ID is 0, it indicates silence.
+ ## ar-EG/ar-SA [!INCLUDE [ar-EG](./includes/phonetic-sets/text-to-speech/ar-eg.md)]
cognitive-services Create Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/custom-translator/v2-preview/how-to/create-manage-workspace.md
Previously updated : 08/15/2022 Last updated : 08/17/2022
> [!NOTE] > Region must match the region that was selected during the resource creation. You can use **KEY 1** or **KEY 2**.
- :::image type="content" source="../media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
- > [!NOTE] > All uploaded customer content, custom model binaries, custom model configurations, and training logs are kept encrypted-at-rest in the selected region.
+ :::image type="content" source="../media/quickstart/resource-key.png" alt-text="Screenshot illustrating the resource key.":::
+ :::image type="content" source="../media/quickstart/create-workspace-1.png" alt-text="Screenshot illustrating workspace creation."::: ## Manage workspace settings
cognitive-services Request Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/request-limits.md
Previously updated : 08/15/2022 Last updated : 08/17/2022
These limits are restricted to Microsoft's standard translation models. Custom t
The Translator has a maximum latency of 15 seconds using standard models and 120 seconds when using custom models. Typically, responses *for text within 100 characters* are returned in 150 milliseconds to 300 milliseconds. The custom translator models have similar latency characteristics on sustained request rate and may have a higher latency when your request rate is intermittent. Response times will vary based on the size of the request and language pair. If you don't receive a translation or an [error response](./reference/v3-0-reference.md#errors) within that timeframe, check your code, your network connection, and retry.
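
As an illustration of that guidance, the following is a minimal C# sketch that calls the `translate` operation with a client-side timeout and a single retry. The endpoint, headers, and request body follow the Translator v3 reference; the key, region, text, and retry policy are assumptions for the example rather than values from this article.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class TranslateWithRetry
{
    // Placeholder resource values - replace with your Translator key and region.
    private const string Key = "<your-translator-key>";
    private const string Region = "<your-resource-region>";

    static async Task Main()
    {
        // Give up on a single attempt after the 15-second standard-model latency ceiling.
        using var client = new HttpClient { Timeout = TimeSpan.FromSeconds(15) };

        for (int attempt = 1; attempt <= 2; attempt++)
        {
            try
            {
                using var request = new HttpRequestMessage(
                    HttpMethod.Post,
                    "https://api.cognitive.microsofttranslator.com/translate?api-version=3.0&from=en&to=de");
                request.Headers.Add("Ocp-Apim-Subscription-Key", Key);
                request.Headers.Add("Ocp-Apim-Subscription-Region", Region);
                request.Content = new StringContent(
                    "[{\"Text\": \"Hello, what is your name?\"}]", Encoding.UTF8, "application/json");

                HttpResponseMessage response = await client.SendAsync(request);
                Console.WriteLine(await response.Content.ReadAsStringAsync());
                return;
            }
            catch (TaskCanceledException)
            {
                // The client-side timeout elapsed before a response arrived.
                Console.WriteLine($"Attempt {attempt} timed out.");
            }
        }

        Console.WriteLine("No response within the expected window; check your code and network connection.");
    }
}
```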
-## Sentence length limits
-
-When you're using the [BreakSentence](./reference/v3-0-break-sentence.md) function, sentence length is limited to 275 characters. There are exceptions for these languages:
-
-| Language | Code | Character limit |
-|-||--|
-| Chinese | `zh` | 166 |
-| German | `de` | 800 |
-| Italian | `it` | 800 |
-| Japanese | `ja` | 166 |
-| Portuguese | `pt` | 800 |
-| Spanish | `es` | 800 |
-| Thai | `th` | 180 |
-
-> [!NOTE]
-> This limit doesn't apply to translations.
- ## Next steps * [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/translator-text-api/)
cognitive-services Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Translator/sovereign-clouds.md
Previously updated : 08/15/2022 Last updated : 08/17/2022
curl -X POST "https://api.translator.azure.cn/translate?api-version=3.0&from=en&
#### Document Translation custom endpoint
-#### Document Translation custom endpoint
- ```http
-https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.us/translator/text/batch/v1.0
+https://<NAME-OF-YOUR-RESOURCE>.cognitiveservices.azure.cn/translator/text/batch/v1.0
``` ### Example batch translation request
cognitive-services Role Based Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/role-based-access-control.md
- Title: Language service role-based access control (RBAC)-
-description: Use this article to learn about access controls for Azure Cognitive Service for Language
------ Previously updated : 08/02/2022----
-# Language role-based access control
-
-Azure Cognitive Service for Language supports Azure role-based access control (Azure RBAC), an authorization system for managing individual access to Azure resources. Using Azure RBAC, you assign different team members different levels of permissions for your projects authoring resources. See the [Azure RBAC documentation](/azure/role-based-access-control/) for more information.
-
-## Enable Azure Active Directory authentication
-
-To use Azure RBAC, you must enable Azure Active Directory authentication. You can [create a new resource with a custom subdomain](../../authentication.md) or [create a custom subdomain for your existing resource](../../cognitive-services-custom-subdomains.md#how-does-this-impact-existing-resources).
-
-## Add role assignment to Language Authoring resource
-
-Azure RBAC can be assigned to a Language Authoring resource. To grant access to an Azure resource, you add a role assignment.
-1. In the [Azure portal](https://ms.portal.azure.com/), select **All services**.
-2. Select **Cognitive Services**, and navigate to your specific Language Authoring resource.
-
- > [!NOTE]
- > You can also set up Azure RBAC for whole resource groups, subscriptions, or management groups. Do this by selecting the desired scope level and then navigating to the desired item. For example, selecting **Resource groups** and then navigating to a specific resource group.
-
-1. Select **Access control (IAM)** on the left navigation pane.
-1. Select **Add**, then select **Add role assignment**.
-1. On the **Role** tab on the next screen, select a role you want to add.
-1. On the **Members** tab, select a user, group, service principal, or managed identity.
-1. On the **Review + assign** tab, select **Review + assign** to assign the role.
-
-Within a few minutes, the target will be assigned the selected role at the selected scope. For help with these steps, see [Assign Azure roles using the Azure portal](/azure/role-based-access-control/role-assignments-portal).
-
-## Language role types
-
-Use the following table to determine access needs for your Language projects.
-
-These custom roles only apply to Language authoring resources.
-
-> [!NOTE]
-> * All prebuilt capabilities are accessible to all roles.
-> * The "Owner" and "Contributor" roles take priority over custom language roles.
-> * Azure Active Directory (Azure AD) is only used for custom Language roles.
-
-### Cognitive Services Language reader
-
-A user that should only be validating and reviewing the Language apps, typically a tester to ensure the application is performing well before deploying the project. They may want to review the application’s assets to notify the app developers of any changes that need to be made, but do not have direct access to make them. Readers will have access to view the evaluation results.
--
- :::column span="":::
- **Capabilities**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * Read
- * Test
- :::column-end:::
- :::column span="":::
- * All GET APIs under:
- * [Language Authoring CLU APIs](/rest/api/language/conversational-analysis-authoring)
- * [Language Authoring Text Analysis APIs](/rest/api/language/text-analysis-authoring)
- * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
- * Only the `TriggerExportProjectJob` POST operation under:
- * [Language Authoring CLU export API](/rest/api/language/conversational-analysis-authoring/export?tabs=HTTP)
- * [Language Authoring Text Analysis export API](/rest/api/language/text-analysis-authoring/export?tabs=HTTP)
- * Only Export POST operation under:
- * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects/export)
- * All the Batch testing web APIs
- *[Language Runtime CLU APIs](/rest/api/language/conversation-analysis-runtime)
- *[Language Runtime Text Analysis APIs](/rest/api/language/text-analysis-runtime)
- :::column-end:::
-
-### Cognitive Services Language writer
-
-A user that is responsible for building and modifying an application, as a collaborator in a larger team. The collaborator can modify the Language apps in any way, train those changes, and validate/test those changes in the portal. However, this user shouldn’t have access to deploying this application to the runtime, as they may accidentally reflect their changes in production. They also shouldn’t be able to delete the application or alter its prediction resources and endpoint settings (assigning or unassigning prediction resources, making the endpoint public). This restricts this role from altering an application currently being used in production. They may also create new applications under this resource, but with the restrictions mentioned.
-
- :::column span="":::
- **Capabilities**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * All functionalities under Cognitive Services Language Reader.
- * Ability to:
- * Train
- * Write
- :::column-end:::
- :::column span="":::
- * All APIs under Language reader
- * All POST, PUT and PATCH APIs under:
- * [Language Authoring CLU APIs](/rest/api/language/conversational-analysis-authoring)
- * [Language Authoring Text Analysis APIs](/rest/api/language/text-analysis-authoring)
- * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
- Except for
- * Delete deployment
- * Delete trained model
- * Delete project
- * Deploy model
- :::column-end:::
-
-### Cognitive Services Language owner
-
-These users are the gatekeepers for the Language applications in production environments. They should have full access to any of the underlying functions and thus can view everything in the application and have direct access to edit any changes for both authoring and runtime environments
-
- :::column span="":::
- **Functionality**
- :::column-end:::
- :::column span="":::
- **API Access**
- :::column-end:::
- :::column span="":::
- * All functionalities under Cognitive Services Language Writer
- * Deploy
- * Delete
- :::column-end:::
- :::column span="":::
- * All APIs available under:
- * [Language Authoring CLU APIs](/rest/api/language/conversational-analysis-authoring)
- * [Language Authoring Text Analysis APIs](/rest/api/language/text-analysis-authoring)
- * [Question Answering Projects](/rest/api/cognitiveservices/questionanswering/question-answering-projects)
-
- :::column-end:::
cognitive-services Conversation Summarization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/how-to/conversation-summarization.md
Previously updated : 04/27/2022 Last updated : 08/18/2022 # How to use conversation summarization (preview) + > [!IMPORTANT] > The conversation summarization feature is a preview capability provided “AS IS” and “WITH ALL FAULTS.” As such, Conversation Summarization (preview) should not be implemented or deployed in any production use. The customer is solely responsible for any use of conversation summarization.
There's another feature in Azure Cognitive Service for Language named [document
## Submitting data
-> [!NOTE]
-> * To use conversation summarization, you must [submit an online request and have it approved](https://aka.ms/applyforconversationsummarization/).
-> * Conversation summarization is only available through Language resources in the following regions:
-> * North Europe
-> * East US
-> * UK South
-> * Conversation summarization is only available using:
-> * REST API
-> * Python
- You submit documents to the API as strings of text. Analysis is performed upon receipt of the request. Because the API is [asynchronous](../../concepts/use-asynchronously.md), there may be a delay between sending an API request and receiving the results. For information on the size and number of requests you can send per minute and second, see the data limits below. When you use this feature, the API results are available for 24 hours from the time the request was ingested, and is indicated in the response. After this time period, the results are purged and are no longer available for retrieval.
Conversation summarization also enables you to get summaries from speech transcr
## Getting conversation summarization results + When you get results from language detection, you can stream the results to an application or save the output to a file on the local system. The following text is an example of content you might submit for summarization. This is only an example, the API can accept much longer input text. See [data limits](../../concepts/data-limits.md) for more information.
In the above example, the API might return the following summarized sentences:
## See also
-* [Summarization overview](../overview.md)
+* [Summarization overview](../overview.md)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Previously updated : 06/03/2022 Last updated : 08/18/2022
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Previously updated : 08/15/2022 Last updated : 08/18/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: using document summarization and conversation summarization (preview) + ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
communication-services Sub Eligibility Number Capability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/sub-eligibility-number-capability.md
Additional details on eligible subscription types are as follows:
| Number Type | Eligible Azure Agreement Type | | :- | :-- | | Toll-Free and Local (Geographic) | Modern Customer Agreement (Field and Customer Led), Modern Partner Agreement (CSP), Enterprise Agreement* |
-| Short-Codes | Modern Customer Agreement (Field Led) and Enterprise Agreement Only |
+| Short-Codes | Modern Customer Agreement (Field Led) and Enterprise Agreement Only** |
\* Allowing the purchase of Italian phone numbers for CSP and LSP customers is planned only for General Availability launch.
+\** Applications from all other subscription types will be reviewed and approved on a case-by-case basis.
+ ## Number capabilities The capabilities that are available to you depend on the country that you're operating within (your Azure billing address location), your use case, and the phone number type that you've selected. These capabilities vary by country due to regulatory requirements.
communication-services Program Brief Guidelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/program-brief-guidelines.md
# Short Code Program Brief Filling Guidelines+ [!INCLUDE [Short code eligibility notice](../../includes/public-preview-include-short-code-eligibility.md)]
In these fields, you must provide a URL of the website where customers will disc
##### Examples: **SMS**
-Contoso.com: Announcing our Holiday Sale. Reply YES to save 5% on your next Contoso purchase. Txt OFF to stop, HELP for terms and conditions.
+Contoso.com: Announcing our Holiday Sale. Reply YES to save 5% on your next Contoso purchase. Msg&Data Rates May Apply. Txt HELP for terms&conditions. Txt STOP to opt-out.
**Web opt-in**
Contoso.com: Announcing our Holiday Sale. Reply YES to save 5% on your next Cont
**IVR**
-To sign up for our last-minute travel deals, Press 1. Message and data rates may apply Visit margiestravel.com for privacy and terms and conditions.
+*Example 1:*
+
+**Agent** - To sign up for our last-minute travel deals, Press 1. Message and data rates may apply. Visit margiestravel.com for privacy and terms and conditions.
+
+*Example 2:*
+**Contoso bot** - Would you like to receive appointment reminders through text message to the phone number you've saved in your account? Messages and data rates may apply. Say YES to opt in, or say NO to skip.
+**End-User** - YES
## Contact Details ### Point of contact email address
In this field, you are required to provide information on traffic spikes and the
Example: Traffic spikes are expected for delivery notifications program around holidays like Christmas. ## Templates+
+Azure Communication Services offers an opt-out management service for short codes that lets customers configure responses to the mandatory keywords STOP/START/HELP. Before your short code is provisioned, you will be asked for your preference to manage opt-outs. If you opt in, the opt-out management service automatically replies to the STOP/START/HELP keywords with the Opt-out, Opt-in, and Help responses from your program brief.
+ ### Opt-in confirmation message CTIA requires that the customer must actively opt into short code programs by sending a keyword from their mobile device to the short code, providing consent on website, IVR, etc.
Message senders are required to respond to messages containing the HELP keyword
In this field, you are required to provide a sample of the response message that is sent to the customer upon receiving the HELP keyword.
-**Example:** Thanks for texting Contoso! Call 1-800-800-8000 for support.
+**Example:** Contoso Appointment reminders: Get help at support@contoso.com or 1-800 123 4567. Msg&Data Rates May Apply. Txt HELP for help. Txt STOP to opt-out.
### Opt-out message Message senders are required to have mechanisms to opt customers out of the program and respond to messages containing the STOP keyword with the program name and confirmation that no additional messages will be sent. In this field, you are required to provide a sample of the response message that is sent to the customer upon receiving the STOP keyword.
-**Example:** Contoso Alerts: You’re opted out and will receive no further messages.
+**Example:** Contoso Appointment reminders: You’re opted out and will receive no further messages.
Please see our [guide on opt-outs](./sms-faq.md#how-does-azure-communication-services-handle-opt-outs-for-toll-free-numbers) to learn about how Azure Communication Services handles opt-outs.
communication-services Sms Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/sms/sms-faq.md
Opt-outs for US toll-free numbers are mandated and enforced by US carriers and c
- The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications. ### How does Azure Communication Services handle opt-outs for short codes?-- **STOP** - If a text message recipient wishes to opt-out, they can send ΓÇÿSTOPΓÇÖ to the short code. Azure Communication Services sends the following default response for STOP: *"You have successfully been unsubscribed to messages from this number. Reply START to resubscribe"*-- **START/UNSTOP** - If the recipient wishes to resubscribe to text messages from a toll-free number, they can send ΓÇÿSTARTΓÇÖ or ΓÇÿUNSTOP to the toll-free number. Azure Communication Service sends the following default response for START/UNSTOP: *ΓÇ£You have successfully been re-subscribed to messages from this number. Reply STOP to unsubscribe.ΓÇ¥*-- Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with status message as ΓÇ£Sender blocked for given recipient.ΓÇ¥-- The STOP, UNSTOP and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
+Azure Communication Services offers an opt-out management service for short codes that lets customers configure responses to the mandatory keywords STOP/START/HELP. Before your short code is provisioned, you will be asked for your preference to manage opt-outs. If you opt in to use it, the opt-out management service automatically replies to the STOP/START/HELP keywords with the Opt-out, Opt-in, and Help responses from your program brief.
+
+*Example:*
+- **STOP** - If a text message recipient wishes to opt out, they can send ‘STOP’ to the short code. Azure Communication Services sends the following default response for STOP: *"Contoso Alerts: You’re opted out and will receive no further messages."*
+- **START/UNSTOP** - If the recipient wishes to resubscribe to text messages from the short code, they can send ‘START’ or ‘UNSTOP’ to the short code. Azure Communication Services sends the following default response for START/UNSTOP: *“Contoso Promo Alerts: 3 msgs/week. Msg&Data Rates May Apply. Reply HELP for help. Reply STOP to opt-out.”*
+- **HELP** - If the recipient wishes to get help with your service, they can send 'HELP' to the short code. Azure Communication Services sends the response you configured in the program brief for HELP: *"Thanks for texting Contoso! Call 1-800-800-8000 for support."*
+
+Azure Communication Services will detect the STOP message and block all further messages to the recipient. The delivery report will indicate a failed delivery with the status message “Sender blocked for given recipient.” The STOP, UNSTOP, and START messages will be relayed back to you. Azure Communication Services encourages you to monitor and implement these opt-outs to ensure that no further message send attempts are made to recipients who have opted out of your communications.
## Short codes ### What is the eligibility to apply for a short code?
-Short Code availability is currently restricted to paid Azure enterprise subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md).
+Short Code availability is currently restricted to paid Azure subscriptions that have a billing address in the United States. Short Codes cannot be acquired on trial accounts or using Azure free credits. For more details, check out our [subscription eligibility page](../numbers/sub-eligibility-number-capability.md).
### Can you text to a toll-free number from a short code? No. Texting to a toll-free number from a short code is not supported. You also wont be able to receive a message from a toll-free number to a short code.
No. Texting to a toll-free number from a short code is not supported. You also w
Short codes do not fall under E.164 formatting guidelines and do not have a country code, or a "+" sign prefix. In the SMS API request, your short code should be passed as the 5-6 digit number you see in your short codes blade without any prefix. ### How long does it take to get a short code? What happens after a short code program brief application is submitted?
-Once you have submitted the short code program brief application in the Azure portal, Azure Communication Services works with the aggregators to get your application approved by each mobile carrier. This process generally takes 8-12 weeks.
+Once you have submitted the short code program brief application in the Azure portal, the service desk works with the aggregators to get your application approved by each wireless carrier. This process generally takes 8-12 weeks. We'll notify you of any updates and the status of your application via the email address you provide in the application. For more questions about your submitted application, email acstnrequest@microsoft.com.
+
+## Toll-Free Verification
+### What is toll-free verification and why is it mandatory?
+The toll-free verification process ensures that your services running on toll-free numbers (TFNs) comply with carrier policies and industry best practices. It also provides relevant service information to reduce the likelihood of false-positive filtering and wrongful spam blocks.
+
+From September 30, 2022 onward, all new TFNs must complete the toll-free verification process. All existing TFNs must complete the toll-free verification process by September 30, 2022. If unverified, TFNs may face SMS service interruptions. Verification can take up to 2-3 weeks.
+
+This decision has been made to ensure that the toll-free messaging channel is aligned with both short code and 10 DLC, whereby all services are reviewed. It also ensures that the sending brand and the type of traffic your messaging channels deliver is known, documented, and verified.
+### How do I submit a toll-free verification?
+To submit the toll-free verification form, go to the Azure Communication Services resource that your toll-free number is associated with in the Azure portal and navigate to the Phone numbers blade. Select the toll-free verification application link displayed in the infobox at the top of the Phone numbers blade.
+
+### How is my data being used?
+Toll-free verification (TFV) involves an integration between Microsoft and the Toll-Free messaging aggregator. The toll-free messaging aggregator is the final reviewer and approver of the TFV application. Microsoft must share the TFV application information with the toll-free messaging aggregator for them to confirm that the program details meet the CTIA guidelines and standards set by carriers. By submitting a TFV form, you agree that Microsoft may share the TFV application details as necessary for provisioning the toll-free number.
+
+### What happens if I don't verify my toll-free numbers?
+Unverified numbers may face SMS service interruptions and are subject to carrier filtering and throttling.
+
+### What happens after I submit the toll-free verification form?
+Once we receive your toll-free verification form, we relay it to the toll-free messaging aggregator for review and approval. This process takes 2-3 weeks. We'll notify you of any updates and the status of your application via the email address you provide in the application. For more questions about your submitted application, email acstnrequest@microsoft.com.
+
+### Can I send messages while I wait for approval?
+You'll be able to send messages while you wait for approval, but the traffic will be subject to carrier filtering and throttling if it's flagged as spam.
+
## Character and rate limits ### What is the SMS character limit? The size of a single SMS message is 140 bytes. The character limit per single message being sent depends on the message content and encoding used. Azure Communication Services supports both GSM-7 and UCS-2 encoding.
US and CA carriers charge an added fee for SMS messages sent and/or received fro
### When will we come to know of changes to these surcharges? As with similar Azure services, customers will be notified at least 30 days prior to the implementation of any price changes. These charges will be reflected on our SMS pricing page along with the effective dates.
-
+ ## Emergency support ### Can a customer use Azure Communication Services for emergency purposes?
-Azure Communication Services does not support text-to-911 functionality in the United States, but it’s possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC’s text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user’s mobile device to deliver 911 texts through the underlying mobile carrier.
+Azure Communication Services does not support text-to-911 functionality in the United States, but it’s possible that you may have an obligation to do so under the rules of the Federal Communications Commission (FCC). You should assess whether the FCC’s text-to-911 rules apply to your service or application. To the extent you're covered by these rules, you'll be responsible for routing 911 text messages to emergency call centers that request them. You're free to determine your own text-to-911 delivery model, but one approach accepted by the FCC involves automatically launching the native dialer on the user’s mobile device to deliver 911 texts through the underlying mobile carrier.
communication-services Apply For Short Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/apply-for-short-code.md
Previously updated : 11/30/2021 Last updated : 08/16/2021
# Quickstart: Apply for a short code [!INCLUDE [Short code eligibility notice](../../includes/public-preview-include-short-code-eligibility.md)]
## Get a short code To begin provisioning a short code, go to your Communication Services resource on the [Azure portal](https://portal.azure.com). ## Apply for a short code
-Navigate to the Short Codes blade in the resource menu and click on "Get" button to launch the short code program brief application wizard.
+Navigate to the Short Codes blade in the resource menu and select the "Get" button to launch the short code program brief application wizard. For detailed guidance on how to fill out the program brief application, check the [program brief filling guidelines](../../concepts/sms/program-brief-guidelines.md).
-The wizard on the short codes blade will walk you through a series of questions about the program as well as a description of content which helps carriers review and approve your short code program brief. For detailed guidance on how to fill out the program brief application please check [program brief filling guidelines](../../concepts/sms/program-brief-guidelines.md).
+## Prerequisites
+The wizard on the short codes blade walks you through a series of questions about the program, as well as a description of content that will be shared with the carriers so they can review and approve your short code program brief. Review the prerequisites tab for a list of the program content deliverables you'll need to attach to your application.
-The Short Code Program Brief registration requires details about your messaging program, including the user experience (e.g., call to action, opt-in, opt-out, and message flows) and information about your company. This information helps mobile carriers ensure that your program meets the CTIA (Cellular Telecommunications Industry Association) guidelines as well as regulatory requirements. A short code Program Brief application consists of the following 4 sections:
+
+The Short Code Program Brief registration requires details about your messaging program, including the user experience (for example, call to action, opt-in, opt-out, and message flows) and information about your company. This information helps mobile carriers ensure that your program meets the CTIA (Cellular Telecommunications Industry Association) guidelines and regulatory requirements.
+
+A short code Program Brief application consists of the following five sections:
### Program Details
-You will first need to provide the program name and choose the country/region where you would like to provision the phone number.
+You'll first need to provide the program name and choose the country/region where you would like to provision the phone number.
:::image type="content" source="./media/apply-for-short-code/program-details.png" alt-text="Screenshot showing program details section.":::
Configuring your short code is broken down into two steps:
- The selection of short code type - The selection of the short code features
-You can select from two short code types: Random, and Vanity. If you select a random short code, you will get a short code that is randomly selected by the U.S. Common Short Codes Association (CSCA). If you select a vanity short code, you are required to input a prioritized list of vanity short codes that youΓÇÖd like to use for your program. The alternatives in the list will be used if the first short code in your list is not available to lease. Example: 234567, 234578, 234589. You can look up the list of available short codes in the [US Short Codes Directory](https://usshortcodedirectory.com/).
+You can select from two short code types: Random and Vanity. If you select a random short code, you'll get a short code that is randomly selected by the U.S. Common Short Codes Association (CSCA). If you select a vanity short code, you are required to input a prioritized list of vanity short codes that you’d like to use for your program. The alternatives in the list will be used if the first short code in your list isn't available to lease. Example: 234567, 234578, 234589. You can look up the list of available short codes in the [US Short Codes Directory](https://usshortcodedirectory.com/).
When you’ve selected a number type, you can then choose the message type and target date. Short code registration with carriers usually takes 8-12 weeks, so this target date should be selected with this registration period in mind. > [!Note]
-> Azure Communication Service currently only supports SMS. Please check [roadmap](https://github.com/Azure/Communication/blob/master/roadmap.md) for MMS launch.
+> Azure Communication Services currently supports only SMS. Check the [roadmap](https://github.com/Azure/Communication/blob/master/roadmap.md) for the MMS launch.
+
+### Program Content Details
+This section requires you to provide details about your program, such as its recurrence, call to action, type and description, privacy policy, and terms.
-#### Enter program information
-This section requires you to provide details about your program such as recurrence of the program, messaging content, type and description of program, privacy policy, and terms of the program.
### Contact Details This section requires you to provide information about your company and customer care in the case that end users need help or support with the program.
This section requires you to provide information about your company and customer
### Volume Details This section requires you to provide an estimate of the number of messages you plan on sending per user per month and disclose any expected traffic spikes as part of the program. ### Template Information
-This section captures sample messages related to opt-in, opt-out, and other message flows.
+This section captures sample messages related to opt-in, opt-out, and other message flows. This tab features a message samples view where you can review sample templates to help you create a template for your use case.
+
+Azure Communication Services offers an opt-out management service for short codes that lets customers configure responses to the mandatory keywords STOP/START/HELP. Before your short code is provisioned, you'll be asked for your preference to manage opt-outs. If you opt in, the opt-out management service automatically replies to the STOP/START/HELP keywords with the Opt-out, Opt-in, and Help responses from your program brief.
++
+### Review
+Once completed, review the short code request details, fees, and SMS laws and industry standards, and then submit the completed application through the Azure portal.
-Once completed, review the Program Brief information provided and submit the completed application through the Azure Portal.
-This program brief will now be automatically sent to the Azure Communication Services’ service desk for review. The service desk specifically is looking to ensure that the provided information is in the right format before sending to all US mobile carriers for approval. The carriers will then review the details of the short code program, a process that can typically take between 8-12 weeks. Once carriers approve the program brief, you will be notified via email. You can now start sending and receiving messages on this short code for your messaging programs.
+This program brief will now be automatically sent to the Azure Communication Services’ service desk for review. The service desk checks that the provided information is in the right format before sending it to all US mobile carriers for approval. The carriers then review the details of the short code program, a process that typically takes 8-12 weeks. Once carriers approve the program brief, you'll be notified via email. You can then start sending and receiving messages on this short code for your messaging programs.
## Troubleshooting
-Common questions and issues:
-- Purchasing short codes is supported in the US only. To purchase phone numbers, ensure that:
- - The associated Azure subscription billing address is located in the United States. You cannot move a resource to another subscription at this time.
- - Your Communication Services resource is provisioned in the United States data location. You cannot move a resource to another data location at this time.
-- Short codes release is not supported currently.
+#### Common questions and issues
+- **Purchasing short codes is supported in the US only. To purchase phone numbers, ensure that:**
+ - The associated Azure subscription billing address is located in the United States. You can't move a resource to another subscription at this time.
+ - Your Communication Services resource is provisioned in the United States data location. You can't move a resource to another data location at this time.
+- **Updating a short code application**
+ - Once submitted, you can't edit, view, or cancel the short code application. If the service desk team requires any updates, you'll be notified via email and will be able to edit the application with the requested changes.
+ - If you'd like a copy of your application or for any issues, [contact us](https://emails-ppe.azure.microsoft.com/redirect/?destination=https%3A%2F%2Fportal.azure.com%2F%23blade%2FMicrosoft_Azure_Support%2FHelpAndSupportBlade%2Foverview&p=bT0wMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAwMDAwMDAmdT1hZW8tcHJldmlldyZsPXBvcnRhbC5henVyZS5jb20%3D).
+- **Cancelling a short code application**
+ - Cancelling short code applications in the Azure portal is not supported. If you'd like to cancel your application after submitting the program brief, [contact us](https://emails-ppe.azure.microsoft.com/redirect/?destination=https%3A%2F%2Fportal.azure.com%2F%23blade%2FMicrosoft_Azure_Support%2FHelpAndSupportBlade%2Foverview&p=bT0wMDAwMDAwMC0wMDAwLTAwMDAtMDAwMC0wMDAwMDAwMDAwMDAmdT1hZW8tcHJldmlldyZsPXBvcnRhbC5henVyZS5jb20%3D)
## Next steps
Common questions and issues:
The following documents may be interesting to you: -- Familiarize yourself with the [SMS SDK](../../concepts/sms/sdk-features.md)
+- Familiarize yourself with the [SMS SDK](../../concepts/sms/sdk-features.md)
communication-services Chat Android Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/chat-android-push-notification.md
+
+ Title: Enable push notifications in your Android chat app
+
+description: Learn how to enable push notification in Android App by using Azure Communication Chat SDK
+++ Last updated : 08/16/2022+++++
+# Enable push notifications
+Push notifications let clients be notified of incoming messages and other operations occurring in a chat thread when the mobile app isn't running in the foreground. Azure Communication Services supports a [list of events that you can subscribe to](../concepts/chat/concepts.md#push-notifications).
+> [!NOTE]
+> Chat push notifications are supported in the Android SDK starting with versions 1.1.0-beta.4 and 1.1.0. We recommend that you use version 1.2.0 or newer, because older versions have a known issue with registration renewal. Steps 8 through 12 are only needed for versions 1.2.0 and later.
+
+1. Set up Firebase Cloud Messaging for the ChatQuickstart project. Complete steps `Create a Firebase project`, `Register your app with Firebase`, `Add a Firebase configuration file`, `Add Firebase SDKs to your app`, and `Edit your app manifest` in [Firebase Documentation](https://firebase.google.com/docs/cloud-messaging/android/client).
+
+2. Create a Notification Hub within the same subscription as your Communication Services resource, configure your Firebase Cloud Messaging settings for the hub, and link the Notification Hub to your Communication Services resource. See [Notification Hub provisioning](../concepts/notifications.md#notification-hub-provisioning).
+3. Create a new file called `MyFirebaseMessagingService.java` in the same directory where `MainActivity.java` resides. Copy the following code into `MyFirebaseMessagingService.java`. You will need to replace `<your_package_name>` with the package name used in `MainActivity.java`. You can use your own value for `<your_intent_name>`. This value will be used in step 6 below.
+
+ ```java
+ package <your_package_name>;
+
+ import android.content.Intent;
+ import android.util.Log;
+
+ import androidx.localbroadcastmanager.content.LocalBroadcastManager;
+
+ import com.azure.android.communication.chat.models.ChatPushNotification;
+ import com.google.firebase.messaging.FirebaseMessagingService;
+ import com.google.firebase.messaging.RemoteMessage;
+
+ import java.util.concurrent.Semaphore;
+
+ public class MyFirebaseMessagingService extends FirebaseMessagingService {
+ private static final String TAG = "MyFirebaseMsgService";
+ public static Semaphore initCompleted = new Semaphore(1);
+
+ @Override
+ public void onMessageReceived(RemoteMessage remoteMessage) {
+ try {
+ Log.d(TAG, "Incoming push notification.");
+
+ initCompleted.acquire();
+
+ if (remoteMessage.getData().size() > 0) {
+ ChatPushNotification chatPushNotification =
+ new ChatPushNotification().setPayload(remoteMessage.getData());
+ sendPushNotificationToActivity(chatPushNotification);
+ }
+
+ initCompleted.release();
+ } catch (InterruptedException e) {
+ Log.e(TAG, "Error receiving push notification.");
+ }
+ }
+
+ private void sendPushNotificationToActivity(ChatPushNotification chatPushNotification) {
+ Log.d(TAG, "Passing push notification to Activity: " + chatPushNotification.getPayload());
+ Intent intent = new Intent("<your_intent_name>");
+ intent.putExtra("PushNotificationPayload", chatPushNotification);
+ LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
+ }
+ }
+
+ ```
+
+4. At the top of file `MainActivity.java`, add the following import statements:
+
+ ```java
+ import android.content.BroadcastReceiver;
+ import android.content.Context;
+ import android.content.Intent;
+ import android.content.IntentFilter;
+
+ import androidx.localbroadcastmanager.content.LocalBroadcastManager;
+ import com.azure.android.communication.chat.models.ChatPushNotification;
+ import com.google.android.gms.tasks.OnCompleteListener;
+ import com.google.android.gms.tasks.Task;
+ import com.google.firebase.messaging.FirebaseMessaging;
+ ```
+
+5. Add the following code to the `MainActivity` class:
+
+ ```java
+ private BroadcastReceiver firebaseMessagingReceiver = new BroadcastReceiver() {
+ @Override
+ public void onReceive(Context context, Intent intent) {
+ ChatPushNotification pushNotification =
+ (ChatPushNotification) intent.getParcelableExtra("PushNotificationPayload");
+
+ Log.d(TAG, "Push Notification received in MainActivity: " + pushNotification.getPayload());
+
+ boolean isHandled = chatAsyncClient.handlePushNotification(pushNotification);
+ if (!isHandled) {
+ Log.d(TAG, "No listener registered for incoming push notification!");
+ }
+ }
+ };
++
+ private void startFcmPushNotification() {
+ FirebaseMessaging.getInstance().getToken()
+ .addOnCompleteListener(new OnCompleteListener<String>() {
+ @Override
+ public void onComplete(@NonNull Task<String> task) {
+ if (!task.isSuccessful()) {
+ Log.w(TAG, "Fetching FCM registration token failed", task.getException());
+ return;
+ }
+
+ // Get new FCM registration token
+ String token = task.getResult();
+
+ // Log and toast
+ Log.d(TAG, "Fcm push token generated:" + token);
+ Toast.makeText(MainActivity.this, token, Toast.LENGTH_SHORT).show();
+
+ chatAsyncClient.startPushNotifications(token, new Consumer<Throwable>() {
+ @Override
+ public void accept(Throwable throwable) {
+ Log.w(TAG, "Registration failed for push notifications!", throwable);
+ }
+ });
+ }
+ });
+ }
+
+ ```
+
+6. Update the function `onCreate` in `MainActivity`.
+
+ ```java
+ @Override
+ protected void onCreate(Bundle savedInstanceState) {
+ super.onCreate(savedInstanceState);
+ setContentView(R.layout.activity_main);
+
+ LocalBroadcastManager
+ .getInstance(this)
+ .registerReceiver(
+ firebaseMessagingReceiver,
+ new IntentFilter("<your_intent_name>"));
+ }
+ ```
+
+7. Put the following code below the comment `<RECEIVE CHAT MESSAGES>` in `MainActivity`:
+
+```java
+ startFcmPushNotification();
+
+ chatAsyncClient.addPushNotificationHandler(CHAT_MESSAGE_RECEIVED, (ChatEvent payload) -> {
+ Log.i(TAG, "Push Notification CHAT_MESSAGE_RECEIVED.");
+ ChatMessageReceivedEvent event = (ChatMessageReceivedEvent) payload;
+ // Your code to handle the ChatMessageReceived event
+ });
+```
+
+8. Add the `xmlns:tools` attribute to the `AndroidManifest.xml` file:
+
+```xml
+ <manifest xmlns:android="http://schemas.android.com/apk/res/android"
+ xmlns:tools="http://schemas.android.com/tools"
+ package="com.azure.android.communication.chat.sampleapp">
+```
+
+9. Disable the default initializer for `WorkManager` in `AndroidManifest.xml`:
+
+```xml
+ <!-- Disable the default initializer of WorkManager so that we could override it in MyAppConfiguration -->
+ <provider
+ android:name="androidx.startup.InitializationProvider"
+ android:authorities="${applicationId}.androidx-startup"
+ android:exported="false"
+ tools:node="merge">
+ <!-- If you are using androidx.startup to initialize other components -->
+ <meta-data
+ android:name="androidx.work.WorkManagerInitializer"
+ android:value="androidx.startup"
+ tools:node="remove" />
+ </provider>
+ <!-- End of Disabling default initializer of WorkManager -->
+```
+
+10. Add the `WorkManager` dependency to your `build.gradle` file:
+
+```groovy
+ def work_version = "2.7.1"
+ implementation "androidx.work:work-runtime:$work_version"
+```
+
+11. Add a custom `WorkManager` initializer by creating a class implementing `Configuration.Provider`:
+
+```java
+public class MyAppConfiguration extends Application implements Configuration.Provider {
+ Consumer<Throwable> exceptionHandler = new Consumer<Throwable>() {
+ @Override
+ public void accept(Throwable throwable) {
+ Log.i("YOUR_TAG", "Registration failed for push notifications!" + throwable.getMessage());
+ }
+ };
+ @Override
+ public void onCreate() {
+ super.onCreate();
+ WorkManager.initialize(getApplicationContext(), getWorkManagerConfiguration());
+ }
+ @NonNull
+ @Override
+ public Configuration getWorkManagerConfiguration() {
+ return new Configuration.Builder().
+ setWorkerFactory(new RegistrationRenewalWorkerFactory(COMMUNICATION_TOKEN_CREDENTIAL, exceptionHandler)).build();
+ }
+}
+```
+
+12. Add the `android:name=".MyAppConfiguration"` attribute, which uses the class name from step 11, to the `<application>` element in `AndroidManifest.xml`:
+
+```xml
+<application
+ android:allowBackup="true"
+ android:icon="@mipmap/ic_launcher"
+ android:label="@string/app_name"
+ android:roundIcon="@mipmap/ic_launcher_round"
+ android:theme="@style/Theme.AppCompat"
+ android:supportsRtl="true"
+ android:name=".MyAppConfiguration"
+>
+```
connectors Connectors Native Reqres https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-native-reqres.md
Title: Receive and respond to HTTPS requests
-description: Handle inbound HTTPS calls from external services using Azure Logic Apps.
+ Title: Handle inbound or incoming HTTPS calls
+description: Receive and respond to HTTPS requests sent to workflows in Azure Logic Apps.
ms.suite: integration ms.reviewers: estfan, azla Previously updated : 08/04/2021 Last updated : 08/16/2022 tags: connectors
-# Receive and respond to inbound HTTPS requests in Azure Logic Apps
+# Handle incoming or inbound HTTPS requests sent to workflows in Azure Logic Apps
-With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the built-in Request trigger and Response action, you can create automated tasks and workflows that can receive inbound requests over HTTPS. To send outbound requests instead, use the built-in [HTTP trigger or HTTP action](../connectors/connectors-native-http.md).
+To run your logic app workflow after receiving an HTTPS request from another service, you can start your workflow with the Request built-in trigger. Your workflow can then respond to the HTTPS request by using the Response built-in action.
-For example, you can have your logic app:
+The following list describes some example tasks that your workflow can perform when you use the Request trigger and Response action:
* Receive and respond to an HTTPS request for data in an on-premises database.
-* Trigger a workflow when an external webhook event happens.
+* Receive and respond to an HTTPS request from another logic app workflow.
-* Receive and respond to an HTTPS call from another logic app.
+* Trigger a workflow run when an external webhook event happens.
-This article shows how to use the Request trigger and Response action so that your logic app can receive and respond to inbound calls.
+To run your workflow by sending an outgoing or outbound request instead, use the [HTTP built-in trigger or HTTP built-in action](connectors-native-http.md).
-For more information about security, authorization, and encryption for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
+## Prerequisites
-> [!NOTE]
->
-> In a Standard logic app workflow that starts with the Request trigger (but not a webhook trigger), you can
-> use the Azure Functions provision for authenticating inbound calls sent to the endpoint created by that trigger
-> by using a managed identity. This provision is also known as "**Easy Auth**". For more information, review
-> [Trigger workflows in Standard logic apps with Easy Auth](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/trigger-workflows-in-standard-logic-apps-with-easy-auth/ba-p/3207378).
+* An Azure account and subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* The logic app workflow where you want to receive the inbound HTTPS request. To start your workflow with a Request trigger, you have to start with a blank workflow. To use the Response action, your workflow must start with the Request trigger.
-## Prerequisites
+If you're new to Azure Logic Apps, review the following get started documentation:
-* An Azure account and subscription. If you don't have a subscription, you can [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+* [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)
-* Basic knowledge about [how to create logic apps](../logic-apps/quickstart-create-first-logic-app-workflow.md). If you're new to logic apps, review [What is Azure Logic Apps](../logic-apps/logic-apps-overview.md)?
+* [Quickstart: Create a Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-first-logic-app-workflow.md)
-<a name="add-request"></a>
+* [Create a Standard logic app workflow in single-tenant Azure Logic Apps](../logic-apps/create-single-tenant-workflows-azure-portal.md)
-## Add Request trigger
+<a name="add-request-trigger"></a>
-This built-in trigger creates a manually callable endpoint that can handle *only* inbound requests over HTTPS. When a caller sends a request to this endpoint, the [Request trigger](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger) fires and runs the logic app. For more information about how to call this trigger, see [Call, trigger, or nest workflows with HTTPS endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
+## Add a Request trigger
-Your logic app keeps an inbound request open only for a [limited time](../logic-apps/logic-apps-limits-and-config.md#http-limits). Assuming that your logic app includes a [Response action](#add-response), if your logic app doesn't send a response back to the caller after this time passes, your logic app returns a `504 GATEWAY TIMEOUT` status to the caller. If your logic app doesn't include a Response action, your logic app immediately returns a `202 ACCEPTED` status to the caller.
+The Request trigger creates a manually callable endpoint that can handle *only* inbound requests over HTTPS. When the calling service sends a request to this endpoint, the Request trigger fires and runs the logic app workflow. For information about how to call this trigger, review [Call, trigger, or nest workflows with HTTPS endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
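+
+For example, after you save the workflow and the **HTTP POST URL** is generated, a caller can invoke the endpoint with a plain HTTPS POST. The following is a minimal C# sketch; the callback URL and JSON payload are placeholders for illustration, not values generated by this article's workflow.
+
+```csharp
+using System;
+using System.Net.Http;
+using System.Text;
+using System.Threading.Tasks;
+
+class CallRequestTrigger
+{
+    static async Task Main()
+    {
+        // Placeholder: copy the full HTTP POST URL (including its query string)
+        // from the Request trigger after you save the workflow.
+        string callbackUrl = "https://<your-generated-endpoint>/workflows/<workflow-id>/triggers/manual/paths/invoke?<query-string>";
+
+        using var client = new HttpClient();
+
+        // Send a JSON payload that matches the Request Body JSON Schema, if you defined one.
+        HttpResponseMessage response = await client.PostAsync(
+            callbackUrl,
+            new StringContent("{\"hello\": \"world\"}", Encoding.UTF8, "application/json"));
+
+        // With a Response action, the workflow's reply is returned here;
+        // without one, the service immediately returns a 202 ACCEPTED status.
+        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
+    }
+}
+```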
-1. Sign in to the [Azure portal](https://portal.azure.com). Create a blank logic app.
+## [Consumption](#tab/consumption)
-1. After Logic App Designer opens, in the search box, enter `http request` as your filter. From the triggers list, select the **When an HTTP request is received** trigger.
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
- ![Select Request trigger](./media/connectors-native-reqres/select-request-trigger.png)
+1. On the designer, under the search box, select **Built-in**. In the search box, enter **http request**. From the triggers list, select the trigger named **When a HTTP request is received**.
- The Request trigger shows these properties:
+ ![Screenshot showing Azure portal, Consumption workflow designer, search box with "http request" entered, and "When a HTTP request" trigger selected.](./media/connectors-native-reqres/select-request-trigger-consumption.png)
- ![Request trigger](./media/connectors-native-reqres/request-trigger.png)
+ The HTTP request trigger information box appears on the designer.
+
+ ![Screenshot showing Consumption workflow with Request trigger information box.](./media/connectors-native-reqres/request-trigger-consumption.png)
+
+1. In the trigger information box, provide the following values as necessary:
| Property name | JSON property name | Required | Description | ||--|-|-|
- | **HTTP POST URL** | {none} | Yes | The endpoint URL that's generated after you save the logic app and is used for calling your logic app |
- | **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body |
+ | **HTTP POST URL** | {none} | Yes | The endpoint URL that's generated after you save your workflow and is used for sending a request that triggers your workflow. |
+ | **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the Request trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
|||||
-1. In the **Request Body JSON Schema** box, optionally enter a JSON schema that describes the body in the incoming request, for example:
-
- ![Example JSON schema](./media/connectors-native-reqres/provide-json-schema.png)
+ The following example shows a sample JSON schema:
- The designer uses this schema to generate tokens for the properties in the request. That way, your logic app can parse, consume, and pass along data from the request through the trigger into your workflow.
+ ![Screenshot showing Consumption workflow and Request trigger with example JSON schema.](./media/connectors-native-reqres/provide-json-schema-consumption.png)
- Here is the sample schema:
+ The following example shows the complete sample JSON schema:
```json {
Your logic app keeps an inbound request open only for a [limited time](../logic-
} ```
- When you enter a JSON schema, the designer shows a reminder to include the `Content-Type` header in your request and set that header value to `application/json`. For more information, see [Handle content types](../logic-apps/logic-apps-content-type.md).
+ When you enter a JSON schema, the designer shows a reminder to include the **Content-Type** header in your request and set that header value to **application/json**. For more information, see [Handle content types](../logic-apps/logic-apps-content-type.md).
- ![Reminder to include "Content-Type" header](./media/connectors-native-reqres/include-content-type.png)
+ ![Screenshot showing Consumption workflow, Request trigger, and reminder to include "Content-Type" header.](./media/connectors-native-reqres/include-content-type-consumption.png)
- Here's what this header looks like in JSON format:
+ The following example shows how the **Content-Type** header appears in JSON format:
```json {
Your logic app keeps an inbound request open only for a [limited time](../logic-
1. In the Request trigger, select **Use sample payload to generate schema**.
- ![Screenshot with "Use sample payload to generate schema" selected](./media/connectors-native-reqres/generate-from-sample-payload.png)
+ ![Screenshot showing Consumption workflow, Request trigger, and "Use sample payload to generate schema" selected.](./media/connectors-native-reqres/generate-from-sample-payload-consumption.png)
1. Enter the sample payload, and select **Done**.
- ![Enter sample payload to generate schema](./media/connectors-native-reqres/enter-payload.png)
+ ![Screenshot showing Consumption workflow, Request trigger, and sample payload entered to generate schema.](./media/connectors-native-reqres/enter-payload-consumption.png)
- Here is the sample payload:
+ The following example shows the sample payload:
```json {
Your logic app keeps an inbound request open only for a [limited time](../logic-
1. To check that the inbound call has a request body that matches your specified schema, follow these steps:
- 1. To enforce the inbound message to have the same exact fields that your schema describes, in your schema, add the `required` property and specify the required fields. Add the `addtionalProperties` and set the value to `false`.
+   1. To enforce that the inbound message has exactly the fields that your schema describes, in your schema, add the **`required`** property and specify the required fields. Add the **`additionalProperties`** property, and set its value to **`false`**.
- For example, the following schema specifies that the inbound message must have the `msg` field and not any other fields:
+ For example, the following schema specifies that the inbound message must have the **`msg`** field and not any other fields:
```json {
Your logic app keeps an inbound request open only for a [limited time](../logic-
1. In the trigger's settings, turn on **Schema Validation**, and select **Done**.
- If the inbound call's request body doesn't match your schema, the trigger returns an `HTTP 400 Bad Request` error.
+ If the inbound call's request body doesn't match your schema, the trigger returns an **HTTP 400 Bad Request** error.
-1. To specify additional properties, open the **Add new parameter** list, and select the parameters that you want to add.
+1. To add other properties or parameters to the trigger, open the **Add new parameter** list, and select the parameters that you want to add.
   | Property name | JSON property name | Required | Description |
   ||--|-|-|
   | **Method** | `method` | No | The method that the incoming request must use to call the logic app |
   | **Relative path** | `relativePath` | No | The relative path for the parameter that the logic app's endpoint URL can accept |
   |||||
- This example adds the **Method** property:
+ The following example adds the **Method** property:
- ![Add Method parameter](./media/connectors-native-reqres/add-parameters.png)
+ ![Screenshot showing Consumption workflow, Request trigger, and adding the "Method" parameter.](./media/connectors-native-reqres/add-parameters-consumption.png)
The **Method** property appears in the trigger so that you can select a method from the list.
- ![Select method](./media/connectors-native-reqres/select-method.png)
+ ![Screenshot showing Consumption workflow, Request trigger, and the "Method" list opened with a method selected.](./media/connectors-native-reqres/select-method-consumption.png)
+
+1. When you're ready, save your workflow. On the designer toolbar, select **Save**.
+
+ This step generates the URL that you can use to send a request that triggers the workflow.
+
+1. To copy the generated URL, select the copy icon next to the URL.
+
+ ![Screenshot showing Consumption workflow, Request trigger, and URL copy button selected.](./media/connectors-native-reqres/generated-url-consumption.png)
+
+ > [!NOTE]
+ >
+ > If you want to include the hash or pound symbol (**#**) in the URI
+ > when making a call to the Request trigger, use this encoded version instead: `%25%23`
+
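For reference, if you add the **Method** and **Relative path** parameters, they appear as `method` and `relativePath` inside the trigger's `inputs` in the underlying definition. This is only a rough sketch; the trigger name `manual` and the path value `/address/{postalCode}` are hypothetical examples:

```json
{
    "triggers": {
        "manual": {
            "type": "Request",
            "kind": "Http",
            "inputs": {
                "method": "POST",
                "relativePath": "/address/{postalCode}",
                "schema": {}
            }
        }
    }
}
```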
+## [Standard](#tab/standard)
+
+1. In the [Azure portal](https://portal.azure.com), open your blank logic app workflow in the designer.
+
+1. On the designer, select **Choose an operation**. On the pane that appears, under the search box, select **Built-in**.
+
+1. In the search box, enter **http request**. From the triggers list, select the trigger named **When a HTTP request is received**.
+
+ ![Screenshot showing Azure portal, Standard workflow designer, search box with "http request" entered, and "When a HTTP request" trigger selected.](./media/connectors-native-reqres/select-request-trigger-standard.png)
+
+ The HTTP request trigger information box appears on the designer.
+
+ ![Screenshot showing Standard workflow with Request trigger information box.](./media/connectors-native-reqres/request-trigger-standard.png)
+
+1. In the trigger information box, provide the following values as necessary:
+
+ | Property name | JSON property name | Required | Description |
+ ||--|-|-|
+ | **HTTP POST URL** | {none} | Yes | The endpoint URL that's generated after you save your workflow and is used for sending a request that triggers your workflow. |
+ | **Request Body JSON Schema** | `schema` | No | The JSON schema that describes the properties and values in the incoming request body. The designer uses this schema to generate tokens for the properties in the request. That way, your workflow can parse, consume, and pass along outputs from the Request trigger into your workflow. <br><br>If you don't have a JSON schema, you can generate the schema from a sample payload by using the **Use sample payload to generate schema** capability. |
+ |||||
+
+ The following example shows a sample JSON schema:
+
+ ![Screenshot showing Standard workflow and Request trigger with example JSON schema.](./media/connectors-native-reqres/provide-json-schema-standard.png)
+
+ The following example shows the complete sample JSON schema:
+
+ ```json
+ {
+ "type": "object",
+ "properties": {
+ "account": {
+ "type": "object",
+ "properties": {
+ "name": {
+ "type": "string"
+ },
+ "ID": {
+ "type": "string"
+ },
+ "address": {
+ "type": "object",
+ "properties": {
+ "number": {
+ "type": "string"
+ },
+ "street": {
+ "type": "string"
+ },
+ "city": {
+ "type": "string"
+ },
+ "state": {
+ "type": "string"
+ },
+ "country": {
+ "type": "string"
+ },
+ "postalCode": {
+ "type": "string"
+ }
+ }
+ }
+ }
+ }
+ }
+ }
+ ```
+
+ When you enter a JSON schema, the designer shows a reminder to include the **Content-Type** header in your request and set that header value to **application/json**. For more information, see [Handle content types](../logic-apps/logic-apps-content-type.md).
+
+ ![Screenshot showing Standard workflow, Request trigger, and reminder to include "Content-Type" header.](./media/connectors-native-reqres/include-content-type-standard.png)
+
+ The following example shows how the **Content-Type** header appears in JSON format:
+
+ ```json
+ {
+ "Content-Type": "application/json"
+ }
+ ```
-1. Now, add another action as the next step in your workflow. Under the trigger, select **Next step** so that you can find the action that you want to add.
+ To generate a JSON schema that's based on the expected payload (data), you can use a tool such as [JSONSchema.net](https://jsonschema.net), or you can follow these steps:
- For example, you can respond to the request by [adding a Response action](#add-response), which you can use to return a customized response and is described later in this topic.
+ 1. In the Request trigger, select **Use sample payload to generate schema**.
- Your logic app keeps the incoming request open only for a [limited time](../logic-apps/logic-apps-limits-and-config.md#http-limits). Assuming that your logic app workflow includes a Response action, if the logic app doesn't return a response after this time passes, your logic app returns a `504 GATEWAY TIMEOUT` to the caller. Otherwise, if your logic app doesn't include a Response action, your logic app immediately returns a `202 ACCEPTED` response to the caller.
+ ![Screenshot showing Standard workflow, Request trigger, and "Use sample payload to generate schema" selected.](./media/connectors-native-reqres/generate-from-sample-payload-standard.png)
-1. When you're done, save your logic app. On the designer toolbar, select **Save**.
+ 1. Enter the sample payload, and select **Done**.
- This step generates the URL to use for sending the request that triggers the logic app. To copy this URL, select the copy icon next to the URL.
+ ![Screenshot showing Standard workflow, Request trigger, and sample payload entered to generate schema.](./media/connectors-native-reqres/enter-payload-standard.png)
- ![URL to use triggering your logic app](./media/connectors-native-reqres/generated-url.png)
+ The following example shows the sample payload:
+
+ ```json
+ {
+ "account": {
+ "name": "Contoso",
+ "ID": "12345",
+ "address": {
+ "number": "1234",
+ "street": "Anywhere Street",
+ "city": "AnyTown",
+ "state": "AnyState",
+ "country": "USA",
+ "postalCode": "11111"
+ }
+ }
+ }
+ ```
+
+1. To check that the inbound call has a request body that matches your specified schema, follow these steps:
+
+   1. To enforce that the inbound message has exactly the fields that your schema describes, in your schema, add the **`required`** property and specify the required fields. Add the **`additionalProperties`** property, and set its value to **`false`**.
+
+ For example, the following schema specifies that the inbound message must have the **`msg`** field and not any other fields:
+
+ ```json
+ {
+ "properties": {
+ "msg": {
+ "type": "string"
+ }
+ },
+ "type": "object",
+ "required": ["msg"],
+ "additionalProperties": false
+ }
+ ```
+
+ 1. In the Request trigger's title bar, select the ellipses button (**...**).
+
+ 1. In the trigger's settings, turn on **Schema Validation**, and select **Done**.
+
+ If the inbound call's request body doesn't match your schema, the trigger returns an **HTTP 400 Bad Request** error.
+
+1. To add other properties or parameters to the trigger, open the **Add new parameter** list, and select the parameters that you want to add.
+
+ | Property name | JSON property name | Required | Description |
+ ||--|-|-|
+ | **Method** | `method` | No | The method that the incoming request must use to call the logic app |
+ | **Relative path** | `relativePath` | No | The relative path for the parameter that the logic app's endpoint URL can accept |
+ |||||
+
+ The following example adds the **Method** property:
+
+ ![Screenshot showing Standard workflow, Request trigger, and adding the "Method" parameter.](./media/connectors-native-reqres/add-parameters-standard.png)
+
+ The **Method** property appears in the trigger so that you can select a method from the list.
+
+ ![Screenshot showing Standard workflow, Request trigger, and the "Method" list opened with a method selected.](./media/connectors-native-reqres/select-method-standard.png)
+
+1. When you're ready, save your workflow. On the designer toolbar, select **Save**.
+
+ This step generates the URL that you can use to send a request that triggers the workflow.
+
+1. To copy the generated URL, select the copy icon next to the URL.
+
+ ![Screenshot showing Standard workflow, Request trigger, and URL copy button selected.](./media/connectors-native-reqres/generated-url-standard.png)
> [!NOTE]
+ >
   > If you want to include the hash or pound symbol (**#**) in the URI
   > when making a call to the Request trigger, use this encoded version instead: `%25%23`
-1. To test your logic app, send an HTTP request to the generated URL.
+
- For example, you can use a tool such as [Postman](https://www.getpostman.com/) to send the HTTP request. For more information about the trigger's underlying JSON definition and how to call this trigger, see these topics, [Request trigger type](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger) and [Call, trigger, or nest workflows with HTTP endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
+Now, continue building your workflow by adding another action as the next step. For example, you can respond to the request by [adding a Response action](#add-response), which you can use to return a customized response and is described later in this article.
-For more information about security, authorization, and encryption for inbound calls to your logic app, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
+> [!NOTE]
+>
+> Your workflow keeps an inbound request open only for a [limited time](../logic-apps/logic-apps-limits-and-config.md#http-limits).
+> Assuming that your workflow also includes a Response action, if your workflow doesn't return a response to the caller
+> after this time expires, your workflow returns the **504 GATEWAY TIMEOUT** status to the caller. If your workflow
+> doesn't include a Response action, your workflow immediately returns the **202 ACCEPTED** status to the caller.
+
+For information about security, authorization, and encryption for inbound calls to your workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app resource with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
## Trigger outputs
-Here's more information about the outputs from the Request trigger:
+The following table lists the outputs from the Request trigger:
| JSON property name | Data type | Description |
|--|--|-|
Here's more information about the outputs from the Request trigger:
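As a rough illustration only, the trigger outputs for a call that sends the earlier sample payload might look similar to the following sketch. The header and query values shown here are hypothetical:

```json
{
    "headers": {
        "Content-Type": "application/json"
    },
    "queries": {
        "api-version": "2016-10-01"
    },
    "body": {
        "account": {
            "name": "Contoso",
            "ID": "12345"
        }
    }
}
```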
## Add a Response action
-When you use the Request trigger to handle inbound requests, you can model the response and send the payload results back to the caller by using the built-in [Response action](../logic-apps/logic-apps-workflow-actions-triggers.md#response-action). You can use the Response action *only* with the Request trigger. This combination with the Request trigger and Response action creates the [request-response pattern](https://en.wikipedia.org/wiki/Request%E2%80%93response). Except for inside Foreach loops and Until loops, and parallel branches, you can add the Response action anywhere in your workflow.
+When you use the Request trigger to receive inbound requests, you can model the response and send the payload results back to the caller by using the built-in Response action, which works *only* with the Request trigger. This combination of the Request trigger and Response action creates the [request-response pattern](https://en.wikipedia.org/wiki/Request%E2%80%93response). Except inside Foreach loops, Until loops, and parallel branches, you can add the Response action anywhere in your workflow.
> [!IMPORTANT]
-> If a Response action includes these headers, Logic Apps removes these headers from the generated response message without showing any warning or error:
>
-> * `Allow`
-> * `Content-*` headers except for `Content-Disposition`, `Content-Encoding`, and `Content-Type` when you use POST and PUT operations, but are not included for GET operations
-> * `Cookie`
-> * `Expires`
-> * `Last-Modified`
-> * `Set-Cookie`
-> * `Transfer-Encoding`
+> * If your Response action includes the following headers, Azure Logic Apps automatically
+> removes them from the generated response message without showing any warning or error.
+> The service doesn't stop you from saving workflows that have a Response action with
+> these headers, but it ignores them.
+>
+> * `Allow`
+>    * `Content-*` headers, except for `Content-Disposition`, `Content-Encoding`, and `Content-Type`, which are kept when you use POST and PUT operations but aren't included for GET operations
+> * `Cookie`
+> * `Expires`
+> * `Last-Modified`
+> * `Set-Cookie`
+> * `Transfer-Encoding`
>
-> Although Logic Apps won't stop you from saving logic apps that have a Response action with these headers, Logic Apps ignores these headers.
+> * If you have one or more Response actions in a complex workflow with branches, make sure that the workflow
+> processes at least one Response action during runtime. Otherwise, if all Response actions are skipped,
+> the caller receives a **502 Bad Gateway** error, even if the workflow finishes successfully.
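For reference, a Response action that returns the trigger body with a **200** status code looks roughly like the following sketch in the underlying workflow definition. The action name `Response` and the `@triggerBody()` expression are illustrative assumptions:

```json
{
    "Response": {
        "type": "Response",
        "kind": "Http",
        "inputs": {
            "statusCode": 200,
            "headers": {
                "Content-Type": "application/json"
            },
            "body": "@triggerBody()"
        },
        "runAfter": {}
    }
}
```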
-1. In the Logic App Designer, under the step where you want to add a Response action, select **New step**.
+## [Consumption](#tab/consumption)
- For example, using the Request trigger from earlier:
+1. On the workflow designer, under the step where you want to add the Response action, select **New step**.
- ![Add new step](./media/connectors-native-reqres/add-response.png)
+ Or, to add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
- To add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
+ The following example adds the Response action after the Request trigger from the preceding section:
-1. Under **Choose an action**, in the search box, enter `response` as your filter, and select the **Response** action.
+ ![Screenshot showing Azure portal, Consumption workflow, and "New step" selected.](./media/connectors-native-reqres/add-response-consumption.png)
- ![Select the Response action](./media/connectors-native-reqres/select-response-action.png)
+1. On the designer, under the **Choose an operation** search box, select **Built-in**. In the search box, enter **response**. From the actions list, select the **Response** action.
- The Request trigger is collapsed in this example for simplicity.
+ For simplicity, the following examples show a collapsed Request trigger.
-1. Add any values that are required for the response message.
+   ![Screenshot showing Azure portal, Consumption workflow, "Choose an operation" search box with "response" entered, and Response action selected.](./media/connectors-native-reqres/select-response-action-consumption.png)
+
+1. In the Response action information box, add the required values for the response message.
In some fields, clicking inside their boxes opens the dynamic content list. You can then select tokens that represent available outputs from previous steps in the workflow. Properties from the schema specified in the earlier example now appear in the dynamic content list.
- For example, for the **Headers** box, include `Content-Type` as the key name, and set the key value to `application/json` as mentioned earlier in this topic. For the **Body** box, you can select the trigger body output from the dynamic content list.
+ For example, for the **Headers** box, include **Content-Type** as the key name, and set the key value to **application/json** as mentioned earlier in this article. For the **Body** box, you can select the trigger body output from the dynamic content list.
- ![Response action details](./media/connectors-native-reqres/response-details.png)
+ ![Screenshot showing Azure portal, Consumption workflow, and Response action information.](./media/connectors-native-reqres/response-details-consumption.png)
To view the headers in JSON format, select **Switch to text view**.
- ![Headers - Switch to text view](./media/connectors-native-reqres/switch-to-text-view.png)
+ ![Screenshot showing Azure portal, Consumption workflow, and Response action headers in "Switch to text" view.](./media/connectors-native-reqres/switch-to-text-view-consumption.png)
- Here is more information about the properties that you can set in the Response action.
+ The following table has more information about the properties that you can set in the Response action.
   | Property name | JSON property name | Required | Description |
   ||--|-|-|
   | **Status Code** | `statusCode` | Yes | The status code to return in the response |
   | **Headers** | `headers` | No | A JSON object that describes one or more headers to include in the response |
   | **Body** | `body` | No | The response body |
   |||||
-1. To specify additional properties, such as a JSON schema for the response body, open the **Add new parameter** list, and select the parameters that you want to add.
+1. To add more properties for the action, such as a JSON schema for the response body, open the **Add new parameter** list, and select the parameters that you want to add.
-1. When you're done, save your logic app. On the designer toolbar, select **Save**.
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
-> [!IMPORTANT]
-> If you have one or more Response actions in a complex workflow with branches, make sure
-> that the workflow run processes at least one Response action during runtime.
-> Otherwise, if all Response actions are skipped, the caller receives a **502 Bad Gateway** error, even if the workflow finishes successfully.
+## [Standard](#tab/standard)
+
+1. On the workflow designer, under the step where you want to add the Response action, select the plus sign (**+**), and then select **Add new action**.
+
+ Or, to add an action between steps, move your pointer over the arrow between those steps. Select the plus sign (**+**) that appears, and then select **Add an action**.
+
+ The following example adds the Response action after the Request trigger from the preceding section:
+
+ ![Screenshot showing Azure portal, Standard workflow, and "Add an action" selected.](./media/connectors-native-reqres/add-response-standard.png)
+
+1. On the designer, under the **Choose an operation** search box, select **Built-in**. In the search box, enter **response**. From the actions list, select the **Response** action.
+
+   ![Screenshot showing Azure portal, Standard workflow, "Choose an operation" search box with "response" entered, and Response action selected.](./media/connectors-native-reqres/select-response-action-standard.png)
+
+1. In the Response action information box, add the required values for the response message.
+
+ In some fields, clicking inside their boxes opens the dynamic content list. You can then select tokens that represent available outputs from previous steps in the workflow. Properties from the schema specified in the earlier example now appear in the dynamic content list.
+
+ For example, for the **Headers** box, include **Content-Type** as the key name, and set the key value to **application/json** as mentioned earlier in this article. For the **Body** box, you can select the trigger body output from the dynamic content list.
+
+ ![Screenshot showing Azure portal, Standard workflow, and Response action information.](./media/connectors-native-reqres/response-details-standard.png)
+
+ To view the headers in JSON format, select **Switch to text view**.
+
+ ![Screenshot showing Azure portal, Standard workflow, and Response action headers in "Switch to text" view.](./media/connectors-native-reqres/switch-to-text-view-standard.png)
+
+ The following table has more information about the properties that you can set in the Response action.
+
+ | Property name | JSON property name | Required | Description |
+ ||--|-|-|
+ | **Status Code** | `statusCode` | Yes | The status code to return in the response |
+ | **Headers** | `headers` | No | A JSON object that describes one or more headers to include in the response |
+ | **Body** | `body` | No | The response body |
+ |||||
+
+1. To add more properties for the action, such as a JSON schema for the response body, open the **Add new parameter** list, and select the parameters that you want to add.
+
+1. When you're done, save your workflow. On the designer toolbar, select **Save**.
+++
+## Test your workflow
+
+To test your workflow, send an HTTP request to the generated URL. For example, you can use a tool such as [Postman](https://www.getpostman.com/) to send the HTTP request. For more information about the trigger's underlying JSON definition and how to call this trigger, see [Request trigger type](../logic-apps/logic-apps-workflow-actions-triggers.md#request-trigger) and [Call, trigger, or nest workflows with HTTPS endpoints in Azure Logic Apps](../logic-apps/logic-apps-http-endpoint.md).
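For example, assuming that your workflow uses the Response action from the earlier steps to return the trigger body with the **Content-Type** header set to **application/json**, a test call that sends the sample payload might receive a response body similar to the following sketch:

```json
{
    "account": {
        "name": "Contoso",
        "ID": "12345",
        "address": {
            "number": "1234",
            "street": "Anywhere Street",
            "city": "AnyTown",
            "state": "AnyState",
            "country": "USA",
            "postalCode": "11111"
        }
    }
}
```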
+
+## Security and authentication
+
+In a Standard logic app workflow that starts with the Request trigger (but not a webhook trigger), you can use the Azure Functions provision to authenticate inbound calls that are sent to the endpoint created by that trigger by using a managed identity. This provision is also known as "**Easy Auth**". For more information, review [Trigger workflows in Standard logic apps with Easy Auth](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/trigger-workflows-in-standard-logic-apps-with-easy-auth/ba-p/3207378).
+
+For more information about security, authorization, and encryption for inbound calls to your logic app workflow, such as [Transport Layer Security (TLS)](https://en.wikipedia.org/wiki/Transport_Layer_Security), previously known as Secure Sockets Layer (SSL), [Azure Active Directory Open Authentication (Azure AD OAuth)](../active-directory/develop/index.yml), exposing your logic app with Azure API Management, or restricting the IP addresses that originate inbound calls, see [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests).
## Next steps

* [Secure access and data - Access for inbound calls to request-based triggers](../logic-apps/logic-apps-securing-a-logic-app.md#secure-inbound-requests)
-* [Connectors for Logic Apps](../connectors/apis-list.md)
+* [Managed or Azure-hosted connectors in Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors)
container-apps Dapr Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-overview.md
Previously updated : 06/21/2022 Last updated : 08/18/2022

# Dapr integration with Azure Container Apps
scopes:
## Current supported Dapr version
-Azure Container Apps supports Dapr version 1.7.3.
+Azure Container Apps supports Dapr version 1.8.3.
Version upgrades are handled transparently by Azure Container Apps. You can find the current version via the Azure portal and the CLI.
container-instances Container Instances Tutorial Azure Function Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-tutorial-azure-function-trigger.md
Title: Tutorial - Trigger container group by Azure function
-description: Create an HTTP-triggered, serverless PowerShell function to automate creation of Azure container instances
+description: Create an HTTP-triggered, serverless PowerShell function to automate creation of Azure Container Instances
cosmos-db Analytical Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/analytical-store-introduction.md
The following constraints are applicable on the operational data in Azure Cosmos
{"id": 2, "name": "john"} ``` - * The first document of the collection defines the initial analytical store schema. * Documents with more properties than the initial schema will generate new columns in analytical store. * Columns can't be removed.
salary: 1000000
The leaf property `streetNo` within the nested object `address` will be represented in the analytical store schema as a column `address.object.streetNo.int32`. The datatype is added as a suffix to the column. This way, if another document is added to the transactional store where the value of leaf property `streetNo` is "123" (note it's a string), the schema of the analytical store automatically evolves without altering the type of a previously written column. A new column is added to the analytical store as `address.object.streetNo.string`, where this value of "123" is stored.
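To make this behavior concrete, consider the following hypothetical documents, which use different types for the same leaf property (the document contents are made up for illustration):

```json
{"id": "1", "address": {"streetNo": 15}}
{"id": "2", "address": {"streetNo": "123"}}
```

In full fidelity schema representation, the integer value lands in the `address.object.streetNo.int32` column and the string value lands in `address.object.streetNo.string`, so the datatype of a previously written column is never altered.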
-##### Data type to suffix map
+##### Data type to suffix map for full fidelity schema
-Here's a map of all the property data types and their suffix representations in the analytical store:
+Here's a map of all the property data types and their suffix representations in the analytical store in full fidelity schema representation:
|Original data type |Suffix |Example |
||||
Here's a map of all the property data types and their suffix representations in
|NULL | ".NULL" | NULL|
|String| ".string" | "ABC"|
|Timestamp | ".timestamp" | Timestamp(0, 0)|
-|DateTime |".date" | ISODate("2020-08-21T07:43:07.375Z")|
|ObjectId |".objectId" | ObjectId("5f3f7b59330ec25c132623a2")|
|Document |".object" | {"a": "a"}|
cosmos-db Integrations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/integrations-overview.md
Samples to get started:
* [Quickstart: ToDo Application with a Node.js API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/azure-samples/todo-nodejs-mongo) to get started. \ This sample includes everything you need to build, deploy, and monitor an Azure solution using React.js for the Web application, Node.js for the API, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging.
-* [Quickstart: ToDo Application with a C# API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-csharp-mongo) \
+* [Quickstart: ToDo Application with a C# API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-csharp-cosmos-sql) \
This sample demonstrates how to build an Azure solution using C#, Azure Cosmos DB API for MongoDB for storage, and Azure Monitor for monitoring and logging. * [Quickstart: ToDo Application with a Python API and Azure Cosmos DB API for MongoDB on Azure App Service](https://github.com/Azure-Samples/todo-python-mongo) \
Azure AD managed identities eliminate the need for developers to manage credenti
Learn about other key integrations:

* [Monitor Azure Cosmos DB with Azure Monitor.](/azure/cosmos-db/monitor-cosmos-db?tabs=azure-diagnostics.md)
-* [Set up analytics with Azure Synapse Link.](/azure/cosmos-db/configure-synapse-link.md)
+* [Set up analytics with Azure Synapse Link.](/azure/cosmos-db/configure-synapse-link)
cosmos-db How To Dotnet Query Items https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/how-to-dotnet-query-items.md
The [Container.GetItemLinqQueryable<>](/dotnet/api/microsoft.azure.cosmos.contai
Now that you've queried multiple items, try one of our end-to-end tutorials with the SQL API.

> [!div class="nextstepaction"]
-> [Build a .NET console app in Azure Cosmos DB SQL API](sql-api-get-started.md)
+> [Build an app that queries and adds data to Azure Cosmos DB SQL API](/learn/modules/build-dotnet-app-cosmos-db-sql-api/)
cosmos-db Sql Api Java Sdk Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/sql-api-java-sdk-samples.md
where *sample.synchronicity.MainClass* can be
...etc...

> [!NOTE]
-> Each sample is self-contained; it sets itself up and cleans up after itself. The samples issue multiple calls to create a `CosmosContainer`. Each time this is done, your subscription is billed for 1 hour of usage for the performance tier of the collection created.
+> Each sample is self-contained; it sets itself up and cleans up after itself. The samples issue multiple calls to create a `CosmosContainer` or `CosmosAsyncContainer`. Each time this is done, your subscription is billed for 1 hour of usage for the performance tier of the collection created.
>
>

## Database examples
-The [Database CRUD Samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) conceptual article.
+The Database CRUD Sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) conceptual article.
| Task | API reference |
| | |
-| [Create a database](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L77-L85) | CosmosClient.createDatabaseIfNotExists |
-| [Read a database by ID](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L88-L95) | CosmosClient.getDatabase |
-| [Read all the databases](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L98-L112) | CosmosClient.readAllDatabases |
-| [Delete a database](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L115-L123) | CosmosDatabase.delete |
+| Create a database | [CosmosClient.createDatabaseIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L76-L84) <br> [CosmosAsyncClient.createDatabaseIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L80-L89) |
+| Read a database by ID | [CosmosClient.getDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L87-L94) <br> [CosmosAsyncClient.getDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L92-L99) |
+| Read all the databases | [CosmosClient.readAllDatabases](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L97-L111) <br> [CosmosAsyncClient.readAllDatabases](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L102-L124) |
+| Delete a database | [CosmosDatabase.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/databasecrud/sync/DatabaseCRUDQuickstart.java#L114-L122) <br> [CosmosAsyncDatabase.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/databasecrud/async/DatabaseCRUDQuickstartAsync.java#L127-L135) |
## Collection examples
-The [Collection CRUD Samples](https://github.com/Azure/azure-documentdb-jav) conceptual article.
+The Collection CRUD Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) conceptual article.
| Task | API reference |
| | |
-| [Create a collection](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L97-L112) | CosmosDatabase.createContainerIfNotExists |
-| [Change configured performance of a collection](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L115-L123) | CosmosContainer.replaceProvisionedThroughput |
-| [Get a collection by ID](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L126-L133) | CosmosDatabase.getContainer |
-| [Read all the collections in a database](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L136-L150) | CosmosDatabase.readAllContainers |
-| [Delete a collection](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L153-L161) | CosmosContainer.delete |
+| Create a collection | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L92-L107) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L96-L111) |
+| Change configured performance of a collection | [CosmosContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L110-L118) <br> [CosmosAsyncContainer.replaceProvisionedThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L114-L122) |
+| Get a collection by ID | [CosmosDatabase.getContainer](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L121-L128) <br> [CosmosAsyncDatabase.getContainer](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L125-L132) |
+| Read all the collections in a database | [CosmosDatabase.readAllContainers](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L131-L145) <br> [CosmosAsyncDatabase.readAllContainers](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L135-L158) |
+| Delete a collection | [CosmosContainer.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/containercrud/sync/ContainerCRUDQuickstart.java#L148-L156) <br> [CosmosAsyncContainer.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/containercrud/async/ContainerCRUDQuickstartAsync.java#L161-L169) |
## Autoscale collection examples

To learn more about autoscale before running these samples, take a look at these instructions for enabling autoscale in your [account](https://azure.microsoft.com/resources/templates/cosmosdb-sql-autoscale/) and in your [databases and containers](../provision-throughput-autoscale.md).
-The [autoscale Database CRUD Samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java) file shows how to perform the following tasks.
+The autoscale database sample files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/async/AutoscaleDatabaseCRUDQuickstartAsync.java) show how to perform the following task.
| Task | API reference |
| | |
-| [Create a database with specified autoscale max throughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java#L78-L89) | CosmosClient.createDatabase<br>ThroughputProperties.createAutoscaledThroughput |
+| Create a database with specified autoscale max throughput | [CosmosClient.createDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/sync/AutoscaleDatabaseCRUDQuickstart.java#L77-L88) <br> [CosmosAsyncClient.createDatabase](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscaledatabasecrud/async/AutoscaleDatabaseCRUDQuickstartAsync.java#L81-L94) |
-The [autoscale Collection CRUD Samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java) file shows how to perform the following tasks.
+
+The autoscale collection samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java) show how to perform the following tasks.
| Task | API reference |
| | |
-| [Create a collection with specified autoscale max throughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L97-L110) | CosmosDatabase.createContainerIfNotExists |
-| [Change configured autoscale max throughput of a collection](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L113-L120) | CosmosContainer.replaceThroughput |
-| [Read autoscale throughput configuration of a collection](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L122-L133) | CosmosContainer.readThroughput |
+| Create a collection with specified autoscale max throughput | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L97-L110) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L101-L114) |
+| Change configured autoscale max throughput of a collection | [CosmosContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L113-L120) <br> [CosmosAsyncContainer.replaceThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L117-L124) |
+| Read autoscale throughput configuration of a collection | [CosmosContainer.readThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/sync/AutoscaleContainerCRUDQuickstart.java#L122-L133) <br> [CosmosAsyncContainer.readThroughput](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/autoscalecontainercrud/async/AutoscaleContainerCRUDQuickstartAsync.java#L126-L137) |
## Analytical storage collection examples
-The [Analytical storage Collection CRUD Samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java) file shows how to perform the following tasks. To learn about the Azure Cosmos collections before running the following samples, read about Azure Cosmos DB Synapse and Analytical Store.
+The Analytical storage Collection CRUD Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java) and [async](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/async/AnalyticalContainerCRUDQuickstartAsync.java) show how to perform the following tasks. To learn about the Azure Cosmos collections before running the following samples, read about Azure Cosmos DB Synapse and Analytical Store.
-| Task | API reference |
-| | |
-| [Create a collection](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java#L93-L108) | CosmosDatabase.createContainerIfNotExists |
+| Task | API reference |
+| | |
+| Create a collection | [CosmosDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/sync/AnalyticalContainerCRUDQuickstart.java#L91-L106) <br> [CosmosAsyncDatabase.createContainerIfNotExists](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/analyticalcontainercrud/async/AnalyticalContainerCRUDQuickstartAsync.java#L91-L106) |
## Document examples
-The [Document CRUD Samples](https://github.com/Azure/azure-documentdb-jav) conceptual article.
+The Document CRUD Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) conceptual article.
| Task | API reference | | | |
-| [Create a document](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L133-L147) | CosmosContainer.createItem |
-| [Read a document by ID](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L179-L193) | CosmosContainer.readItem |
-| [Query for documents](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L162-L176) | CosmosContainer.queryItems |
-| [Replace a document](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L195-L210) | CosmosContainer.replaceItem |
-| [Upsert a document](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L212-L2225) | CosmosContainer.upsertItem |
-| [Delete a document](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L303-L310) | CosmosContainer.deleteItem |
-| [Replace a document with conditional ETag check](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L227-L264) | AccessCondition.setType<br>AccessCondition.setCondition |
-| [Read document only if document has changed](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L266-L300) | AccessCondition.setType<br>AccessCondition.setCondition |
-| [Partial document update](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/patch/sync/SamplePatchQuickstart.java) | CosmosContainer.patchItem |
+| Create a document | [CosmosContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L132-L146) <br> [CosmosAsyncContainer.createItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L188-L212) |
+| Read a document by ID | [CosmosContainer.readItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L177-L192) <br> [CosmosAsyncContainer.readItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L318-L340) |
+| Query for documents | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L161-L175) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L270-L287) |
+| Replace a document | [CosmosContainer.replaceItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L177-L192) <br> [CosmosAsyncContainer.replaceItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L318-L340) |
+| Upsert a document | [CosmosContainer.upsertItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L194-L207) <br> [CosmosAsyncContainer.upsertItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L342-L364) |
+| Delete a document | [CosmosContainer.deleteItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L285-L292) <br> [CosmosAsyncContainer.deleteItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L494-L510) |
+| Replace a document with conditional ETag check | [CosmosItemRequestOptions.setIfMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L209-L246) (sync) <br>[CosmosItemRequestOptions.setIfMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L366-L418) (async) |
+| Read document only if document has changed | [CosmosItemRequestOptions.setIfNoneMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/documentcrud/sync/DocumentCRUDQuickstart.java#L248-L282) (sync) <br>[CosmosItemRequestOptions.setIfNoneMatchETag](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/documentcrud/async/DocumentCRUDQuickstartAsync.java#L420-L491) (async)|
+| Partial document update | [CosmosContainer.patchItem](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/patch/sync/SamplePatchQuickstart.java) |
+| Bulk document update | [Bulk samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/bulk/async/SampleBulkQuickStartAsync.java) |
+| Transactional batch | [Batch samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/batch/async/SampleBatchQuickStartAsync.java) |
## Indexing examples

The [Collection CRUD Samples](https://github.com/Azure/azure-documentdb-jav#include-exclude-paths) conceptual articles.

| Task | API reference |
| | |
-| Exclude a document from the index | ExcludedIndex<br>IndexingPolicy |
-| Use Lazy Indexing | IndexingPolicy.IndexingMode |
-| [Include specified documents paths in the index](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L145-L148) | IndexingPolicy.IncludedPaths |
-| [Exclude specified documents paths from the index](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L150-L153) | IndexingPolicy.ExcludedPaths |
-| [Create a composite index](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L171-L186) | IndexingPolicy.setCompositeIndexes<br>CompositePath |
-| Force a range scan operation on a hash indexed path | FeedOptions.EnableScanInQuery |
-| Use range indexes on Strings | IndexingPolicy.IncludedPaths<br>RangeIndex |
-| Perform an index transform | - |
-| [Create a geospatial index](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L157-L166) | IndexingPolicy.setSpatialIndexes<br>SpatialSpec<br>SpatialType |
+| Include specified document paths in the index | [IndexingPolicy.IncludedPaths](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L143-L146) |
+| Exclude specified document paths from the index | [IndexingPolicy.ExcludedPaths](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L148-L151) |
+| Create a composite index | [IndexingPolicy.setCompositeIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L167-L184) <br> CompositePath |
+| Create a geospatial index | [IndexingPolicy.setSpatialIndexes](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/indexmanagement/sync/SampleIndexManagement.java#L153-L165) <br> SpatialSpec <br> SpatialType |
+<!-- | Exclude a document from the index | ExcludedIndex<br>IndexingPolicy | -->
+<!-- | Use Lazy Indexing | IndexingPolicy.IndexingMode | -->
+<!-- | Force a range scan operation on a hash indexed path | FeedOptions.EnableScanInQuery | -->
+<!-- | Use range indexes on Strings | IndexingPolicy.IncludedPaths<br>RangeIndex | -->
+<!-- | Perform an index transform | - | -->
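As a rough sketch of how these settings fit together (assuming a hypothetical `SampleDB` database and a new container partitioned on `/id`; this is not the exact code from `SampleIndexManagement.java`), a custom `IndexingPolicy` with included paths, excluded paths, and a composite index can be applied when the container is created:

```java
import java.util.Arrays;

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosDatabase;
import com.azure.cosmos.models.CompositePath;
import com.azure.cosmos.models.CompositePathSortOrder;
import com.azure.cosmos.models.CosmosContainerProperties;
import com.azure.cosmos.models.ExcludedPath;
import com.azure.cosmos.models.IncludedPath;
import com.azure.cosmos.models.IndexingPolicy;

public class IndexingPolicySketch {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/")   // hypothetical endpoint
                .key("<your-key>")                                             // hypothetical key
                .buildClient();
        CosmosDatabase database = client.getDatabase("SampleDB");

        // Container definition with a custom indexing policy.
        CosmosContainerProperties properties =
                new CosmosContainerProperties("SampleContainer", "/id");

        IndexingPolicy indexingPolicy = new IndexingPolicy();
        // Index everything under /lastName/*, exclude all other paths.
        indexingPolicy.setIncludedPaths(Arrays.asList(new IncludedPath("/lastName/*")));
        indexingPolicy.setExcludedPaths(Arrays.asList(new ExcludedPath("/*")));

        // Composite index on (/city ASC, /zipcode DESC) to support multi-property ORDER BY.
        CompositePath city = new CompositePath();
        city.setPath("/city");
        city.setOrder(CompositePathSortOrder.ASCENDING);
        CompositePath zipcode = new CompositePath();
        zipcode.setPath("/zipcode");
        zipcode.setOrder(CompositePathSortOrder.DESCENDING);
        indexingPolicy.setCompositeIndexes(Arrays.asList(Arrays.asList(city, zipcode)));

        properties.setIndexingPolicy(indexingPolicy);
        database.createContainerIfNotExists(properties);

        client.close();
    }
}
```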
+ For more information about indexing, see [Azure Cosmos DB indexing policies](../index-policy.md). ## Query examples
-The [Query Samples](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav).
+The Query Samples files for [sync](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav).
| Task | API reference | | | |
-| [Query for all documents](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L210-L214) | CosmosContainer.queryItems |
-| [Query for equality using ==](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L291-L295) | CosmosContainer.queryItems |
-| [Query for inequality using != and NOT](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L297-L305) | CosmosContainer.queryItems |
-| [Query using range operators like >, <, >=, <=](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L307-L312) | CosmosContainer.queryItems |
-| [Query using range operators against strings](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L314-L319) | CosmosContainer.queryItems |
-| [Query with ORDER BY](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L321-L326) | CosmosContainer.queryItems |
-| [Query with DISTINCT](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L328-L333) | CosmosContainer.queryItems |
-| [Query with aggregate functions](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L335-L343) | CosmosContainer.queryItems |
-| [Work with subdocuments](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L345-L353) | CosmosContainer.queryItems |
-| [Query with intra-document Joins](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L355-L377) | CosmosContainer.queryItems |
-| [Query with string, math, and array operators](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L379-L390) | CosmosContainer.queryItems |
-| [Query with parameterized SQL using SqlQuerySpec](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L392-L421) |CosmosContainer.queryItems |
-| [Query with explicit paging](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L216-L266) | CosmosContainer.queryItems |
-| [Query partitioned collections in parallel](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L268-L289) | CosmosContainer.queryItems |
-| Query with ORDER BY for partitioned collections | CosmosContainer.queryItems |
+| Query for all documents | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L204-L208) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L244-L247)|
+| Query for equality using == | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L286-L290) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L325-L329)|
+| Query for inequality using != and NOT | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L292-L300) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L331-L339)|
+| Query using range operators like >, <, >=, <= | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L302-L307) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L341-L346)|
+| Query using range operators against strings | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L309-L314) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L348-L353)|
+| Query with ORDER BY | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L316-L321) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L355-L360)|
+| Query with DISTINCT | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L323-L328) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L362-L367)|
+| Query with aggregate functions | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L330-L338) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L369-L377)|
+| Work with subdocuments | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L340-L348) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L379-L387)|
+| Query with intra-document Joins | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L350-L372) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L389-L411)|
+| Query with string, math, and array operators | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L374-L385) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L413-L424)|
+| Query with parameterized SQL using SqlQuerySpec | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L387-L416) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L426-L455)|
+| Query with explicit paging | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L211-L261) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L250-L300)|
+| Query partitioned collections in parallel | [CosmosContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/queries/sync/QueriesQuickstart.java#L263-L284) <br> [CosmosAsyncContainer.queryItems](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/queries/async/QueriesQuickstartAsync.java#L302-L323)|
+<!-- | Query with ORDER BY for partitioned collections | CosmosContainer.queryItems <br> CosmosAsyncContainer.queryItems | -->
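All of these tasks go through `queryItems`; the main variation is the SQL text and the request options. The following minimal sketch (not taken from the linked samples; the endpoint, key, and `SampleDB`/`SampleContainer` names are hypothetical) shows a parameterized query with `SqlQuerySpec`, which keeps user-supplied values out of the query text:

```java
import java.util.Arrays;

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosQueryRequestOptions;
import com.azure.cosmos.models.SqlParameter;
import com.azure.cosmos.models.SqlQuerySpec;
import com.azure.cosmos.util.CosmosPagedIterable;
import com.fasterxml.jackson.databind.JsonNode;

public class ParameterizedQuerySketch {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/")   // hypothetical endpoint
                .key("<your-key>")                                             // hypothetical key
                .buildClient();
        CosmosContainer container = client.getDatabase("SampleDB").getContainer("SampleContainer");

        // Bind @lastName as a parameter instead of concatenating it into the SQL string.
        SqlQuerySpec querySpec = new SqlQuerySpec(
                "SELECT * FROM c WHERE c.lastName = @lastName",
                Arrays.asList(new SqlParameter("@lastName", "Andersen")));

        CosmosPagedIterable<JsonNode> results =
                container.queryItems(querySpec, new CosmosQueryRequestOptions(), JsonNode.class);
        results.forEach(item -> System.out.println(item.toString()));

        client.close();
    }
}
```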
## Change feed examples
-The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav).
+The [Change Feed Processor Sample](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/jav) and the [Change feed processor](https://docs.microsoft.com/azure/cosmos-db/sql/change-feed-processor?tabs=java) article.
| Task | API reference | | | |
-| [Basic change feed functionality](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L124-L154) |ChangeFeedProcessor.changeFeedProcessorBuilder |
-| Read change feed from a specific time | ChangeFeedProcessor.changeFeedProcessorBuilder |
-| [Read change feed from the beginning](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L124-L154) | - |
+| Basic change feed functionality | [ChangeFeedProcessor.changeFeedProcessorBuilder](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L141-L172) |
+| Read change feed from the beginning | [ChangeFeedProcessorOptions.setStartFromBeginning()](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/0ead4ca33dac72c223285e1db866c9dc06f5fb47/src/main/java/com/azure/cosmos/examples/changefeed/SampleChangeFeedProcessor.java#L65) |
+<!-- | Read change feed from a specific time | ChangeFeedProcessor.changeFeedProcessorBuilder | -->
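To show how the pieces named in the table relate, here is a minimal sketch of a change feed processor that starts from the beginning of the container's history. It is not the sample code itself; the endpoint, key, `SampleDB`, `SampleContainer`, the pre-existing `leases` container, and the host name are hypothetical placeholders.

```java
import java.util.List;

import com.azure.cosmos.ChangeFeedProcessor;
import com.azure.cosmos.ChangeFeedProcessorBuilder;
import com.azure.cosmos.CosmosAsyncClient;
import com.azure.cosmos.CosmosAsyncContainer;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.models.ChangeFeedProcessorOptions;
import com.fasterxml.jackson.databind.JsonNode;

public class ChangeFeedSketch {
    public static void main(String[] args) throws InterruptedException {
        CosmosAsyncClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/")   // hypothetical endpoint
                .key("<your-key>")                                             // hypothetical key
                .buildAsyncClient();

        CosmosAsyncContainer feedContainer = client.getDatabase("SampleDB").getContainer("SampleContainer");
        // Assumes a lease container named "leases" (partitioned on /id) already exists.
        CosmosAsyncContainer leaseContainer = client.getDatabase("SampleDB").getContainer("leases");

        // Start reading the change feed from the beginning of the container's history.
        ChangeFeedProcessorOptions options = new ChangeFeedProcessorOptions();
        options.setStartFromBeginning(true);

        ChangeFeedProcessor processor = new ChangeFeedProcessorBuilder()
                .hostName("sample-host")
                .feedContainer(feedContainer)
                .leaseContainer(leaseContainer)
                .options(options)
                .handleChanges((List<JsonNode> docs) -> {
                    for (JsonNode doc : docs) {
                        System.out.println("Change detected: " + doc.toString());
                    }
                })
                .buildChangeFeedProcessor();

        processor.start().subscribe();
        Thread.sleep(10_000);            // let the processor run briefly for this sketch
        processor.stop().subscribe();
        client.close();
    }
}
```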
## Server-side programming examples
The [Stored Procedure Sample](https://github.com/Azure-Samples/azure-cosmos-java
| Task | API reference | | | |
-| [Create a stored procedure](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L132-L151) | CosmosScripts.createStoredProcedure |
-| [Execute a stored procedure](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L167-L181) | CosmosStoredProcedure.execute |
-| [Delete a stored procedure](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L183-L193) | CosmosStoredProcedure.delete |
+| Create a stored procedure | [CosmosScripts.createStoredProcedure](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L134-L153) |
+| Execute a stored procedure | [CosmosStoredProcedure.execute](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L213-L227) |
+| Delete a stored procedure | [CosmosStoredProcedure.delete](https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/blob/main/src/main/java/com/azure/cosmos/examples/storedprocedure/sync/SampleStoredProcedure.java#L254-L264) |
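The following minimal sketch strings the three operations together: create a stored procedure, execute it against one logical partition, and delete it. It is not the linked sample itself; the endpoint, key, `SampleDB`/`SampleContainer` names, the partition key value, and the trivial JavaScript body are hypothetical.

```java
import java.util.Arrays;

import com.azure.cosmos.CosmosClient;
import com.azure.cosmos.CosmosClientBuilder;
import com.azure.cosmos.CosmosContainer;
import com.azure.cosmos.models.CosmosStoredProcedureProperties;
import com.azure.cosmos.models.CosmosStoredProcedureRequestOptions;
import com.azure.cosmos.models.CosmosStoredProcedureResponse;
import com.azure.cosmos.models.PartitionKey;

public class StoredProcedureSketch {
    public static void main(String[] args) {
        CosmosClient client = new CosmosClientBuilder()
                .endpoint("https://<your-account>.documents.azure.com:443/")   // hypothetical endpoint
                .key("<your-key>")                                             // hypothetical key
                .buildClient();
        CosmosContainer container = client.getDatabase("SampleDB").getContainer("SampleContainer");

        // A trivial server-side JavaScript body that echoes a greeting.
        String body = "function helloWorld() {"
                + "  var context = getContext();"
                + "  context.getResponse().setBody('Hello, World');"
                + "}";
        container.getScripts().createStoredProcedure(
                new CosmosStoredProcedureProperties("helloWorldSproc", body));

        // Stored procedures run inside a single logical partition, so a partition key is required.
        CosmosStoredProcedureRequestOptions options = new CosmosStoredProcedureRequestOptions();
        options.setPartitionKey(new PartitionKey("AndersenFamily"));   // hypothetical partition key value
        CosmosStoredProcedureResponse response = container.getScripts()
                .getStoredProcedure("helloWorldSproc")
                .execute(Arrays.asList(), options);
        System.out.println("Stored procedure returned: " + response.getResponseAsString());

        // Clean up.
        container.getScripts().getStoredProcedure("helloWorldSproc").delete();
        client.close();
    }
}
```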
-## User management examples
+<!-- ## User management examples
The User Management Sample file shows how to do the following tasks: | Task | API reference | | | | | Create a user | - | | Set permissions on a collection or document | - |
-| Get a list of a user's permissions |- |
+| Get a list of a user's permissions |- | -->
## Next steps
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
na Previously updated : 06/07/2022 Last updated : 08/17/2022
-# Azure DDoS Protection Standard overview
+# What is Azure DDoS Protection Standard?
Distributed denial of service (DDoS) attacks are some of the largest availability and security concerns facing customers that are moving their applications to the cloud. A DDoS attack attempts to exhaust an application's resources, making the application unavailable to legitimate users. DDoS attacks can be targeted at any endpoint that is publicly reachable through the internet.
-Azure DDoS Protection Standard, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It is automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes.
+Azure DDoS Protection Standard, combined with application design best practices, provides enhanced DDoS mitigation features to defend against DDoS attacks. It's automatically tuned to help protect your specific Azure resources in a virtual network. Protection is simple to enable on any new or existing virtual network, and it requires no application or resource changes.
+
+## Key benefits
+
+### Always-on traffic monitoring
+ Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. DDoS Protection Standard instantly and automatically mitigates the attack, once it's detected.
+
+### Adaptive real-time tuning
+ Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time.
+
+### DDoS Protection telemetry, monitoring, and alerting
+DDoS Protection Standard applies three auto-tuned mitigation policies (TCP SYN, TCP, and UDP) for each public IP of the protected resource, in the virtual network that has DDoS enabled. The policy thresholds are auto-configured via machine learning-based network traffic profiling. DDoS mitigation occurs for an IP address under attack only when the policy threshold is exceeded.
+
+### Azure DDoS Rapid Response
+ During an active attack, Azure DDoS Protection Standard customers have access to the DDoS Rapid Response (DRR) team, who can help with attack investigation and post-attack analysis. For more information, see [Azure DDoS Rapid Response](ddos-rapid-response.md).
+
+## SKUs
+
+Azure DDoS Protection has two available SKUs: DDoS Protection Basic and DDoS Protection Standard. For more information about configuring DDoS Protection Standard, see [Quickstart: Create and configure Azure DDoS Protection Standard](manage-ddos-protection.md).
+
+The following table shows features and corresponding SKUs.
+
+| Feature | DDoS Protection Basic | DDoS Protection Standard |
+||||
+| Active traffic monitoring & always-on detection| Yes | Yes|
+| Automatic attack mitigation | Yes | Yes |
+| Availability guarantee| Not available | Yes |
+| Application based mitigation policies | Not available | Yes|
+| Metrics & alerts | Not available | Yes |
+| Mitigation reports | Not available | Yes |
+| Mitigation flow logs| Not available | Yes|
+| Mitigation policy customizations | Not available | Yes|
+| DDoS rapid response support | Not available| Yes|
## Features -- **Native platform integration:** Natively integrated into Azure. Includes configuration through the Azure portal. DDoS Protection Standard understands your resources and resource configuration.-- **Turnkey protection:** Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required. -- **Always-on traffic monitoring:** Your application traffic patterns are monitored 24 hours a day, 7 days a week, looking for indicators of DDoS attacks. DDoS Protection Standard instantly and automatically mitigates the attack, once it is detected.-- **Adaptive tuning:** Intelligent traffic profiling learns your application's traffic over time, and selects and updates the profile that is the most suitable for your service. The profile adjusts as traffic changes over time.-- **Multi-Layered protection:** When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) as well as third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).-- **Extensive mitigation scale:** all L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks.-- **Attack analytics:** Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack.-- **Attack metrics:** Summarized metrics from each attack are accessible through Azure Monitor.-- **Attack alerting:** Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal.-- **DDoS Rapid Response**: Engage the DDoS Protection Rapid Response (DRR) team for help with attack investigation and analysis. To learn more, see [DDoS Rapid Response](ddos-rapid-response.md).-- **Cost guarantee:** Receive data-transfer and application scale-out service credit for resource costs incurred as a result of documented DDoS attacks.
+### Native platform integration
+ Natively integrated into Azure. Includes configuration through the Azure portal. DDoS Protection Standard understands your resources and resource configuration.
+### Turnkey protection
+Simplified configuration immediately protects all resources on a virtual network as soon as DDoS Protection Standard is enabled. No intervention or user definition is required.
+
+### Multi-layered protection
+When deployed with a web application firewall (WAF), DDoS Protection Standard protects both at the network layer (Layer 3 and 4, offered by Azure DDoS Protection Standard) and at the application layer (Layer 7, offered by a WAF). WAF offerings include Azure [Application Gateway WAF SKU](../web-application-firewall/ag/ag-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json) and third-party web application firewall offerings available in the [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=web%20application%20firewall).
+
+### Extensive mitigation scale
+ All L3/L4 attack vectors can be mitigated, with global capacity, to protect against the largest known DDoS attacks.
+### Attack analytics
+Get detailed reports in five-minute increments during an attack, and a complete summary after the attack ends. Stream mitigation flow logs to [Microsoft Sentinel](../sentinel/data-connectors-reference.md#azure-ddos-protection) or an offline security information and event management (SIEM) system for near real-time monitoring during an attack. See [View and configure DDoS diagnostic logging](diagnostic-logging.md) to learn more.
+
+### Attack metrics
+ Summarized metrics from each attack are accessible through Azure Monitor. See [View and configure DDoS protection telemetry](telemetry.md) to learn more.
+
+### Attack alerting
+ Alerts can be configured at the start and stop of an attack, and over the attack's duration, using built-in attack metrics. Alerts integrate into your operational software like Microsoft Azure Monitor logs, Splunk, Azure Storage, Email, and the Azure portal. See [View and configure DDoS protection alerts](alerts.md) to learn more.
+
+### Cost guarantee
+ Receive data-transfer and application scale-out service credit for resource costs incurred as a result of documented DDoS attacks.
+++
+## Architecture
+DDoS Protection Standard is designed for [services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). For other services, the default infrastructure-level DDoS protection applies, which defends against common network-layer attacks. To learn more about supported architectures, see [DDoS Protection reference architectures](./ddos-protection-reference-architectures.md).
## Pricing
-Under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there is no need to create more than one DDoS protection plan.
+Under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there's no need to create more than one DDoS protection plan.
To learn about Azure DDoS Protection Standard pricing, see [Azure DDoS Protection Standard pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
-## Reference architectures
+## DDoS Protection FAQ
-DDoS Protection Standard is designed for [services that are deployed in a virtual network](../virtual-network/virtual-network-for-azure-services.md). For other services, the default infrastructure-level DDoS protection applies, which defends against common network-layer attacks. To learn more about supported architectures, see [DDoS Protection reference architectures](./ddos-protection-reference-architectures.md).
+For frequently asked questions, see the [DDoS Protection FAQ](ddos-faq.yml).
## Next steps
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Deprecating three VM alerts](#deprecating-three-vm-alerts) | June 2022|
| [Change in pricing of Runtime protection for Arc-enabled Kubernetes clusters](#change-in-pricing-of-runtime-protection-for-arc-enabled-kubernetes-clusters) | August 2022 | | [Multiple changes to identity recommendations](#multiple-changes-to-identity-recommendations) | September 2022 | | [Removing security alerts for machines reporting to cross tenant Log Analytics workspaces](#removing-security-alerts-for-machines-reporting-to-cross-tenant-log-analytics-workspaces) | September 2022 | | [Legacy Assessments APIs deprecation](#legacy-assessments-apis-deprecation) | September 2022 |
-### Deprecating three VM alerts
-
-**Estimated date for change:** June 2022
-
-The following table lists the alerts that will be deprecated during June 2022.
-
-| Alert name | Description | Tactics | Severity |
-|--|--|--|--|
-| **Docker build operation detected on a Kubernetes node** <br>(VM_ImageBuildOnNode) | Machine logs indicate a build operation of a container image on a Kubernetes node. While this behavior might be legitimate, attackers might build their malicious images locally to avoid detection. | Defense Evasion | Low |
-| **Suspicious request to Kubernetes API** <br>(VM_KubernetesAPI) | Machine logs indicate that a suspicious request was made to the Kubernetes API. The request was sent from a Kubernetes node, possibly from one of the containers running in the node. Although this behavior can be intentional, it might indicate that the node is running a compromised container. | LateralMovement | Medium |
-| **SSH server is running inside a container** <br>(VM_ContainerSSH) | Machine logs indicate that an SSH server is running inside a Docker container. While this behavior can be intentional, it frequently indicates that a container is misconfigured or breached. | Execution | Medium |
-
-These alerts are used to notify a user about suspicious activity connected to a Kubernetes cluster. The alerts will be replaced with matching alerts that are part of the Microsoft Defender for Cloud Container alerts (`K8S.NODE_ImageBuildOnNode`, `K8S.NODE_ KubernetesAPI` and `K8S.NODE_ ContainerSSH`) which will provide improved fidelity and comprehensive context to investigate and act on the alerts. Learn more about alerts for [Kubernetes Clusters](alerts-reference.md).
- ### Change in pricing of runtime protection for Arc-enabled Kubernetes clusters **Estimated date for change:** August 2022
-Runtime protection is currently a preview feature for Arc-enabled Kubernetes clusters. In August, Arc-enabled Kubernetes clusters will be charged for runtime protection. You can view pricing details on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). Subscriptions with Kubernetes clusters already onboarded to Arc, will begin to incur charges in August.
+Runtime protection is currently a preview feature for Arc-enabled Kubernetes clusters. In August, Arc-enabled Kubernetes clusters will be charged for runtime protection. You can view pricing details on the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/). Subscriptions with Kubernetes clusters already onboarded to Arc will begin to incur charges in August.
### Multiple changes to identity recommendations
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
Network connections determine the region into which dev boxes are deployed and a
To perform the steps in this section, you must have an existing virtual network (vnet) and subnet. If you don't have a vnet and subnet available, follow the instructions here: [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md) to create them.
+If your organization routes egress traffic through a firewall, you need to open certain ports to allow the Dev Box service to function. For more information, see [Network requirements](/windows-365/enterprise/requirements-network).
+ 1. Sign in to the [Azure portal](https://portal.azure.com). 1. In the search box, type *Network connections* and then select **Network connections** from the list.
event-grid Publish Iot Hub Events To Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-iot-hub-events-to-logic-apps.md
Azure Event Grid enables you to react to events in IoT Hub by triggering actions in your downstream business applications.
-This article walks through a sample configuration that uses IoT Hub and Event Grid. At the end, you have an Azure logic app set up to send a notification email every time a device connects or disconnects to your IoT hub. Event Grid can be used to get timely notification about critical devices disconnecting. Metrics and Diagnostics can take several (i.e. 20 or more -- though we don't want to put a number on it) minutes to show up in logs/alerts. That might be unacceptable for critical infrastructure.
+This article walks through a sample configuration that uses IoT Hub and Event Grid. At the end, you have an Azure logic app set up to send a notification email every time a device connects or disconnects to your IoT hub. Event Grid can be used to get timely notification about critical devices disconnecting. Metrics and Diagnostics can take several minutes (such as 20 minutes or more) to show up in logs / alerts. Longer processing times might be unacceptable for critical infrastructure.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
You can quickly create a new IoT hub using the Azure Cloud Shell terminal in the
1. On the upper right of the page, select the Cloud Shell button.
- ![Cloud Shell button](./media/publish-iot-hub-events-to-logic-apps/portal-cloud-shell.png)
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/portal-cloud-shell.png" alt-text="Screenshot of how to open the Azure Cloud Shell from the Azure portal." lightbox="./media/publish-iot-hub-events-to-logic-apps/portal-cloud-shell.png":::
1. Run the following command to create a new resource group:
You can quickly create a new IoT hub using the Azure Cloud Shell terminal in the
az iot hub create --name {your iot hub name} --resource-group {your resource group name} --sku S1 ```
-1. Minimize the Cloud Shell terminal. You will return to the shell later in the tutorial.
+1. Minimize the Cloud Shell terminal. You'll return to the shell later in the tutorial.
## Create a logic app
-Next, create a logic app and add an HTTP event grid trigger that processes requests from IoT hub.
+Next, create a logic app and add an HTTP Event Grid trigger that processes requests from IoT hub.
### Create a logic app resource
A trigger is a specific event that starts your logic app. For this tutorial, the
"metadataVersion": "1" }] ```
+
+ > [!IMPORTANT]
+ > Be sure to paste the JSON snippet into the box provided by the **Use sample payload to generate schema** link and not directly into the **Request Body JSON Schema** box. The sample payload link provides a way to generate the JSON content based on the JSON snippet. The final JSON that ends up in the request body is different from the JSON snippet.
This event publishes when a device is connected to an IoT hub.
A trigger is a specific event that starts your logic app. For this tutorial, the
Actions are any steps that occur after the trigger starts the logic app workflow. For this tutorial, the action is to send an email notification from your email provider.
-1. Select **New step**. This opens a window to **Choose an action**.
+1. Select **New step**. A window appears, prompting you to **Choose an action**.
1. Search for **Outlook**.
-1. Based on your email provider, find and select the matching connector. This tutorial uses **Outlook.com**. The steps for other email providers are similar.
+1. Based on your email provider, find and select the matching connector. This tutorial uses **Outlook.com**. The steps for other email providers are similar. Alternatively, use Office 365 Outlook to skip the sign-in step.
![Select email provider connector](./media/publish-iot-hub-events-to-logic-apps/outlook-step.png)
Actions are any steps that occur after the trigger starts the logic app workflow
* **To**: Enter the email address to receive the notification emails. For this tutorial, use an email account that you can access for testing.
- * **Subject**: Fill in the text for the subject. When you click on the Subject text box, you can select dynamic content to include. For example, this tutorial uses `IoT Hub alert: {eventType}`. If you can't see Dynamic content, select the **Add dynamic content** hyperlink -- this toggles it on and off.
+ * **Subject**: Fill in the text for the subject. When you click on the Subject text box, you can select dynamic content to include. For example, this tutorial uses `IoT Hub alert: {eventType}`. If you can't see **Dynamic content**, select the **Add dynamic content** hyperlink to toggle the **Dynamic content** view on or off.
+
+ After selecting `eventType`, you'll see the email form output so far. Select the **Send an email (V2)** action to edit the body of your email.
+
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/send-email.png" alt-text="Screenshot of the condensed body output form." lightbox="./media/publish-iot-hub-events-to-logic-apps/send-email.png":::
* **Body**: Write the text for your email. Select JSON properties from the selector tool to include dynamic content based on event data. If you can't see the Dynamic content, select the **Add dynamic content** hyperlink under the **Body** text box. If it doesn't show you the fields you want, click *more* in the Dynamic content screen to include the fields from the previous action.
Actions are any steps that occur after the trigger starts the logic app workflow
### Copy the HTTP URL
-Before you leave the Logic Apps Designer, copy the URL that your logic apps is listening to for a trigger. You use this URL to configure Event Grid.
+Before you leave the Logic Apps Designer, copy the URL that your logic app is listening to for a trigger. You use this URL to configure Event Grid.
1. Expand the **When a HTTP request is received** trigger configuration box by clicking on it.
Before you leave the Logic Apps Designer, copy the URL that your logic apps is l
In this section, you configure your IoT Hub to publish events as they occur.
-1. In the Azure portal, navigate to your IoT hub. You can do this by selecting **Resource groups**, then select the resource group for this tutorial, and then select your IoT hub from the list of resources.
+1. In the Azure portal, navigate to your IoT hub. You can find your IoT hub by selecting **IoT Hub** from your Azure dashboard and then selecting your IoT hub instance from the list of resources.
1. Select **Events**.
In this section, you configure your IoT Hub to publish events as they occur.
Test your logic app by quickly simulating a device connection using the Azure CLI.
-1. Select the Cloud Shell button to re-open your terminal.
+1. Select the Cloud Shell button to reopen your terminal.
1. Run the following command to create a simulated device identity:
Test your logic app by quickly simulating a device connection using the Azure CL
az iot hub device-identity create --device-id simDevice --hub-name {YourIoTHubName} ```
- This could take a minute. You'll see a `json` printout once it's created.
+ The processing could take a minute. You'll see a JSON printout in your console once it's created.
1. Run the following command to simulate connecting your device to IoT Hub and sending telemetry:
Test your logic app by quickly simulating a device connection using the Azure CL
az iot device simulate -d simDevice -n {YourIoTHubName} ```
-1. When the simulated device connects to IoT Hub, you will receive an email notifying you of a "DeviceConnected" event.
+1. When the simulated device connects to IoT Hub, you'll receive an email notifying you of a "DeviceConnected" event.
-1. When the simulation completes, you will receive an email notifying you of a "DeviceDisconnected" event.
+1. When the simulation completes, you'll receive an email notifying you of a "DeviceDisconnected" event.
- ![Example alert mail](./media/publish-iot-hub-events-to-logic-apps/alert-mail.png)
+ :::image type="content" source="./media/publish-iot-hub-events-to-logic-apps/alert-mail.png" alt-text="Screenshot of the email you should receive." lightbox="./media/publish-iot-hub-events-to-logic-apps/alert-mail.png":::
## Clean up resources
To delete all of the resources created in this tutorial, delete the resource gro
1. Select **Resource groups**, then select the resource group you created for this tutorial.
-2. On the Resource group pane, select **Delete resource group**. You are prompted to enter the resource group name, and then you can delete it. All of the resources contained therein are also removed.
+2. On the Resource group pane, select **Delete resource group**. You're prompted to enter the resource group name, and then you can delete it. All of the resources contained therein are also removed.
## Next steps
To delete all of the resources created in this tutorial, delete the resource gro
For a complete list of supported Logic App connectors, see the > [!div class="nextstepaction"]
-> [Connectors overview](/connectors/).
+> [Connectors overview](/connectors/).
expressroute Expressroute Locations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md
The following table shows locations by service provider. If you want to view ava
| **[BBIX](https://www.bbix.net/en/service/ix/)** | Supported | Supported | Osaka, Tokyo, Tokyo2 | | **[BCX](https://www.bcx.co.za/solutions/connectivity/data-networks)** |Supported |Supported | Cape Town, Johannesburg| | **[Bell Canada](https://business.bell.ca/shop/enterprise/cloud-connect-access-to-cloud-partner-services)** |Supported |Supported | Montreal, Toronto, Quebec City, Vancouver |
-| **[BICS](https://bics.com/bics-solutions-suite/cloud-connect/bics-cloud-connect-an-official-microsoft-azure-technology-partner/)** | Supported | Supported | Amsterdam2, London2 |
+| **[BICS](https://www.bics.com/cloud-connect/)** | Supported | Supported | Amsterdam2, London2 |
| **[British Telecom](https://www.globalservices.bt.com/en/solutions/products/cloud-connect-azure)** |Supported |Supported | Amsterdam, Amsterdam2, Chicago, Frankfurt, Hong Kong SAR, Johannesburg, London, London2, Newport(Wales), Paris, Sao Paulo, Silicon Valley, Singapore, Sydney, Tokyo, Washington DC | | **BSNL** |Supported |Supported | Chennai, Mumbai | | **[C3ntro](https://www.c3ntro.com/)** |Supported |Supported | Miami |
If you are remote and do not have fiber connectivity or you want to explore othe
| **[Gulf Bridge International](https://gbiinc.com/)** | Equinix | Amsterdam | | **[HSO](https://www.hso.co.uk/products/cloud-direct)** |Equinix | London, Slough | | **[IVedha Inc](https://ivedha.com/cloud-services)**| Equinix | Toronto |
-| **[Kaalam Telecom Bahrain B.S.C](http://www.kalaam-telecom.com/azure/)**| Level 3 Communications |Amsterdam |
+| **[Kaalam Telecom Bahrain B.S.C](https://kalaam-telecom.com/)**| Level 3 Communications |Amsterdam |
| **LGA Telecom** |Equinix |Singapore| | **[Macroview Telecom](http://www.macroview.com/en/scripts/catitem.php?catid=solution&sectionid=expressroute)** |Equinix |Hong Kong SAR | **[Macquarie Telecom Group](https://macquariegovernment.com/secure-cloud/secure-cloud-exchange/)** | Megaport | Sydney |
firewall Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/overview.md
Azure Firewall Standard has the following known issues:
|Unable to see Network Rule Name in Azure Firewall Logs|Azure Firewall network rule log data does not show the Rule name for network traffic.|Network rule name logging is in preview. For more information, see [Azure Firewall preview features](firewall-preview.md#network-rule-name-logging-preview).| |XFF header in HTTP/S|XFF headers are overwritten with the original source IP address as seen by the firewall. This is applicable for the following use cases:<br>- HTTP requests<br>- HTTPS requests with TLS termination|A fix is being investigated.| |Can't upgrade to Premium with Availability Zones in the Southeast Asia region|You can't currently upgrade to Azure Firewall Premium with Availability Zones in the Southeast Asia region.|Deploy a new Premium firewall in Southeast Asia without Availability Zones, or deploy in a region that supports Availability Zones.|
-|Can't deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you can't use a newly created Public IP address.|First create a new zone redundant Public IP address, then assign this previously created IP address during the Firewall deployment.
+|Can't deploy Firewall with Availability Zones with a newly created Public IP address|When you deploy a Firewall with Availability Zones, you can't use a newly created Public IP address.|First create a new zone redundant Public IP address, then assign this previously created IP address during the Firewall deployment.|
+|Azure private DNS zone isn't supported with Azure Firewall|Azure private DNS zone won't work with Azure Firewall regardless of Azure Firewall DNS settings.|To achieve the desired state of using a private DNS server, use Azure Firewall DNS proxy instead of an Azure private DNS zone.|
### Azure Firewall Premium
frontdoor Front Door How To Redirect Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-how-to-redirect-https.md
You can use the Azure portal to [create a Front Door](quickstart-create-front-do
1. Select **Create a resource** found on the upper left-hand corner of the Azure portal.
-1. Search for **Front Door** using the search bar and once you find the resource type, select **Create**.
+1. Search for **Front Door and CDN profiles** using the search bar and once you find the resource type, select **Create**.
+
+1. Select **Explore other offerings**, then select **Azure Front Door (classic)**. Select **Continue** to begin configuring the profile.
+
+ :::image type="content" source="./media/front-door-url-redirect/compare-offerings.png" alt-text="Screenshot of the compare offerings page.":::
1. Choose a *subscription* and then either use an existing resource group or create a new one. Select **Next** to enter the configuration tab.
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) | |[Gateway subnets should not be configured with a network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35f9c03a-cc27-418e-9c0c-539ff999d010) |This policy denies if a gateway subnet is configured with a network security group. Assigning a network security group to a gateway subnet will cause the gateway to stop functioning. |deny |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroupOnGatewaySubnet_Deny.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[\[Preview\]: Container Registry should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4857be7-912a-4c75-87e6-e30292bcdf78) |This policy audits any Container Registry not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_ContainerRegistry_Audit.json) |
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Cosmos DB should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0a2b1a3-f7f9-4569-807f-2a9edebdf4d9) |This policy audits any Cosmos DB not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_CosmosDB_Audit.json) | |[Deploy network watcher when virtual networks are created](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa9b99dd8-06c5-4317-8629-9d86a3c6e7d9) |This policy creates a network watcher resource in regions with virtual networks. You need to ensure existence of a resource group named networkWatcherRG, which will be used to deploy network watcher instances. |DeployIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Deploy.json) | |[Event Hub should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd63edb4a-c612-454d-b47d-191a724fcbf0) |This policy audits any Event Hub not configured to use a virtual network service endpoint. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_EventHub_AuditIfNotExists.json) |
This built-in initiative is deployed as part of the
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||
-|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aks.ms/appservice-vnet-service-endpoint](https://aks.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
+|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Windows machines should meet requirements for 'Security Options - Network Access'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ff60f98-7fa4-410a-9f7f-0b00f5afdbdd) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Network Access' for including access for anonymous users, local accounts, and remote access to the registry. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](../../machine-configuration/overview.md). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsNetworkAccess_AINE.json) | ### The organization ensures information systems protect the confidentiality and integrity of transmitted information, including during preparation for transmission and during reception.
Additional articles about Azure Policy:
- See the [initiative definition structure](../concepts/initiative-definition-structure.md). - Review other examples at [Azure Policy samples](./index.md). - Review [Understanding policy effects](../concepts/effects.md).-- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance Guidance For Throttled Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/concepts/guidance-for-throttled-requests.md
Title: Guidance for throttled requests description: Learn to group, stagger, paginate, and query in parallel to avoid requests being throttled by Azure Resource Graph. Previously updated : 09/13/2021++ Last updated : 08/18/2022
looking for. However, some Azure Resource Graph clients handle pagination differ
```csharp var results = new List<object>(); var queryRequest = new QueryRequest(
- subscriptions: new[] { mySubscriptionId },
- query: "Resources | project id, name, type");
+ subscriptions: new[] { mySubscriptionId },
+ query: "Resources | project id, name, type");
var azureOperationResponse = await this.resourceGraphClient
- .ResourcesWithHttpMessagesAsync(queryRequest, header)
- .ConfigureAwait(false);
- while (!string.Empty(azureOperationResponse.Body.SkipToken))
+ .ResourcesWithHttpMessagesAsync(queryRequest, header)
+ .ConfigureAwait(false);
+ while (!string.IsNullOrEmpty(azureOperationResponse.Body.SkipToken))
{
- queryRequest.SkipToken = azureOperationResponse.Body.SkipToken;
- // Each post call to ResourceGraph consumes one query quota
- var azureOperationResponse = await this.resourceGraphClient
- .ResourcesWithHttpMessagesAsync(queryRequest, header)
- .ConfigureAwait(false);
- results.Add(azureOperationResponse.Body.Data.Rows);
+ queryRequest.Options ??= new QueryRequestOptions();
+ queryRequest.Options.SkipToken = azureOperationResponse.Body.SkipToken;
+ azureOperationResponse = await this.resourceGraphClient
+ .ResourcesWithHttpMessagesAsync(queryRequest, header)
+ .ConfigureAwait(false);
+ results.Add(azureOperationResponse.Body.Data.Rows);
- // Inspect throttling headers in query response and delay the next call if needed.
+ // Inspect throttling headers in query response and delay the next call if needed.
} ```
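The comment at the end of the loop leaves the throttling check to the reader. As a minimal sketch under stated assumptions (the helper name is hypothetical, and it assumes the operation response exposes the underlying `HttpResponseMessage`), you can read the `x-ms-user-quota-remaining` and `x-ms-user-quota-resets-after` headers described in the throttling guidance and pause before requesting the next page:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

static class ThrottlingHelper
{
    // Sketch only: wait out the quota window when the remaining-quota header reaches zero.
    // Header names follow the Azure Resource Graph throttling guidance; the
    // hh:mm:ss value of x-ms-user-quota-resets-after parses as a TimeSpan.
    public static async Task DelayIfThrottledAsync(HttpResponseMessage response)
    {
        if (TryGetHeader(response, "x-ms-user-quota-remaining", out var remainingText) &&
            int.TryParse(remainingText, out var remaining) &&
            remaining <= 0 &&
            TryGetHeader(response, "x-ms-user-quota-resets-after", out var resetText) &&
            TimeSpan.TryParse(resetText, out var resetAfter))
        {
            // Delay until the quota window resets before issuing the next query.
            await Task.Delay(resetAfter).ConfigureAwait(false);
        }
    }

    private static bool TryGetHeader(HttpResponseMessage response, string name, out string value)
    {
        value = response.Headers.TryGetValues(name, out var values)
            ? values.FirstOrDefault()
            : null;
        return value != null;
    }
}
```

Inside the paging loop, this would be called as `await ThrottlingHelper.DelayIfThrottledAsync(azureOperationResponse.Response);`, assuming your SDK version surfaces the raw HTTP response through a `Response` property.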
hdinsight Apache Ambari Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-ambari-email.md
In this tutorial, you learn how to:
* An HDInsight cluster. See [Create Apache Hadoop clusters using the Azure portal](./hdinsight-hadoop-create-linux-clusters-portal.md).
-## Obtain SendGrid Username
+> [!NOTE]
+> Users can no longer set passwords for their SendGrid accounts, so you need to use an API key to send email.
+
+## Obtain a SendGrid API key
1. From the [Azure portal](https://portal.azure.com), navigate to your SendGrid resource.
-1. From the Overview page, select **Manage**, to go the SendGrid webpage for your account.
+1. From the Overview page, select **Open SaaS Account on publisher's site** to go to the SendGrid webpage for your account.
:::image type="content" source="./media/apache-ambari-email/azure-portal-sendgrid-manage.png" alt-text="SendGrid overview in azure portal":::
-1. From the left menu, navigate to your account name and then **Account Details**.
+1. From the left menu, navigate to **Settings** and then select **API Keys**.
:::image type="content" source="./media/apache-ambari-email/sendgrid-dashboard-navigation.png" alt-text="SendGrid dashboard navigation":::
-1. From the **Account Details** page, record the **Username**.
+1. Select **Create API Key** to create an API key, and copy it. You'll use it as the SMTP password later.
:::image type="content" source="./media/apache-ambari-email/sendgrid-account-details.png" alt-text="SendGrid account details":::
In this tutorial, you learn how to:
|SMTP Port|25 or 587 (for unencrypted/TLS connections).| |Email From|Provide an email address. The address doesn't need to be authentic.| |Use authentication|Select this check box.|
- |Username|Provide the SendGrid username.|
- |Password|Provide the password you used when you created the SendGrid resource in Azure.|
+ |Username|Enter "apikey" as the username if you're using SendGrid.|
+ |Password|Provide the API key you copied when you created the SendGrid API key in Azure. A quick way to verify these settings is sketched after this table.|
|Password Confirmation|Reenter password.| |Start TLS|Select this check box|
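Before saving the notification in Ambari, it can help to confirm the SendGrid SMTP credentials independently. The following is a minimal, hypothetical C# check (not part of the tutorial); it assumes the SendGrid SMTP relay at `smtp.sendgrid.net`, port 587 with STARTTLS, the literal username `apikey`, and placeholder addresses that you should replace:

```csharp
using System;
using System.Net;
using System.Net.Mail;

class SendGridSmtpCheck
{
    static void Main()
    {
        // The API key created earlier; read it from an environment variable rather than hard-coding it.
        var apiKey = Environment.GetEnvironmentVariable("SENDGRID_API_KEY");

        using var client = new SmtpClient("smtp.sendgrid.net", 587)
        {
            EnableSsl = true, // STARTTLS, matching the "Start TLS" checkbox in Ambari
            Credentials = new NetworkCredential("apikey", apiKey) // username is literally "apikey"
        };

        // Placeholder addresses; replace with your own sender and recipient.
        client.Send("ambari-alerts@example.com", "admin@example.com",
            "Ambari SMTP test", "If you received this, the SendGrid SMTP settings are valid.");
    }
}
```

If the test message is delivered, the same host, port, username, and API key should work in the **Create Alert Notification** form.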
In this tutorial, you learn how to:
1. From the **Manage Alert Notifications** window, select **Close**.
+## FAQ
+
+### No appropriate protocol error if the TLS checkbox is checked
+
+If you select **Start TLS** from the **Create Alert Notification** page, and you receive a *"No appropriate protocol"* exception in the Ambari server log:
+
+1. Go to the Apache Ambari UI.
+2. Go to **Alerts > Manage Notifications > Edit (Edit Notification)**.
+3. Select **Add Property**.
+4. Add the new property, `mail.smtp.ssl.protocol`, with a value of `TLSv1.2`.
++++ ## Next steps In this tutorial, you learned how to configure Apache Ambari email notifications using SendGrid. Use the following to learn more about Apache Ambari:
hdinsight Enterprise Security Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/enterprise-security-package.md
Title: Enterprise Security Package for Azure HDInsight
description: Learn the Enterprise Security Package components and versions in Azure HDInsight. Previously updated : 08/16/2022 Last updated : 08/12/2022 # Enterprise Security Package for Azure HDInsight
For information on pricing and SLA for the Enterprise Security Package, see [HDI
* [Cluster setup for Apache Hadoop, Spark, and more on HDInsight](hdinsight-hadoop-provision-linux-clusters.md) * [Work in Apache Hadoop on HDInsight from a Windows PC](hdinsight-hadoop-windows-tools.md)
-* [Hortonworks release notes associated with Azure HDInsight versions](./hortonworks-release-notes.md)
+* [Azure HDInsight release notes](./hdinsight-release-notes.md)
* [Apache components on HDInsight](./hdinsight-component-versioning.md)
hdinsight Apache Hadoop Emulator Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-emulator-get-started.md
- Title: Learn to use an Apache Hadoop sandbox, emulator - Azure HDInsight
-description: 'To start learning about using the Apache Hadoop ecosystem, you can set up a Hadoop sandbox from Hortonworks on an Azure virtual machine. '
-keywords: hadoop emulator,hadoop sandbox
---- Previously updated : 04/28/2022--
-# Get started with an Apache Hadoop sandbox, an emulator on a virtual machine
-
-Learn how to install the Apache Hadoop sandbox from Hortonworks on a virtual machine to learn about the Hadoop ecosystem. The sandbox provides a local development environment to learn about Hadoop, Hadoop Distributed File System (HDFS), and job submission. Once you are familiar with Hadoop, you can start using Hadoop on Azure by creating an HDInsight cluster. For more information on how to get started, see [Get started with Hadoop on HDInsight](apache-hadoop-linux-tutorial-get-started.md).
-
-## Prerequisites
-
-* [Oracle VirtualBox](https://www.virtualbox.org/). Download and install it from [here](https://www.virtualbox.org/wiki/Downloads).
-
-## Download and install the virtual machine
-
-1. Browse to the [Cloudera downloads](https://www.cloudera.com/downloads/hortonworks-sandbox/hdp.html).
-
-1. Click **VIRTUALBOX** under **Choose Installation Type** to download the latest Hortonworks Sandbox on a VM. Sign in or complete the product interest form.
-
-1. Click the button **HDP SANDBOX (LATEST)** to begin the download.
-
-For instructions on setting up the sandbox, see [Sandbox Deployment and Install Guide](https://hortonworks.com/tutorial/sandbox-deployment-and-install-guide/section/1/).
-
-To download an older HDP version sandbox, see the links under **Older Versions**.
-
-## Start the virtual machine
-
-1. Open Oracle VM VirtualBox.
-1. From the **File** menu, click **Import Appliance**, and then specify the Hortonworks Sandbox image.
-1. Select the Hortonworks Sandbox, click **Start**, and then **Normal Start**. Once the virtual machine has finished the boot process, it displays login instructions.
-
- :::image type="content" source="./media/apache-hadoop-emulator-get-started/virtualbox-normal-start.png" alt-text="virtualbox manager normal start" border="true":::
-
-1. Open a web browser and navigate to the URL displayed (usually `http://127.0.0.1:8888`).
-
-## Set Sandbox passwords
-
-1. From the **get started** step of the Hortonworks Sandbox page, select **View Advanced Options**. Use the information on this page to log in to the sandbox using SSH. Use the name and password provided.
-
- > [!NOTE]
- > If you do not have an SSH client installed, you can use the web-based SSH provided at by the virtual machine at **http://localhost:4200/**.
-
- The first time you connect using SSH, you are prompted to change the password for the root account. Enter a new password, which you use when you log in using SSH.
-
-2. Once logged in, enter the following command:
-
- ```bash
- ambari-admin-password-reset
- ```
-
- When prompted, provide a password for the Ambari admin account. This is used when you access the Ambari Web UI.
-
-## Use Hive commands
-
-1. From an SSH connection to the sandbox, use the following command to start the Hive shell:
-
- ```bash
- hive
- ```
-
-2. Once the shell has started, use the following to view the tables that are provided with the sandbox:
-
- ```hiveql
- show tables;
- ```
-
-3. Use the following to retrieve 10 rows from the `sample_07` table:
-
- ```hiveql
- select * from sample_07 limit 10;
- ```
-
-## Next steps
-
-* [Learn how to use Visual Studio with the Hortonworks Sandbox](./apache-hadoop-visual-studio-tools-get-started.md)
-
-* [Learning the ropes of the Hortonworks Sandbox](https://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/)
-
-* [Hadoop tutorial - Getting started with HDP](https://hortonworks.com/hadoop-tutorial/hello-world-an-introduction-to-hadoop-hcatalog-hive-and-pig/)
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 07/30/2022 Last updated : 08/12/2022 # Archived release notes
HDI Hive 3.1 version is upgraded to OSS Hive 3.1.2. This version has all fixes a
> [!NOTE] > **Spark** >
-> * If you are using Azure User Interface to create Spark Cluster for HDInsight, you will see from the dropdown list an additional version Spark 3.1.(HDI 5.0) along with the older versions. This version is a renamed version of Spark 3.1.(HDI 4.0). This is only an UI level change, which doesnΓÇÖt impact anything for the existing users and users who are already using the ARM template.
+> * If you are using the Azure user interface to create a Spark cluster for HDInsight, the dropdown list shows another version, Spark 3.1 (HDI 5.0), along with the older versions. This version is a renamed version of Spark 3.1 (HDI 4.0). This is only a UI-level change, which doesn't impact existing users or users who are already using the ARM template.
![Screenshot_of spark 3.1 for HDI 5.0.](media/hdinsight-release-notes/spark-3-1-for-hdi-5-0.png) > [!NOTE] > **Interactive Query** >
-> * If you are creating an Interactive Query Cluster, you will see from the dropdown list an additional version as Interactive Query 3.1 (HDI 5.0).
+> * If you are creating an Interactive Query cluster, you will see in the dropdown list another version, Interactive Query 3.1 (HDI 5.0).
> * If you are going to use Spark 3.1 along with Hive workloads that require ACID support, you need to select the Interactive Query 3.1 (HDI 5.0) version. ![Screenshot_of interactive query 3.1 for HDI 5.0.](media/hdinsight-release-notes/interactive-query-3-1-for-hdi-5-0.png)
Spark 3.1 is now Generally Available on HDInsight 4.0 release. This release inc
* Dynamic Partition Pruning, * Customers will be able to create new Spark 3.1 clusters and not Spark 3.0 (preview) clusters.
-For more details, see the [Apache Spark 3.1](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/spark-3-1-is-now-generally-available-on-hdinsight/ba-p/3253679) is now Generally Available on HDInsight - Microsoft Tech Community.
+For more information, see the [Apache Spark 3.1](https://techcommunity.microsoft.com/t5/analytics-on-azure-blog/spark-3-1-is-now-generally-available-on-hdinsight/ba-p/3253679) is now Generally Available on HDInsight - Microsoft Tech Community.
For a complete list of improvements, see the [Apache Spark 3.1 release notes.](https://spark.apache.org/releases/spark-release-3-1-2.html)
-For more details on migration, see the [migration guide.](https://spark.apache.org/docs/latest/migration-guide.html)
+For more information on migration, see the [migration guide.](https://spark.apache.org/docs/latest/migration-guide.html)
### Kafka 2.4 is now generally available
The OS versions for this release are:
- HDInsight 4.0: Ubuntu 18.04.5 LTS ### New features
-#### Azure HDInsight support for Restricted Public Connectivity is generally available on Oct 15 2021
+#### Azure HDInsight support for Restricted Public Connectivity is generally available on Oct 15, 2021
Azure HDInsight now supports restricted public connectivity in all regions. Below are some of the key highlights of this capability: - Ability to reverse resource provider to cluster communication such that it's outbound from the cluster to the resource provider -- Support for bringing your own Private Link enabled resources (e.g. storage, SQL, key vault) for HDinsight cluster to access the resources over private network only
+- Support for bringing your own Private Link enabled resources (for example, storage, SQL, Key Vault) so the HDInsight cluster can access those resources over a private network only
- No public IP addresses are resource provisioned By using this new capability, you can also skip the inbound network security group (NSG) service tag rules for HDInsight management IPs. Learn more about [restricting public connectivity](./hdinsight-restrict-public-connectivity.md) #### Azure HDInsight support for Azure Private Link is generally available on Oct 15 2021
-You can now use private endpoints to connect to your HDInsight clusters over private link. Private link can be leveraged in cross VNET scenarios where VNET peering is not available or enabled.
+You can now use private endpoints to connect to your HDInsight clusters over private link. Private link can be used in cross VNET scenarios where VNET peering isn't available or enabled.
Azure Private Link enables you to access Azure PaaS Services (for example, Azure Storage and SQL Database) and Azure hosted customer-owned/partner services over a [private endpoint](../private-link/private-endpoint-overview.md) in your virtual network.
Here are the back ported Apache JIRAs for this release:
### Price Correction for HDInsight Dv2 Virtual Machines
-A pricing error was corrected on April 25th, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25th, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
+A pricing error was corrected on April 25, 2021, for the Dv2 VM series on HDInsight. The pricing error resulted in a reduced charge on some customer's bills prior to April 25th, and with the correction, prices now match what had been advertised on the HDInsight pricing page and the HDInsight pricing calculator. The pricing error impacted customers in the following regions who used Dv2 VMs:
- Canada Central - Canada East
The following changes will happen in upcoming releases.
#### HDInsight Interactive Query only supports schedule-based Autoscale
-As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The affect on performance can outweigh the cost benefits of Autoscale.
+As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The effect on performance can outweigh the cost benefits of Autoscale.
Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
The following changes will happen in upcoming releases.
As customer scenarios grow more mature and diverse, we have identified some limitations with Interactive Query (LLAP) load-based Autoscale. These limitations are caused by the nature of LLAP query dynamics, future load prediction accuracy issues, and issues in the LLAP scheduler's task redistribution. Due to these limitations, users may see their queries run slower on LLAP clusters when Autoscale is enabled. The impact on performance can outweigh the cost benefits of Autoscale.
-Starting from July, 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
+Starting from July 2021, the Interactive Query workload in HDInsight only supports schedule-based Autoscale. You can no longer enable Autoscale on new Interactive Query clusters. Existing running clusters can continue to run with the known limitations described above.
Microsoft recommends that you move to a schedule-based Autoscale for LLAP. You can analyze your cluster's current usage pattern through the Grafana Hive dashboard. For more information, see [Automatically scale Azure HDInsight clusters](hdinsight-autoscale-clusters.md).
HDInsight clusters are currently running on Ubuntu 16.04 LTS. As referenced in [
HDInsight 3.6 will continue to run on Ubuntu 16.04. It will reach the end of standard support by 30 June 2021, and will change to Basic support starting on 1 July 2021. For more information about dates and support options, see [Azure HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions). Ubuntu 18.04 will not be supported for HDInsight 3.6. If you'd like to use Ubuntu 18.04, you'll need to migrate your clusters to HDInsight 4.0.
-You need to drop and recreate your clusters if youΓÇÖd like to move existing clusters to Ubuntu 18.04. Please plan to create or recreate your cluster after Ubuntu 18.04 support becomes available. WeΓÇÖll send another notification after the new image becomes available in all regions.
+You need to drop and recreate your clusters if you'd like to move existing clusters to Ubuntu 18.04. Plan to create or recreate your cluster after Ubuntu 18.04 support becomes available. We'll send another notification after the new image becomes available in all regions.
It's highly recommended that you test your script actions and custom applications deployed on edge nodes on an Ubuntu 18.04 virtual machine (VM) in advance. You can [create a simple Ubuntu Linux VM on 18.04-LTS](https://azure.microsoft.com/resources/templates/vm-simple-linux/), then create and use a [secure shell (SSH) key pair](../virtual-machines/linux/mac-create-ssh-keys.md#ssh-into-your-vm) on your VM to run and test your script actions and custom applications deployed on edge nodes.
You can find the current component versions for HDInsight 4.0 ad HDInsight 3.6 i
### Known issues #### Hive Warehouse Connector issue
-There is an issue for Hive Warehouse Connector in this release. The fix will be included in the next release. Existing clusters created before this release are not impacted. Avoid dropping and recreating the cluster if possible. Please open support ticket if you need further help on this.
+There's an issue with Hive Warehouse Connector in this release. The fix will be included in the next release. Existing clusters created before this release are not impacted. Avoid dropping and recreating the cluster if possible. Open a support ticket if you need further help on this.
## Release date: 01/09/2020
This release applies both for HDInsight 3.6 and 4.0. HDInsight release is made a
#### TLS 1.2 enforcement Transport Layer Security (TLS) and Secure Sockets Layer (SSL) are cryptographic protocols that provide communications security over a computer network. Learn more about [TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1.0.2C_2.0_and_3.0). HDInsight uses TLS 1.2 on public HTTP's endpoints but TLS 1.1 is still supported for backward compatibility.
-With this release, customers can opt into TLS 1.2 only for all connections through the public cluster endpoint. To support this, the new property **minSupportedTlsVersion** is introduced and can be specified during cluster creation. If the property is not set, the cluster still supports TLS 1.0, 1.1 and 1.2, which is the same as today's behavior. Customers can set the value for this property to "1.2", which means that the cluster only supports TLS 1.2 and above. For more information, see [Transport Layer Security](./transport-layer-security.md).
+With this release, customers can opt into TLS 1.2 only for all connections through the public cluster endpoint. To support this, the new property **minSupportedTlsVersion** is introduced and can be specified during cluster creation. If the property isn't set, the cluster still supports TLS 1.0, 1.1 and 1.2, which is the same as today's behavior. Customers can set the value for this property to "1.2", which means that the cluster only supports TLS 1.2 and above. For more information, see [Transport Layer Security](./transport-layer-security.md).
#### Bring your own key for disk encryption
-All managed disks in HDInsight are protected with Azure Storage Service Encryption (SSE). Data on those disks is encrypted by Microsoft-managed keys by default. Starting from this release, you can Bring Your Own Key (BYOK) for disk encryption and manage it using Azure Key Vault. BYOK encryption is a one-step configuration during cluster creation with no additional cost. Just register HDInsight as a managed identity with Azure Key Vault and add the encryption key when you create your cluster. For more information, see [Customer-managed key disk encryption](./disk-encryption.md).
+All managed disks in HDInsight are protected with Azure Storage Service Encryption (SSE). Data on those disks is encrypted by Microsoft-managed keys by default. Starting from this release, you can Bring Your Own Key (BYOK) for disk encryption and manage it using Azure Key Vault. BYOK encryption is a one-step configuration during cluster creation at no extra cost. Just register HDInsight as a managed identity with Azure Key Vault and add the encryption key when you create your cluster. For more information, see [Customer-managed key disk encryption](./disk-encryption.md).
### Deprecation No deprecations for this release. To get ready for upcoming deprecations, see [Upcoming changes](#upcoming-changes).
For more information on patches available in HDInsight 4.0, see the patch listin
| Ambari | [Ambari patch information](https://docs.hortonworks.com/HDPDocuments/Ambari-2.7.1.0/bk_ambari-release-notes/content/ambari_relnotes-2.7.1.0-patch-information.html) | | Hadoop | [Hadoop patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_hadoop.html) | | HBase | [HBase patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_hbase.html) |
-| Hive | This release provides Hive 3.1.0 with no additional Apache patches. |
-| Kafka | This release provides Kafka 1.1.1 with no additional Apache patches. |
+| Hive | This release provides Hive 3.1.0 with no further Apache patches. |
+| Kafka | This release provides Kafka 1.1.1 with no further Apache patches. |
| Oozie | [Oozie patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_oozie.html) | | Phoenix | [Phoenix patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_phoenix.html) | | Pig | [Pig patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_pig.html) | | Ranger | [Ranger patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_ranger.html) | | Spark | [Spark patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_spark.html) |
-| Sqoop | This release provides Sqoop 1.4.7 with no additional Apache patches. |
-| Tez | This release provides Tez 0.9.1 with no additional Apache patches. |
-| Zeppelin | This release provides Zeppelin 0.8.0 with no additional Apache patches. |
+| Sqoop | This release provides Sqoop 1.4.7 with no further Apache patches. |
+| Tez | This release provides Tez 0.9.1 with no further Apache patches. |
+| Zeppelin | This release provides Zeppelin 0.8.0 with no further Apache patches. |
| Zookeeper | [Zookeeper patch information](https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.0.1/release-notes/content/patch_zookeeper.html) | ### Fixed Common Vulnerabilities and Exposures
For HDInsight 4.0, do the following steps:
``` sudo bash hdi_enable_replication.sh -m <hn*> -s <srclusterdns> -d <dstclusterdns> -sp <srcclusterpasswd> -dp <dstclusterpasswd> -copydata ```
-For HDInsight 3.6, do the following:
+For HDInsight 3.6, do the following steps:
1. Sign in to active HMaster ZK. 1. Download a script to enable replication with the following command:
This release provides Hadoop Common 2.7.3 and the following Apache patches:
- [HDFS-11711](https://issues.apache.org/jira/browse/HDFS-11711): DN should not delete the block On "Too many open files" Exception. -- [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347): TestBalancerRPCDelay\#testBalancerRPCDelay fails very frequently.
+- [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347): TestBalancerRPCDelay\#testBalancerRPCDelay fails frequently.
- [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781): After Datanode down, In Namenode UI Datanode tab is throwing warning message.
This release provides HBase 1.1.2 and the following Apache patches.
- [HBASE-18083](https://issues.apache.org/jira/browse/HBASE-18083): Make large/small file clean thread number configurable in HFileCleaner. -- [HBASE-18084](https://issues.apache.org/jira/browse/HBASE-18084): Improve CleanerChore to clean from directory which consumes more disk space.
+- [HBASE-18084](https://issues.apache.org/jira/browse/HBASE-18084): Improve CleanerChore to clean from the directory that consumes more disk space.
- [HBASE-18164](https://issues.apache.org/jira/browse/HBASE-18164): Much faster locality cost function and candidate generator.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18258*](https://issues.apache.org/jira/browse/HIVE-18258): Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken. -- [*HIVE-18293*](https://issues.apache.org/jira/browse/HIVE-18293): Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore.
+- [*HIVE-18293*](https://issues.apache.org/jira/browse/HIVE-18293): Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore.
- [*HIVE-18327*](https://issues.apache.org/jira/browse/HIVE-18327): Remove the unnecessary HiveConf dependency for MiniHiveKdc.
This release provides Hive 1.2.1 and Hive 2.1.0 in addition to the following pat
- [*HIVE-18269*](https://issues.apache.org/jira/browse/HIVE-18269): LLAP: Fast llap io with slow processing pipeline can lead to OOM. -- [*HIVE-18293*](https://issues.apache.org/jira/browse/HIVE-18293): Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore.
+- [*HIVE-18293*](https://issues.apache.org/jira/browse/HIVE-18293): Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore.
- [*HIVE-18318*](https://issues.apache.org/jira/browse/HIVE-18318): LLAP record reader should check interrupt even when not blocking.
This release provides Kafka 1.0.0 and the following Apache patches.
- [KAFKA-6179](https://issues.apache.org/jira/browse/KAFKA-6179): RecordQueue.clear() does not clear MinTimestampTracker's maintained list. -- [KAFKA-6185](https://issues.apache.org/jira/browse/KAFKA-6185): Selector memory leak with high likelihood of OOM in case of down conversion.
+- [KAFKA-6185](https://issues.apache.org/jira/browse/KAFKA-6185): Selector memory leak with high likelihood of OOM if there is a down conversion.
- [KAFKA-6190](https://issues.apache.org/jira/browse/KAFKA-6190): GlobalKTable never finishes restoring when consuming transactional messages.
In HDP-2.5.x and 2.6.x, we removed the "commons-httpclient" library from Mahout
- There is a small possibility that some Mahout jobs may encounter "ClassNotFoundException" or "could not load class" errors related to "org.apache.commons.httpclient", "net.java.dev.jets3t", or related class name prefixes. If these errors happen, you may consider whether to manually install the needed jars in your classpath for the job, if the risk of security issues in the obsolete library is acceptable in your environment. -- There is an even smaller possibility that some Mahout jobs may encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there is no way to resolve this issue except revert to the HDP-2.4.2 version of Mahout, which may have security issues. Again, this should be very unusual, and is unlikely to occur in any given Mahout job suite.
+- There is an even smaller possibility that some Mahout jobs may encounter crashes in Mahout's hbase-client code calls to the hadoop-common libraries, due to binary compatibility problems. Regrettably, there is no way to resolve this issue except revert to the HDP-2.4.2 version of Mahout, which may have security issues. Again, this should be unusual, and is unlikely to occur in any given Mahout job suite.
#### Oozie
This release provides Oozie 4.2.0 with the following Apache patches.
- [OOZIE-2787](https://issues.apache.org/jira/browse/OOZIE-2787): Oozie distributes application jar twice making the spark job fail. -- [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792): Hive2 action is not parsing Spark application ID from log file properly when Hive is on Spark.
+- [OOZIE-2792](https://issues.apache.org/jira/browse/OOZIE-2792): Hive2 action isn't parsing Spark application ID from log file properly when Hive is on Spark.
- [OOZIE-2799](https://issues.apache.org/jira/browse/OOZIE-2799): Setting log location for spark sql on hive.
This release provides Ranger 0.7.0 and the following Apache patches:
#### Slider
-This release provides Slider 0.92.0 with no additional Apache patches.
+This release provides Slider 0.92.0 with no further Apache patches.
#### Spark
This release provides Spark 2.3.0 and the following Apache patches:
#### Sqoop
-This release provides Sqoop 1.4.6 with no additional Apache patches.
+This release provides Sqoop 1.4.6 with no further Apache patches.
#### Storm
This release provides Storm 1.1.1 and the following Apache patches:
- [STORM-2854](https://issues.apache.org/jira/browse/STORM-2854): Expose IEventLogger to make event log pluggable. -- [STORM-2870](https://issues.apache.org/jira/browse/STORM-2870): FileBasedEventLogger leaks non-daemon ExecutorService which prevents process to be finished.
+- [STORM-2870](https://issues.apache.org/jira/browse/STORM-2870): FileBasedEventLogger leaks a non-daemon ExecutorService, which prevents the process from finishing.
- [STORM-2960](https://issues.apache.org/jira/browse/STORM-2960): Better to stress importance of setting up proper OS account for Storm processes.
This release provides Tez 0.7.0 and the following Apache patches:
#### Zeppelin
-This release provides Zeppelin 0.7.3 with no additionalApache patches.
+This release provides Zeppelin 0.7.3 with no further Apache patches.
- [ZEPPELIN-3072](https://issues.apache.org/jira/browse/ZEPPELIN-3072): Zeppelin UI becomes slow/unresponsive if there are too many notebooks.
Fixed issues represent selected issues that were previously logged via Hortonwor
**Incorrect Results**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
||--|| | BUG-100019 | [YARN-8145](https://issues.apache.org/jira/browse/YARN-8145) | yarn rmadmin -getGroups doesn't return updated groups for user | | BUG-100058 | [PHOENIX-2645](https://issues.apache.org/jira/browse/PHOENIX-2645) | Wildcard characters do not match newline characters |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-92345 | [ATLAS-2285](https://issues.apache.org/jira/browse/ATLAS-2285) | UI: Renamed saved search with date attribute. | | BUG-92563 | [HIVE-17495](https://issues.apache.org/jira/browse/HIVE-17495), [HIVE-18528](https://issues.apache.org/jira/browse/HIVE-18528) | Aggregate stats in ObjectStore get wrong result | | BUG-92957 | [HIVE-11266](https://issues.apache.org/jira/browse/HIVE-11266) | count(\*) wrong result based on table statistics for external tables |
-| BUG-93097 | [RANGER-1944](https://issues.apache.org/jira/browse/RANGER-1944) | Action filter for Admin Audit is not working |
+| BUG-93097 | [RANGER-1944](https://issues.apache.org/jira/browse/RANGER-1944) | Action filter for Admin Audit isn't working |
| BUG-93335 | [HIVE-12315](https://issues.apache.org/jira/browse/HIVE-12315) | vectorization\_short\_regress.q has a wrong result issue for a double calculation | | BUG-93415 | [HIVE-18258](https://issues.apache.org/jira/browse/HIVE-18258), [HIVE-18310](https://issues.apache.org/jira/browse/HIVE-18310) | Vectorization: Reduce-Side GROUP BY MERGEPARTIAL with duplicate columns is broken | | BUG-93939 | [ATLAS-2294](https://issues.apache.org/jira/browse/ATLAS-2294) | Extra parameter "description" added when creating a type |
Fixed issues represent selected issues that were previously logged via Hortonwor
**Other**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
||-|--| | BUG-100267 | [HBASE-17170](https://issues.apache.org/jira/browse/HBASE-17170) | HBase is also retrying DoNotRetryIOException because of class loader differences. | | BUG-92367 | [YARN-7558](https://issues.apache.org/jira/browse/YARN-7558) | "yarn logs" command fails to get logs for running containers if UI authentication is enabled. |
Fixed issues represent selected issues that were previously logged via Hortonwor
**Performance**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
|||-| | BUG-83282 | [HBASE-13376](https://issues.apache.org/jira/browse/HBASE-13376), [HBASE-14473](https://issues.apache.org/jira/browse/HBASE-14473), [HBASE-15210](https://issues.apache.org/jira/browse/HBASE-15210), [HBASE-15515](https://issues.apache.org/jira/browse/HBASE-15515), [HBASE-16570](https://issues.apache.org/jira/browse/HBASE-16570), [HBASE-16810](https://issues.apache.org/jira/browse/HBASE-16810), [HBASE-18164](https://issues.apache.org/jira/browse/HBASE-18164) | Fast locality computation in balancer | | BUG-91300 | [HBASE-17387](https://issues.apache.org/jira/browse/HBASE-17387) | Reduce the overhead of exception report in RegionActionResult for multi() | | BUG-91804 | [TEZ-1526](https://issues.apache.org/jira/browse/TEZ-1526) | LoadingCache for TezTaskID slow for large jobs | | BUG-92760 | [ACCUMULO-4578](https://issues.apache.org/jira/browse/ACCUMULO-4578) | Cancel compaction FATE operation does not release namespace lock | | BUG-93577 | [RANGER-1938](https://issues.apache.org/jira/browse/RANGER-1938) | Solr for Audit setup doesn't use DocValues effectively |
-| BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore |
+| BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore |
| BUG-94345 | [HIVE-18429](https://issues.apache.org/jira/browse/HIVE-18429) | Compaction should handle a case when it produces no output | | BUG-94381 | [HADOOP-13227](https://issues.apache.org/jira/browse/HADOOP-13227), [HDFS-13054](https://issues.apache.org/jira/browse/HDFS-13054) | Handling RequestHedgingProxyProvider RetryAction order: FAIL &lt; RETRY &lt; FAILOVER\_AND\_RETRY. | | BUG-94432 | [HIVE-18353](https://issues.apache.org/jira/browse/HIVE-18353) | CompactorMR should call jobclient.close() to trigger cleanup |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-94928 | [HDFS-11078](https://issues.apache.org/jira/browse/HDFS-11078) | Fix NPE in LazyPersistFileScrubber | | BUG-94964 | [HIVE-18269](https://issues.apache.org/jira/browse/HIVE-18269), [HIVE-18318](https://issues.apache.org/jira/browse/HIVE-18318), [HIVE-18326](https://issues.apache.org/jira/browse/HIVE-18326) | Multiple LLAP fixes | | BUG-95669 | [HIVE-18577](https://issues.apache.org/jira/browse/HIVE-18577), [HIVE-18643](https://issues.apache.org/jira/browse/HIVE-18643) | When run update/delete query on ACID partitioned table, HS2 read all each partitions. |
-| BUG-96390 | [HDFS-10453](https://issues.apache.org/jira/browse/HDFS-10453) | ReplicationMonitor thread could stuck for long time due to the race between replication and delete of same file in a large cluster. |
-| BUG-96625 | [HIVE-16110](https://issues.apache.org/jira/browse/HIVE-16110) | Revert of "Vectorization: Support 2 Value CASE WHEN instead of fall back to VectorUDFAdaptor" |
+| BUG-96390 | [HDFS-10453](https://issues.apache.org/jira/browse/HDFS-10453) | ReplicationMonitor thread could get stuck for a long time due to the race between replication and deletion of the same file in a large cluster. |
+| BUG-96625 | [HIVE-16110](https://issues.apache.org/jira/browse/HIVE-16110) | Revert of "Vectorization: Support 2 Value CASE WHEN instead of fallback to VectorUDFAdaptor" |
| BUG-97109 | [HIVE-16757](https://issues.apache.org/jira/browse/HIVE-16757) | Use of deprecated getRows() instead of new estimateRowCount(RelMetadataQuery...) has serious performance impact | | BUG-97110 | [PHOENIX-3789](https://issues.apache.org/jira/browse/PHOENIX-3789) | Execute cross region index maintenance calls in postBatchMutateIndispensably | | BUG-98833 | [YARN-6797](https://issues.apache.org/jira/browse/YARN-6797) | TimelineWriter does not fully consume the POST response |
Fixed issues represent selected issues that were previously logged via Hortonwor
**Potential Data Loss**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
|||-|
-| BUG-95613 | [HBASE-18808](https://issues.apache.org/jira/browse/HBASE-18808) | Ineffective config check in BackupLogCleaner\#getDeletableFiles() |
+| BUG-95613 | [HBASE-18808](https://issues.apache.org/jira/browse/HBASE-18808) | Ineffective config check in BackupLogCleaner\#getDeletableFiles() |
| BUG-97051 | [HIVE-17403](https://issues.apache.org/jira/browse/HIVE-17403) | Fail concatenation for unmanaged and transactional tables | | BUG-97787 | [HIVE-18460](https://issues.apache.org/jira/browse/HIVE-18460) | Compactor doesn't pass Table properties to the Orc writer | | BUG-97788 | [HIVE-18613](https://issues.apache.org/jira/browse/HIVE-18613) | Extend JsonSerDe to support BINARY type | **Query Failure**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
||-|--| | BUG-100180 | [CALCITE-2232](https://issues.apache.org/jira/browse/CALCITE-2232) | Assertion error on AggregatePullUpConstantsRule while adjusting Aggregate indices | | BUG-100422 | [HIVE-19085](https://issues.apache.org/jira/browse/HIVE-19085) | FastHiveDecimal abs(0) sets sign to +ve |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position does not work when cbo is disabled | | BUG-93595 | [HIVE-12378](https://issues.apache.org/jira/browse/HIVE-12378), [HIVE-15883](https://issues.apache.org/jira/browse/HIVE-15883) | HBase mapped table in Hive insert fail for decimal and binary columns | | BUG-94007 | [PHOENIX-1751](https://issues.apache.org/jira/browse/PHOENIX-1751), [PHOENIX-3112](https://issues.apache.org/jira/browse/PHOENIX-3112) | Phoenix Queries returns Null values due to HBase Partial rows |
-| BUG-94144 | [HIVE-17063](https://issues.apache.org/jira/browse/HIVE-17063) | insert overwrite partition onto an external table fail when drop partition first |
+| BUG-94144 | [HIVE-17063](https://issues.apache.org/jira/browse/HIVE-17063) | insert overwrite partition onto an external table fails when the partition is dropped first |
| BUG-94280 | [HIVE-12785](https://issues.apache.org/jira/browse/HIVE-12785) | View with union type and UDF to \`cast\` the struct is broken | | BUG-94505 | [PHOENIX-4525](https://issues.apache.org/jira/browse/PHOENIX-4525) | Integer overflow in GroupBy execution | | BUG-95618 | [HIVE-18506](https://issues.apache.org/jira/browse/HIVE-18506) | LlapBaseInputFormat - negative array index |
Fixed issues represent selected issues that were previously logged via Hortonwor
**Security**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
|||--|
-| BUG-100436 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso is not working for ranger |
+| BUG-100436 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
| BUG-101038 | [SPARK-24062](https://issues.apache.org/jira/browse/SPARK-24062) | Zeppelin %Spark interpreter "Connection refused" error, "A secret key must be specified..." error in HiveThriftServer | | BUG-101359 | [ACCUMULO-4056](https://issues.apache.org/jira/browse/ACCUMULO-4056) | Update version of commons-collection to 3.2.2 when released | | BUG-54240 | [HIVE-18879](https://issues.apache.org/jira/browse/HIVE-18879) | Disallow embedded element in UDFXPathUtil needs to work if xercesImpl.jar in classpath |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-97178 | [ATLAS-2467](https://issues.apache.org/jira/browse/ATLAS-2467) | Dependency upgrade for Spring and nimbus-jose-jwt | | BUG-97180 | N/A | Upgrade Nimbus-jose-jwt | | BUG-98038 | [HIVE-18788](https://issues.apache.org/jira/browse/HIVE-18788) | Clean up inputs in JDBC PreparedStatement |
-| BUG-98353 | [HADOOP-13707](https://issues.apache.org/jira/browse/HADOOP-13707) | Revert of "If kerberos is enabled while HTTP SPNEGO is not configured, some links cannot be accessed" |
+| BUG-98353 | [HADOOP-13707](https://issues.apache.org/jira/browse/HADOOP-13707) | Revert of "If kerberos is enabled while HTTP SPNEGO isn't configured, some links cannot be accessed" |
| BUG-98372 | [HBASE-13848](https://issues.apache.org/jira/browse/HBASE-13848) | Access InfoServer SSL passwords through Credential Provider API |
-| BUG-98385 | [ATLAS-2500](https://issues.apache.org/jira/browse/ATLAS-2500) | Add additional headers to Atlas response. |
+| BUG-98385 | [ATLAS-2500](https://issues.apache.org/jira/browse/ATLAS-2500) | Add more headers to Atlas response. |
| BUG-98564 | [HADOOP-14651](https://issues.apache.org/jira/browse/HADOOP-14651) | Update okhttp version to 2.7.5 | | BUG-99440 | [RANGER-2045](https://issues.apache.org/jira/browse/RANGER-2045) | Hive table columns with no explicit allow policy are listed with 'desc table' command | | BUG-99803 | N/A | Oozie should disable HBase dynamic class loading | **Stability**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
|||| | BUG-100040 | [ATLAS-2536](https://issues.apache.org/jira/browse/ATLAS-2536) | NPE in Atlas Hive Hook | | BUG-100057 | [HIVE-19251](https://issues.apache.org/jira/browse/HIVE-19251) | ObjectStore.getNextNotification with LIMIT should use less memory |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-100073 | N/A | too many close\_wait connections from hiveserver to data node | | BUG-100319 | [HIVE-19248](https://issues.apache.org/jira/browse/HIVE-19248) | REPL LOAD doesn't throw error if file copy fails. | | BUG-100352 | N/A | CLONE - RM purging logic scans /registry znode too frequently |
-| BUG-100427 | [HIVE-19249](https://issues.apache.org/jira/browse/HIVE-19249) | Replication: WITH clause is not passing the configuration to Task correctly in all cases |
+| BUG-100427 | [HIVE-19249](https://issues.apache.org/jira/browse/HIVE-19249) | Replication: WITH clause isn't passing the configuration to Task correctly in all cases |
| BUG-100430 | [HIVE-14483](https://issues.apache.org/jira/browse/HIVE-14483) | java.lang.ArrayIndexOutOfBoundsException org.apache.orc.impl.TreeReaderFactory\$BytesColumnVectorUtil.commonReadByteArrays | | BUG-100432 | [HIVE-19219](https://issues.apache.org/jira/browse/HIVE-19219) | Incremental REPL DUMP should throw error if requested events are cleaned-up. | | BUG-100448 | [SPARK-23637](https://issues.apache.org/jira/browse/SPARK-23637), [SPARK-23802](https://issues.apache.org/jira/browse/SPARK-23802), [SPARK-23809](https://issues.apache.org/jira/browse/SPARK-23809), [SPARK-23816](https://issues.apache.org/jira/browse/SPARK-23816), [SPARK-23822](https://issues.apache.org/jira/browse/SPARK-23822), [SPARK-23823](https://issues.apache.org/jira/browse/SPARK-23823), [SPARK-23838](https://issues.apache.org/jira/browse/SPARK-23838), [SPARK-23881](https://issues.apache.org/jira/browse/SPARK-23881) | Update Spark2 to 2.3.0+ (4/11) |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-92813 | [FLUME-2973](https://issues.apache.org/jira/browse/FLUME-2973) | Deadlock in hdfs sink | | BUG-92957 | [HIVE-11266](https://issues.apache.org/jira/browse/HIVE-11266) | count(\*) wrong result based on table statistics for external tables | | BUG-93018 | [ATLAS-2310](https://issues.apache.org/jira/browse/ATLAS-2310) | In HA, the passive node redirects the request with wrong URL encoding |
-| BUG-93116 | [RANGER-1957](https://issues.apache.org/jira/browse/RANGER-1957) | Ranger Usersync is not syncing users or groups periodically when incremental sync is enabled. |
+| BUG-93116 | [RANGER-1957](https://issues.apache.org/jira/browse/RANGER-1957) | Ranger Usersync isn't syncing users or groups periodically when incremental sync is enabled. |
| BUG-93361 | [HIVE-12360](https://issues.apache.org/jira/browse/HIVE-12360) | Bad seek in uncompressed ORC with predicate pushdown | | BUG-93426 | [CALCITE-2086](https://issues.apache.org/jira/browse/CALCITE-2086) | HTTP/413 in certain circumstances due to large Authorization headers | | BUG-93429 | [PHOENIX-3240](https://issues.apache.org/jira/browse/PHOENIX-3240) | ClassCastException from Pig loader | | BUG-93485 | N/A | Cannot get table mytestorg.apache.hadoop.hive.ql.metadata.InvalidTableException: Table not found when running analyze table on columns in LLAP | | BUG-93512 | [PHOENIX-4466](https://issues.apache.org/jira/browse/PHOENIX-4466) | java.lang.RuntimeException: response code 500 - Executing a spark job to connect to phoenix query server and load data | | BUG-93550 | N/A | Zeppelin %spark.r does not work with spark1 due to scala version mismatch |
-| BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that is not owned by identity running HiveMetaStore |
+| BUG-93910 | [HIVE-18293](https://issues.apache.org/jira/browse/HIVE-18293) | Hive is failing to compact tables contained within a folder that isn't owned by identity running HiveMetaStore |
| BUG-93926 | [ZEPPELIN-3114](https://issues.apache.org/jira/browse/ZEPPELIN-3114) | Notebooks and interpreters aren't getting saved in zeppelin after &gt;1d stress testing | | BUG-93932 | [ATLAS-2320](https://issues.apache.org/jira/browse/ATLAS-2320) | classification "\*" with query throws 500 Internal server exception. | | BUG-93948 | [YARN-7697](https://issues.apache.org/jira/browse/YARN-7697) | NM goes down with OOM due to leak in log-aggregation (part\#1) |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-99239 | [ATLAS-2462](https://issues.apache.org/jira/browse/ATLAS-2462) | Sqoop import for all tables throws NPE for no table provided in command | | BUG-99301 | [ATLAS-2530](https://issues.apache.org/jira/browse/ATLAS-2530) | Newline at the beginning of the name attribute of a hive\_process and hive\_column\_lineage | | BUG-99453 | [HIVE-19065](https://issues.apache.org/jira/browse/HIVE-19065) | Metastore client compatibility check should include syncMetaStoreClient |
-| BUG-99521 | N/A | ServerCache for HashJoin is not re-created when iterators are re-instantiated |
+| BUG-99521 | N/A | ServerCache for HashJoin isn't re-created when iterators are reinstantiated |
| BUG-99590 | [PHOENIX-3518](https://issues.apache.org/jira/browse/PHOENIX-3518) | Memory Leak in RenewLeaseTask | | BUG-99618 | [SPARK-23599](https://issues.apache.org/jira/browse/SPARK-23599), [SPARK-23806](https://issues.apache.org/jira/browse/SPARK-23806) | Update Spark2 to 2.3.0+ (3/28) | | BUG-99672 | [ATLAS-2524](https://issues.apache.org/jira/browse/ATLAS-2524) | Hive hook with V2 notifications - incorrect handling of 'alter view as' operation |
Fixed issues represent selected issues that were previously logged via Hortonwor
**Supportability**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
|||--| | BUG-87343 | [HIVE-18031](https://issues.apache.org/jira/browse/HIVE-18031) | Support replication for Alter Database operation. |
-| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso is not working for ranger |
-| BUG-93116 | [RANGER-1957](https://issues.apache.org/jira/browse/RANGER-1957) | Ranger Usersync is not syncing users or groups periodically when incremental sync is enabled. |
+| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
+| BUG-93116 | [RANGER-1957](https://issues.apache.org/jira/browse/RANGER-1957) | Ranger Usersync isn't syncing users or groups periodically when incremental sync is enabled. |
| BUG-93577 | [RANGER-1938](https://issues.apache.org/jira/browse/RANGER-1938) | Solr for Audit setup doesn't use DocValues effectively | | BUG-96082 | [RANGER-1982](https://issues.apache.org/jira/browse/RANGER-1982) | Error Improvement for Analytics Metric of Ranger Admin and Ranger Kms | | BUG-96479 | [HDFS-12781](https://issues.apache.org/jira/browse/HDFS-12781) | After Datanode down, In Namenode UI Datanode tab is throwing warning message. |
Fixed issues represent selected issues that were previously logged via Hortonwor
**Upgrade**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
||--|--| | BUG-100134 | [SPARK-22919](https://issues.apache.org/jira/browse/SPARK-22919) | Revert of "Bump Apache httpclient versions" | | BUG-95823 | N/A | Knox: Upgrade Beanutils |
Fixed issues represent selected issues that were previously logged via Hortonwor
**Usability**
-| **Hortonworks Bug ID** | **Apache JIRA** | **Summary** |
+| **Bug ID** | **Apache JIRA** | **Summary** |
||--|--| | BUG-100045 | [HIVE-19056](https://issues.apache.org/jira/browse/HIVE-19056) | IllegalArgumentException in FixAcidKeyIndex when ORC file has 0 rows | | BUG-100139 | [KNOX-1243](https://issues.apache.org/jira/browse/KNOX-1243) | Normalize the required DNs that are Configured in KnoxToken Service |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-90570 | [HDFS-11384](https://issues.apache.org/jira/browse/HDFS-11384), [HDFS-12347](https://issues.apache.org/jira/browse/HDFS-12347) | Add option for balancer to disperse getBlocks calls to avoid NameNode's rpc.CallQueueLength spike | | BUG-90584 | [HBASE-19052](https://issues.apache.org/jira/browse/HBASE-19052) | FixedFileTrailer should recognize CellComparatorImpl class in branch-1.x | | BUG-90979 | [KNOX-1224](https://issues.apache.org/jira/browse/KNOX-1224) | Knox Proxy HADispatcher to support Atlas in HA. |
-| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso is not working for ranger |
+| BUG-91293 | [RANGER-2060](https://issues.apache.org/jira/browse/RANGER-2060) | Knox proxy with knox-sso isn't working for ranger |
| BUG-92236 | [ATLAS-2281](https://issues.apache.org/jira/browse/ATLAS-2281) | Saving Tag/Type attribute filter queries with null/not null filters. | | BUG-92238 | [ATLAS-2282](https://issues.apache.org/jira/browse/ATLAS-2282) | Saved favorite search appears only on refresh after creation when there are 25+ favorite searches. | | BUG-92333 | [ATLAS-2286](https://issues.apache.org/jira/browse/ATLAS-2286) | Pre-built type 'kafka\_topic' should not declare 'topic' attribute as unique | | BUG-92678 | [ATLAS-2276](https://issues.apache.org/jira/browse/ATLAS-2276) | Path value for hdfs\_path type entity is set to lower case from hive-bridge. |
-| BUG-93097 | [RANGER-1944](https://issues.apache.org/jira/browse/RANGER-1944) | Action filter for Admin Audit is not working |
+| BUG-93097 | [RANGER-1944](https://issues.apache.org/jira/browse/RANGER-1944) | Action filter for Admin Audit isn't working |
| BUG-93135 | [HIVE-15874](https://issues.apache.org/jira/browse/HIVE-15874), [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Hive query returning wrong results when set hive.groupby.orderby.position.alias to true | | BUG-93136 | [HIVE-18189](https://issues.apache.org/jira/browse/HIVE-18189) | Order by position doesn't work when cbo is disabled | | BUG-93387 | [HIVE-17600](https://issues.apache.org/jira/browse/HIVE-17600) | Make OrcFile's "enforceBufferSize" user-settable. |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-93932 | [ATLAS-2320](https://issues.apache.org/jira/browse/ATLAS-2320) | classification "\*" with query throws 500 Internal server exception. | | BUG-93933 | [ATLAS-2286](https://issues.apache.org/jira/browse/ATLAS-2286) | Pre-built type 'kafka\_topic' should not declare 'topic' attribute as unique | | BUG-93938 | [ATLAS-2283](https://issues.apache.org/jira/browse/ATLAS-2283), [ATLAS-2295](https://issues.apache.org/jira/browse/ATLAS-2295) | UI updates for classifications |
-| BUG-93941 | [ATLAS-2296](https://issues.apache.org/jira/browse/ATLAS-2296), [ATLAS-2307](https://issues.apache.org/jira/browse/ATLAS-2307) | Basic search enhancement to optionally exclude sub-type entities and sub-classification-types |
+| BUG-93941 | [ATLAS-2296](https://issues.apache.org/jira/browse/ATLAS-2296), [ATLAS-2307](https://issues.apache.org/jira/browse/ATLAS-2307) | Basic search enhancement to optionally exclude subtype entities and sub-classification-types |
| BUG-93944 | [ATLAS-2318](https://issues.apache.org/jira/browse/ATLAS-2318) | UI: Clicking on child tag twice, parent tag is selected |
-| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag which at 25+ position in the tag list in both Flat and Tree structure needs a refresh to remove the tag from the list. |
+| BUG-93946 | [ATLAS-2319](https://issues.apache.org/jira/browse/ATLAS-2319) | UI: Deleting a tag that is at 25+ position in the tag list in both Flat and Tree structures needs a refresh to remove the tag from the list. |
| BUG-93977 | [HIVE-16232](https://issues.apache.org/jira/browse/HIVE-16232) | Support stats computation for column in QuotedIdentifier | | BUG-94030 | [ATLAS-2332](https://issues.apache.org/jira/browse/ATLAS-2332) | Creation of type with attributes having nested collection datatype fails | | BUG-94099 | [ATLAS-2352](https://issues.apache.org/jira/browse/ATLAS-2352) | Atlas server should provide configuration to specify validity for Kerberos DelegationToken |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-97899 | [HIVE-18808](https://issues.apache.org/jira/browse/HIVE-18808) | Make compaction more robust when stats update fails | | BUG-98038 | [HIVE-18788](https://issues.apache.org/jira/browse/HIVE-18788) | Clean up inputs in JDBC PreparedStatement | | BUG-98383 | [HIVE-18907](https://issues.apache.org/jira/browse/HIVE-18907) | Create utility to fix acid key index issue from HIVE-18817 |
-| BUG-98388 | [RANGER-1828](https://issues.apache.org/jira/browse/RANGER-1828) | Good coding practice-add additional headers in ranger |
+| BUG-98388 | [RANGER-1828](https://issues.apache.org/jira/browse/RANGER-1828) | Good coding practice: add more headers in Ranger |
| BUG-98392 | [RANGER-2007](https://issues.apache.org/jira/browse/RANGER-2007) | ranger-tagsync's Kerberos ticket fails to renew | | BUG-98533 | [HBASE-19934](https://issues.apache.org/jira/browse/HBASE-19934), [HBASE-20008](https://issues.apache.org/jira/browse/HBASE-20008) | HBase snapshot restore is failing due to Null pointer exception | | BUG-98552 | [HBASE-18083](https://issues.apache.org/jira/browse/HBASE-18083), [HBASE-18084](https://issues.apache.org/jira/browse/HBASE-18084) | Make large/small file clean thread number configurable in HFileCleaner |
Fixed issues represent selected issues that were previously logged via Hortonwor
| BUG-99650 | [KNOX-1223](https://issues.apache.org/jira/browse/KNOX-1223) | Zeppelin's Knox proxy doesn't redirect /api/ticket as expected | | BUG-99804 | [OOZIE-2858](https://issues.apache.org/jira/browse/OOZIE-2858) | HiveMain, ShellMain and SparkMain should not overwrite properties and config files locally | | BUG-99805 | [OOZIE-2885](https://issues.apache.org/jira/browse/OOZIE-2885) | Running Spark actions should not need Hive on the classpath |
-| BUG-99806 | [OOZIE-2845](https://issues.apache.org/jira/browse/OOZIE-2845) | Replace reflection-based code which sets variable in HiveConf |
+| BUG-99806 | [OOZIE-2845](https://issues.apache.org/jira/browse/OOZIE-2845) | Replace reflection-based code that sets a variable in HiveConf |
| BUG-99807 | [OOZIE-2844](https://issues.apache.org/jira/browse/OOZIE-2844) | Increase stability of Oozie actions when log4j.properties is missing or not readable | | RMP-9995 | [AMBARI-22222](https://issues.apache.org/jira/browse/AMBARI-22222) | Switch druid to use /var/druid directory instead of /apps/druid on local disk |
Fixed issues represent selected issues that were previously logged via Hortonwor
|**Spark 2.3** |**N/A** |**Changes as documented in the Apache Spark release notes** |- There's a "Deprecation" document and a "Change of behavior" guide, https://spark.apache.org/releases/spark-release-2-3-0.html#deprecations<br /><br />- For SQL part, there's another detailed "Migration" guide (from 2.2 to 2.3), https://spark.apache.org/docs/latest/sql-programming-guide.html#upgrading-from-spark-sql-22-to-23| |Spark |[**HIVE-12505**](https://issues.apache.org/jira/browse/HIVE-12505) |Spark job completes successfully but there is an HDFS disk quota full error |**Scenario:** Running **insert overwrite** when a quota is set on the Trash folder of the user who runs the command.<br /><br />**Previous Behavior:** The job succeeds even though it fails to move the data to the Trash. The result can wrongly contain some of the data previously present in the table.<br /><br />**New Behavior:** When the move to the Trash folder fails, the files are permanently deleted.| |**Kafka 1.0**|**N/A**|**Changes as documented in the Apache Spark release notes** |https://kafka.apache.org/10/documentation.html#upgrade_100_notable|
-|**Hive/ Ranger** | |Additional ranger hive policies required for INSERT OVERWRITE |**Scenario:** Additional ranger hive policies required for **INSERT OVERWRITE**<br /><br />**Previous behavior:** Hive **INSERT OVERWRITE** queries succeed as usual.<br /><br />**New behavior:** Hive **INSERT OVERWRITE** queries are unexpectedly failing after upgrading to HDP-2.6.x with the error:<br /><br />Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user jdoe does not have WRITE privilege on /tmp/\*(state=42000,code=40000)<br /><br />As of HDP-2.6.0, Hive **INSERT OVERWRITE** queries require a Ranger URI policy to allow write operations, even if the user has write privilege granted through HDFS policy.<br /><br />**Workaround/Expected Customer Action:**<br /><br />1. Create a new policy under the Hive repository.<br />2. In the dropdown where you see Database, select URI.<br />3. Update the path (Example: /tmp/*)<br />4. Add the users and group and save.<br />5. Retry the insert query.|
+|**Hive/Ranger** | |More Ranger Hive policies required for INSERT OVERWRITE |**Scenario:** More Ranger Hive policies are required for **INSERT OVERWRITE**<br /><br />**Previous behavior:** Hive **INSERT OVERWRITE** queries succeed as usual.<br /><br />**New behavior:** Hive **INSERT OVERWRITE** queries are unexpectedly failing after upgrading to HDP-2.6.x with the error:<br /><br />Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user jdoe does not have WRITE privilege on /tmp/\*(state=42000,code=40000)<br /><br />As of HDP-2.6.0, Hive **INSERT OVERWRITE** queries require a Ranger URI policy to allow write operations, even if the user has write privilege granted through HDFS policy.<br /><br />**Workaround/Expected Customer Action:**<br /><br />1. Create a new policy under the Hive repository.<br />2. In the dropdown where you see Database, select URI.<br />3. Update the path (Example: /tmp/*)<br />4. Add the users and group and save.<br />5. Retry the insert query.|
|**HDFS**|**N/A** |HDFS should support for multiple KMS Uris |**Previous Behavior:** dfs.encryption.key.provider.uri property was used to configure the KMS provider path.<br /><br />**New Behavior:** dfs.encryption.key.provider.uri is now deprecated in favor of hadoop.security.key.provider.path to configure the KMS provider path.| |**Zeppelin**|[**ZEPPELIN-3271**](https://issues.apache.org/jira/browse/ZEPPELIN-3271)|Option for disabling scheduler |**Component Affected:** Zeppelin-Server<br /><br />**Previous Behavior:** In previous releases of Zeppelin, there was no option for disabling scheduler.<br /><br />**New Behavior:** By default, users will no longer see scheduler, as it is disabled by default.<br /><br />**Workaround/Expected Customer Action:** If you want to enable scheduler, you will need to add azeppelin.notebook.cron.enable with value of true under custom zeppelin site in Zeppelin settings from Ambari.|
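The Ranger URI policy workaround in the Hive/Ranger row above can also be scripted against the Ranger Admin REST API. The sketch below is illustrative only: the admin URL, credentials, and the `ranger-uri-policy.json` file are placeholders, and the policy JSON must follow your cluster's Hive service definition (a URL resource of `/tmp/*` with write access for the affected users), so verify it against your Ranger version before use.

```bash
# Illustrative sketch (placeholders throughout): create a Ranger policy from a JSON file
# that grants WRITE on the URI /tmp/* for the users who run INSERT OVERWRITE.
curl -u <ranger-admin-user>:<password> \
  -H "Content-Type: application/json" \
  -X POST "<ranger-admin-url>/service/public/v2/api/policy" \
  -d @ranger-uri-policy.json
```

Once the policy is active, retry the failing insert query as described in step 5 of the workaround.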
Fixed issues represent selected issues that were previously logged via Hortonwor
1. Home directories for users are not getting created on Head Node 1. As a workaround, create the directories manually and change ownership to the respective user's UPN.
- 2. Permissions on /hdp directory is currently not set to 751. This needs to be set to
+ 2. Permissions on the /hdp directory are currently not set to 751. They need to be set to:
```bash
chmod 751 /hdp
chmod -R 755 /hdp/apps
```
Fixed issues represent selected issues that were previously logged via Hortonwor
### Upgrading
-All of these features are available in HDInsight 3.6. To get the latest version of Spark, Kafka and R Server (Machine Learning Services), please choose the Spark, Kafka, ML Services version when you [create a HDInsight 3.6 cluster](./hdinsight-hadoop-provision-linux-clusters.md). To get support for ADLS, you can choose the ADLS storage type as an option. Existing clusters won't be upgraded to these versions automatically.
+All of these features are available in HDInsight 3.6. To get the latest version of Spark, Kafka, and R Server (Machine Learning Services), choose the Spark, Kafka, or ML Services version when you [create an HDInsight 3.6 cluster](./hdinsight-hadoop-provision-linux-clusters.md). To get support for ADLS, you can choose the ADLS storage type as an option. Existing clusters won't be upgraded to these versions automatically.
-All new clusters created after June 2018 will automatically get the 1000+ bug fixes across all the open-source projects. Please follow [this](./hdinsight-upgrade-cluster.md) guide for best practices around upgrading to a newer HDInsight version.
+All new clusters created after June 2018 automatically get the 1000+ bug fixes across all the open-source projects. Follow [this](./hdinsight-upgrade-cluster.md) guide for best practices around upgrading to a newer HDInsight version.
hdinsight Interactive Query Troubleshoot Inaccessible Hive View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-inaccessible-hive-view.md
This article describes troubleshooting steps and possible resolutions for issues
The Hive View is inaccessible, and the logs in `/var/log/hive` show an error similar to the following: ```
-ERROR [Curator-Framework-0]: curator.ConnectionState (ConnectionState.java:checkTimeouts(200)) - Connection timed out for connection string (<zookeepername1>.cloud.wbmi.com:2181,<zookeepername2>.cloud.wbmi.com:2181,<zookeepername3>.cloud.wbmi.com:2181) and timeout (15000) / elapsed (21852)
+ERROR [Curator-Framework-0]: curator.ConnectionState (ConnectionState.java:checkTimeouts(200)) - Connection timed out for connection string (<zookeepername1>.contoso.com:2181,<zookeepername2>.contoso.com:2181,<zookeepername3>.contoso.com:2181) and timeout (15000) / elapsed (21852)
``` ## Cause
healthcare-apis Configure Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/configure-export-data.md
In this step, browse to your FHIR service in the Azure portal and select the **I
## Give permission in the storage account for FHIR service access
-1. Go to your ADLS Gen2 account in the Azure portal.
+1. Go to your [ADLS Gen2](../../storage/blobs/data-lake-storage-introduction.md) account in the Azure portal. If you don't already have an ADLS Gen2 account deployed, follow [these instructions](../../storage/common/storage-account-create.md) for creating an Azure storage account and upgrading to ADLS Gen2. Make sure to enable the hierarchical namespace option in the **Advanced** tab to create an ADLS Gen2 account.
-2. Select **Access control (IAM)**.
+2. In your ADLS Gen2 account, select **Access control (IAM)**.
3. Select **Add > Add role assignment**. If the **Add role assignment** option is grayed out, ask your Azure administrator for help with this step.
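If you'd rather script this setup, the following Azure CLI sketch creates a storage account with the hierarchical namespace enabled and assigns a blob data role to the FHIR service's managed identity. The account name, resource group, and principal ID are placeholders, and the **Storage Blob Data Contributor** role shown is an assumption; confirm the role the export feature expects before assigning it.

```azurecli
# Sketch: create an ADLS Gen2-capable storage account (hierarchical namespace enabled)
az storage account create \
  --name <storageaccountname> \
  --resource-group <resource-group> \
  --location <location> \
  --sku Standard_LRS \
  --kind StorageV2 \
  --enable-hierarchical-namespace true

# Sketch: grant the FHIR service's system-assigned managed identity access to the account
az role assignment create \
  --assignee <fhir-service-principal-id> \
  --role "Storage Blob Data Contributor" \
  --scope $(az storage account show --name <storageaccountname> --resource-group <resource-group> --query id -o tsv)
```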
healthcare-apis Iot Metrics Diagnostics Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/iot-metrics-diagnostics-export.md
Previously updated : 07/22/2022 Last updated : 08/17/2022 # How to configure diagnostic settings for exporting the MedTech service metrics
-In this article, you'll learn how to configure the diagnostic setting for the MedTech service to export metrics to different destinations (for example: to Azure storage or an event hub) for audit, analysis, or backup.
+In this article, you'll learn how to configure diagnostic settings for the MedTech service to export metrics to different destinations (for example: to [Azure storage](/azure/storage/) or an [Azure event hub](/azure/event-hubs/)) for audit, analysis, or backup.
-## Create diagnostic setting for the MedTech service
-1. To enable metrics export for the MedTech service, select **MedTech service** in your workspace.
+## Create a diagnostic setting for the MedTech service
+1. To enable metrics export for your MedTech service, select **MedTech service** in your workspace under **Services**.
- :::image type="content" source="media/iot-metrics-export/iot-connector-logging-workspace.png" alt-text="Screenshot of select the MedTech service within workspace." lightbox="media/iot-metrics-export/iot-connector-logging-workspace.png":::
+ :::image type="content" source="media/iot-metrics-export/iot-select-medtech-service-in-workspace.png" alt-text="Screenshot of select the MedTech service within workspace." lightbox="media/iot-metrics-export/iot-select-medtech-service-in-workspace.png":::
-2. Select the MedTech service that you want to configure metrics export for.
+2. Select the MedTech service that you want to configure for metrics export. For this example, we'll be using a MedTech service named **mt-azuredocsdemo**. You'll select the MedTech service that you created and named within your Azure Health Data Services workspace.
- :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-connector.png" alt-text="Screenshot of select the MedTech service for exporting metrics" lightbox="media/iot-metrics-export/iot-connector-logging-select-connector.png":::
+ :::image type="content" source="media/iot-metrics-export/iot-select-medtech-service.png" alt-text="Screenshot of select the MedTech service for exporting metrics." lightbox="media/iot-metrics-export/iot-select-medtech-service.png":::
-3. Select the **Diagnostic settings** button and then select the **+ Add diagnostic setting** button.
+3. Select the **Diagnostic settings** option under **Monitoring**.
- :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-diagnostic-settings.png" alt-text="Screenshot of select the Diagnostic settings and select the + Add diagnostic setting buttons." lightbox="media/iot-metrics-export/iot-connector-logging-select-diagnostic-settings.png":::
+ :::image type="content" source="media/iot-metrics-export/iot-select-diagnostic-settings.png" alt-text="Screenshot of select the Diagnostic settings." lightbox="media/iot-metrics-export/iot-select-diagnostic-settings.png":::
-4. After the **+ Add diagnostic setting** page opens, enter a name in the **Diagnostic setting name** dialog box.
+4. Select the **+ Add diagnostic setting** option.
- :::image type="content" source="media/iot-metrics-export/iot-connector-logging-select-diagnostic-configuration.png" alt-text="Screenshot diagnostic setting and required fields." lightbox="media/iot-metrics-export/iot-connector-logging-select-diagnostic-configuration.png":::
+ :::image type="content" source="media/iot-metrics-export/iot-add-diagnostic-setting.png" alt-text="Screenshot of select the + Add diagnostic setting." lightbox="media/iot-metrics-export/iot-add-diagnostic-setting.png":::
-5. Under **Destination details**, select the destination you want to use to export your MedTech service metrics to. In the above example, we've selected an Azure storage account.
+5. The **+ Add diagnostic setting** page will open, requiring configuration inputs from you.
- Metrics can be exported to the following destinations:
+ :::image type="content" source="media/iot-metrics-export/iot-select-diagnostic-setting-options.png" alt-text="Screenshot of diagnostic setting and required fields." lightbox="media/iot-metrics-export/iot-select-diagnostic-setting-options.png":::
- |Destination|Description|
- |--|--|
- |Log Analytics workspace|Metrics are converted to log form. This option may not be available for all resource types. Sending them to the Azure Monitor Logs store (which is searchable via Log Analytics) helps you to integrate them into queries, alerts, and visualizations with existing log data.|
- |Azure storage account|Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive and logs can be kept there indefinitely.|
- |Event Hubs|Sending logs and metrics to Event Hubs allows you to stream data to external systems such as third-party SIEMs and other Log Analytics solutions.|
- |Azure Monitor partner integrations|Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners.|
+ 1. Enter a display name in the **Diagnostic setting name** box. For this example, we'll name it **MedTech_service_All_Metrics**. You'll enter a display name of your own choosing.
- > [!Important]
- > Each **Destination details** selection requires that certain resources (for example, an existing Azure storage account) be created and available before the selection can be successfully configured. Choose each selection to get a list of the required resources.
+ 2. Under **Metrics**, select the **AllMetrics** option.
-6. Select **AllMetrics**.
+ > [!Note]
+ >
+ > The **AllMetrics** option is the only option available and will export all currently supported MedTech service metrics.
+ >
+ > To view a complete list of MedTech service metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors).
- > [!Note]
- > To view a complete list of MedTech service metrics associated with **AllMetrics**, see [Supported metrics with Azure Monitor](../../azure-monitor/essentials/metrics-supported.md#microsofthealthcareapisworkspacesiotconnectors).
+ 3. Under **Destination details**, select the destination you want to use for your exported MedTech service metrics. In this example, we've selected an Azure storage account. You'll select a destination of your own choosing.
+
+ > [!Important]
+ >
+ > Each **Destination details** selection requires that certain resources (for example, an existing Azure storage account) be created and available before the selection can be successfully configured. Choose each selection to see which resources are required.
+
+ Metrics can be exported to the following destinations:
+
+ |Destination|Description|
+ |--|--|
+ |Log Analytics workspace|Metrics are converted to log form. Sending the metrics to the Azure Monitor Logs store (which is searchable via Log Analytics) enables you to integrate them into queries, alerts, and visualizations with existing log data.|
+ |Azure storage account|Archiving logs and metrics to an Azure storage account is useful for audit, static analysis, or backup. Compared to Azure Monitor Logs and a Log Analytics workspace, Azure storage is less expensive, and logs can be kept there indefinitely.|
+ |Azure event hub|Sending logs and metrics to an event hub allows you to stream data to external systems such as third-party security information and event management (SIEM) tools and other Log Analytics solutions.|
+ |Azure Monitor partner integrations|Specialized integrations between Azure Monitor and other non-Microsoft monitoring platforms. Useful when you're already using one of the partners.|
-7. Select **Save**.
+ 4. Select the **Save** option to save your diagnostic setting selections.
+
+6. Once you've selected the **Save** option, the page will display a message that the diagnostic setting for your MedTech service has been saved successfully.
+
+ :::image type="content" source="media/iot-metrics-export/iot-successful-save-diagnostic-setting.png" alt-text="Screenshot of a successful diagnostic setting save." lightbox="media/iot-metrics-export/iot-successful-save-diagnostic-setting.png":::
> [!Note]
- > It might take up to 15 minutes for the first MedTech service metrics to display in the destination of your choice.
-
-For more information about how to work with diagnostics logs, see the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md).
+ >
+ > It might take up to 15 minutes for the first MedTech service metrics to display in the destination of your choice.
-## Conclusion
-Having access to the MedTech service metrics is essential for monitoring and troubleshooting. The MedTech service allows you to do these actions through the export of metrics.
+7. To view your saved diagnostic setting, select **Diagnostic settings**.
+
+ :::image type="content" source="media/iot-metrics-export/iot-navigate-to-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings option to view the saved diagnostic setting." lightbox="media/iot-metrics-export/iot-navigate-to-diagnostic-settings.png":::
+
+8. The **Diagnostic settings** page will open, displaying your newly created diagnostic setting for your MedTech service. You'll have the ability to:
+
+ 1. **Edit setting**: Edit or delete your saved MedTech service diagnostic setting.
+ 2. **+ Add diagnostic setting**: Create more diagnostic settings for your MedTech service (for example: you may also want to send your MedTech service metrics to another destination like a Log Analytics workspace).
+
+ :::image type="content" source="media/iot-metrics-export/iot-view-diagnostic-settings.png" alt-text="Screenshot of Diagnostic settings options." lightbox="media/iot-metrics-export/iot-view-diagnostic-settings.png":::
+
+ > [!TIP]
+ >
+ > For more information about how to work with diagnostic logs, see the [Azure Resource Log documentation](../../azure-monitor/essentials/platform-logs-overview.md).
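The same diagnostic setting can also be created from the command line. The sketch below mirrors the portal steps above under stated assumptions: the MedTech service and storage account resource IDs are placeholders, and only the **AllMetrics** category is enabled.

```azurecli
# Sketch: export all MedTech service metrics to a storage account,
# using the same display name as the portal walkthrough above.
az monitor diagnostic-settings create \
  --name MedTech_service_All_Metrics \
  --resource <medtech-service-resource-id> \
  --storage-account <storage-account-resource-id> \
  --metrics '[{"category": "AllMetrics", "enabled": true}]'
```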
## Next steps
To view the frequently asked questions (FAQs) about the MedTech service, see
>[!div class="nextstepaction"] >[MedTech service FAQs](iot-connector-faqs.md)-
-(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
The DPS device SDKs provide implementations of the [Register](/rest/api/iot-dps/
| C|[apt-get, MBED, Arduino IDE or iOS](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#packages-and-libraries)|[GitHub](https://github.com/Azure/azure-iot-sdk-c/blob/master/provisioning\_client)|[Samples](https://github.com/Azure/azure-iot-sdk-c/tree/main/provisioning_client/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-ansi-c&tabs=windows)|[Reference](/azure/iot-hub/iot-c-sdk-ref/) | | Java|[Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot.provisioning/provisioning-device-client)|[GitHub](https://github.com/Azure/azure-iot-sdk-jav?pivots=programming-language-java&tabs=windows)|[Reference](/java/api/com.microsoft.azure.sdk.iot.provisioning.device) | | Node.js|[npm](https://www.npmjs.com/package/azure-iot-provisioning-device) |[GitHub](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning)|[Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/provisioning/device/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-nodejs&tabs=windows)|[Reference](/javascript/api/azure-iot-provisioning-device) |
-| Python|[pip](https://pypi.org/project/azure-iot-device/) |[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-python&tabs=windows)|[Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient) |
+| Python|[pip](https://pypi.org/project/azure-iot-device/) |[GitHub](https://github.com/Azure/azure-iot-sdk-python)|[Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-python&tabs=windows)|[Reference](/python/api/azure-iot-device/azure.iot.device.provisioningdeviceclient) |
> [!WARNING] > The **C SDK** listed above is **not** suitable for embedded applications due to its memory management and threading model. For embedded devices, refer to the [Embedded device SDKs](#embedded-device-sdks).
iot-dps Quick Create Simulated Device X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-create-simulated-device-x509.md
In this section, you'll use your Windows command prompt.
:::image type="content" source="./media/quick-create-simulated-device-x509/copy-id-scope-and-global-device-endpoint.png" alt-text="Screenshot of the I D scope and global device endpoint on Azure portal.":::
-1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
+1. In your Windows command prompt, go to the directory of the [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py) sample. The path shown is relative to the location where you cloned the SDK.
```cmd cd ./azure-iot-sdk-python/samples/async-hub-scenarios
In this section, you'll use your Windows command prompt.
set PASS_PHRASE=1234 ```
-1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/azure-iot-device/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
+1. Review the code for [provision_x509.py](https://github.com/Azure/azure-iot-sdk-python/blob/main/samples/async-hub-scenarios/provision_x509.py). If you're not using **Python version 3.7** or later, make the [code change mentioned here](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples/async-hub-scenarios#advanced-iot-hub-scenario-samples-for-the-azure-iot-hub-device-sdk) to replace `asyncio.run(main())`.
1. Save your changes.
iot-dps Quick Setup Auto Provision Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-bicep.md
+
+ Title: Quickstart - Create an Azure IoT Hub Device Provisioning Service (DPS) using Bicep
+description: Azure quickstart - Learn how to create an Azure IoT Hub Device Provisioning Service (DPS) using Bicep.
+ Last updated : 08/17/2022+++++++
+# Quickstart: Set up the IoT Hub Device Provisioning Service (DPS) with Bicep
+
+You can use a [Bicep](../azure-resource-manager/bicep/overview.md) file to programmatically set up the Azure cloud resources necessary for provisioning your devices. These steps show how to create an IoT hub and a new IoT Hub Device Provisioning Service instance with a Bicep file. The IoT Hub is also linked to the DPS resource using the Bicep file. This linking allows the DPS resource to assign devices to the hub based on allocation policies you configure.
++
+This quickstart uses [Azure PowerShell](../azure-resource-manager/bicep/deploy-powershell.md) and the [Azure CLI](../azure-resource-manager/bicep/deploy-cli.md) to perform the programmatic steps necessary to create a resource group and deploy the Bicep file, but you can easily use .NET, Ruby, or other programming languages to perform these steps and deploy your Bicep file.
+
+## Prerequisites
++++
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/iothub-device-provisioning/).
+
+> [!NOTE]
+> Currently there is no Bicep file support for creating enrollments with new DPS resources. This is a common and understood request that is being considered for implementation.
++
+Two Azure resources are defined in the Bicep file above:
+
+* [**Microsoft.Devices/iothubs**](/azure/templates/microsoft.devices/iothubs): Creates a new Azure IoT Hub.
+* [**Microsoft.Devices/provisioningservices**](/azure/templates/microsoft.devices/provisioningservices): Creates a new Azure IoT Hub Device Provisioning Service with the new IoT Hub already linked to it.
+
+Save a copy of the Bicep file locally as **main.bicep**.
+
+## Deploy the Bicep file
+
+Sign in to your Azure account and select your subscription.
+
+1. To sign in to Azure at the command prompt:
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az login
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+
+
+ Follow the instructions to authenticate using the code and sign in to your Azure account through a web browser.
+
+1. If you have multiple Azure subscriptions, signing in to Azure grants you access to all the Azure accounts associated with your credentials.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az account list -o table
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ Get-AzSubscription
+ ```
+
+
+
+ Use the following command to select the subscription that you want to use to run the commands to create your IoT hub and DPS resources. You can use either the subscription name or ID from the output of the previous command:
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az account set --subscription {your subscription name or id}
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ Set-AzContext -Subscription {your subscription name or id}
+ ```
+
+
+
+1. Deploy the Bicep file with the following commands.
+
+ > [!TIP]
+ > The commands will prompt for a resource group location.
+ > You can view a list of available locations by first running the command:
+ >
+ > # [CLI](#tab/CLI)
+ >
+ > `az account list-locations -o table`
+ >
+ > # [PowerShell](#tab/PowerShell)
+ >
+ > `Get-AzLocation`
+ >
+ >
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters iotHubName={IoT-Hub-name} provisioningServiceName={DPS-name}
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -iotHubName "{IoT-Hub-name}" -provisioningServiceName "{DPS-name}"
+ ```
+
+
+
+ Replace **{IoT-Hub-name}** with a globally unique IoT Hub name, and replace **{DPS-name}** with a globally unique Device Provisioning Service (DPS) resource name.
+
+ It takes a few moments to create the resources.
+
+## Review deployed resources
+
+1. To verify the deployment, run the following command and look for the new provisioning service and IoT hub in the output:
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az resource list -g exampleRg
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ Get-AzResource -ResourceGroupName exampleRG
+ ```
+
+2. To verify that the hub is already linked to the DPS resource, run the following command.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az iot dps show --name <Your provisioningServiceName>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ Get-AzIoTDeviceProvisioningService -ResourceGroupName exampleRG -Name "{DPS-name}"
+ ```
+
+ Notice the hubs that are linked on the `iotHubs` member.
+
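As an additional check, you can list the linked hubs directly. This one-liner is a sketch and assumes the `az iot dps linked-hub` command group is available in your installed Azure CLI version; substitute your own DPS name.

```azurecli
# Sketch: list the IoT hubs linked to the deployed DPS instance
az iot dps linked-hub list --dps-name "{DPS-name}" --resource-group exampleRG --output table
```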
+## Clean up resources
+
+Other quickstarts in this collection build upon this quickstart. If you plan to continue to work with subsequent quickstarts or with the tutorials, don't clean up the resources created in this quickstart. If you don't plan to continue, you can use Azure PowerShell or the Azure CLI to delete the resource group and all of its resources.
+
+To delete a resource group and all its resources from the Azure portal, open the resource group and select **Delete resource group** at the top.
+
+To delete the resource group deployed:
+
+# [CLI](#tab/CLI)
+
+```azurecli
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell
+Remove-AzResourceGroup -name exampleRG
+```
++
+You can also delete resource groups and individual resources using the Azure portal, PowerShell, or REST APIs, as well as with supported platform SDKs published for Azure Resource Manager or IoT Hub Device Provisioning Service.
+
+## Next steps
+
+In this quickstart, you've deployed an IoT hub and a Device Provisioning Service instance, and linked the two resources. To learn how to use this setup to provision a device, continue to the quickstart for creating a device.
+
+> [!div class="nextstepaction"]
+> [Quickstart: Provision a simulated symmetric key device](./quick-create-simulated-device-symm-key.md)
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-node-module.md
IoT Edge does not support Node.js modules using Windows containers.
Use the following table to understand your options for developing and deploying Node.js modules:
-| Node.js | Visual Studio Code | Visual Studio 2017/2019 |
+| Node.js | Visual Studio Code | Visual Studio 2022 |
| - | | | | **Linux AMD64** | ![Use VS Code for Node.js modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | | | **Linux ARM32** | ![Use VS Code for Node.js modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM64** | ![Use VS Code for Node.js modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
iot-hub-device-update Device Update Configure Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md
Title: 'Configure package repository for package updates | Microsoft Docs' description: Follow an example to configure package repository for package updates.-+ Last updated 8/8/2022
Following this document, learn how to configure a package repository using [OSCo
You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and Microsoft Azure Portal or Azure CLI to interact with devices via your IoT Hub. Follow the next steps to get started: - Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md). - Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](https://docs.microsoft.com/azure/iot-edge/how-to-provision-single-device-linux-symmetric?view=iotedge-2020-11&preserve-view=true&tabs=azure-portal%2Cubuntu#install-iot-edge) or higher is already installed on the device).-- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent#manually-prepare-a-device.md).
+- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device).
- Install the OSConfig agent on the device. See [how to](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom). - Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](https://docs.microsoft.com/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure)
Follow the below steps to update Azure IoT Edge on Ubuntu Server 18.04 x64 by co
1. Configure the package repository of your choice with OSConfig's configure package repo module. See [how to](https://docs.microsoft.com/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you wish to store packages to be downloaded to the device. 2. Upload your packages to the above configured repository. 3. Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository.
-4. Follow steps from [here](device-update-ubuntu-agent#prerequisites.md) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices and at scale.
-5. Monitor results of the package update by following these [steps](device-update-ubuntu-agent#monitor-the-update-deployment.md).
+4. Follow steps from [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices and at scale.
+5. Monitor results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment).
iot-hub Iot Hub Dev Guide Sas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-sas.md
Other reference topics in the IoT Hub developer guide include:
* [IoT Hub MQTT support](iot-hub-mqtt-support.md) provides more information about IoT Hub support for the MQTT protocol.
-* [RFC 5246 - The Transport Layer Security (TLS) Protocol Version 1.2](https://tools.ietf.org/html/rfc5246/) provides more information about TLS authentication.
+* [RFC 5246 - The Transport Layer Security (TLS) Protocol Version 1.2](https://www.rfc-editor.org/rfc/rfc5246) provides more information about TLS authentication.
* For more information about authentication using certificate authority, see [Device Authentication using X.509 CA Certificates](iot-hub-x509ca-overview.md)
iot-hub Iot Hub Devguide Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-sdks.md
The SDKs are available in **multiple languages** providing the flexibility to ch
| Language | Package | Source | Quickstarts | Samples | Reference | | :-- | :-- | :-- | :-- | :-- | :-- | | **.NET** | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Client) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices.client) |
-| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples) | [Reference](/python/api/azure-iot-device) |
+| **Python** | [pip](https://pypi.org/project/azure-iot-device/) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-device) |
| **Node.js** | [npm](https://www.npmjs.com/package/azure-iot-device) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/device/samples) | [Reference](/javascript/api/azure-iot-device/) | | **Java** | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-device-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/master/device/iot-device-samples) | [Reference](/java/api/com.microsoft.azure.sdk.iot.device) | | **C** | [packages](https://github.com/Azure/azure-iot-sdk-c/blob/master/readme.md#getting-the-sdk) | [GitHub](https://github.com/Azure/azure-iot-sdk-c) | [Quickstart](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-ansi-c) | [Samples](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples) | [Reference](/azure/iot-hub/iot-c-sdk-ref/) |
The Azure IoT service SDKs contain code to facilitate building applications that
| .NET | [NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices ) | [GitHub](https://github.com/Azure/azure-iot-sdk-csharp) | [Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp) | [Reference](/dotnet/api/microsoft.azure.devices) | | Java | [Maven](https://mvnrepository.com/artifact/com.microsoft.azure.sdk.iot/iot-service-client) | [GitHub](https://github.com/Azure/azure-iot-sdk-java) | [Samples](https://github.com/Azure/azure-iot-sdk-java/tree/main/service/iot-service-samples/pnp-service-sample) | [Reference](/java/api/com.microsoft.azure.sdk.iot.service) | | Node | [npm](https://www.npmjs.com/package/azure-iothub) | [GitHub](https://github.com/Azure/azure-iot-sdk-node) | [Samples](https://github.com/Azure/azure-iot-sdk-node/tree/main/service/samples) | [Reference](/javascript/api/azure-iothub/) |
-| Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-hub/samples) | [Reference](/python/api/azure-iot-hub) |
+| Python | [pip](https://pypi.org/project/azure-iot-hub) | [GitHub](https://github.com/Azure/azure-iot-sdk-python) | [Samples](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | [Reference](/python/api/azure-iot-hub) |
## Azure IoT Hub management SDKs
iot-hub Iot Hub Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-mqtt-support.md
The following table contains links to code samples for each supported language a
| [Java](https://github.com/Azure/azure-iot-sdk-java/blob/main/device/iot-device-samples/send-receive-sample/src/main/java/samples/com/microsoft/azure/sdk/iot/SendReceive.java) |[IotHubClientProtocol](/java/api/com.microsoft.azure.sdk.iot.device.iothubclientprotocol).MQTT | IotHubClientProtocol.MQTT_WS | | [C](https://github.com/Azure/azure-iot-sdk-c/tree/master/iothub_client/samples/iothub_client_sample_mqtt_dm) | [MQTT_Protocol](/azure/iot-hub/iot-c-sdk-ref/iothubtransportmqtt-h/mqtt-protocol) | [MQTT_WebSocket_Protocol](/azure/iot-hub/iot-c-sdk-ref/iothubtransportmqtt-websockets-h/mqtt-websocket-protocol) | | [C#](https://github.com/Azure/azure-iot-sdk-csharp/tree/main/iothub/device/samples) | [TransportType](/dotnet/api/microsoft.azure.devices.client.transporttype).Mqtt | TransportType.Mqtt falls back to MQTT over Web Sockets if MQTT fails. To specify MQTT over Web Sockets only, use TransportType.Mqtt_WebSocket_Only |
-| [Python](https://github.com/Azure/azure-iot-sdk-python/tree/main/azure-iot-device/samples) | Supports MQTT by default | Add `websockets=True` in the call to create the client |
+| [Python](https://github.com/Azure/azure-iot-sdk-python/tree/main/samples) | Supports MQTT by default | Add `websockets=True` in the call to create the client |
The following fragment shows how to specify the MQTT over Web Sockets protocol when using the Azure IoT Node.js SDK:
key-vault Key Vault Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/key-vault-recovery.md
Previously updated : 09/30/2020 Last updated : 08/18/2022 # Azure Key Vault recovery management with soft delete and purge protection
For more information about Key Vault, see
|Microsoft.KeyVault/locations/operationResults/read| To check purging state of vault| |[Key Vault Contributor](../../role-based-access-control/built-in-roles.md#key-vault-contributor)|To recover soft-deleted vault| - ## What are soft-delete and purge protection [Soft delete](soft-delete-overview.md) and purge protection are two different key vault recovery features.
-> [!IMPORTANT]
-> Turning on soft delete is critical to ensuring that your key vaults and credentials are protected from accidental deletion. However, turning on soft delete is considered a breaking change because it may require you to change your application logic or provide additional permissions to your service principals. Before turning on soft delete using the instructions below, please make sure that your application is compatible with the change using this document [**here**.](soft-delete-change.md)
- **Soft delete** is designed to prevent accidental deletion of your key vault and keys, secrets, and certificates stored inside key vault. Think of soft-delete like a recycle bin. When you delete a key vault or a key vault object, it will remain recoverable for a user configurable retention period or a default of 90 days. Key vaults in the soft deleted state can also be **purged** which means they are permanently deleted. This allows you to recreate key vaults and key vault objects with the same name. Both recovering and deleting key vaults and objects require elevated access policy permissions. **Once soft delete has been enabled, it cannot be disabled.**
+> [!IMPORTANT]
+> You must enable soft-delete on your key vaults immediately. The ability to opt out of soft-delete is deprecated and will be removed in February 2025. See full details [here](soft-delete-change.md)
+ It is important to note that **key vault names are globally unique**, so you won't be able to create a key vault with the same name as a key vault in the soft deleted state. Similarly, the names of keys, secrets, and certificates are unique within a key vault. You won't be able to create a secret, key, or certificate with the same name as another in the soft deleted state. **Purge protection** is designed to prevent the deletion of your key vault, keys, secrets, and certificates by a malicious insider. Think of this as a recycle bin with a time based lock. You can recover items at any point during the configurable retention period. **You will not be able to permanently delete or purge a key vault until the retention period elapses.** Once the retention period elapses the key vault or key vault object will be purged automatically. > [!NOTE]
-> Purge Protection is designed so that no administrator role or permission can override, disable, or circumvent purge protection. **Once purge protection is enabled, it cannot be disabled or overridden by anyone including Microsoft.** This means you must recover a deleted key vault or wait for the retention period to elapse before reusing the key vault name.
+> Purge Protection is designed so that no administrator role or permission can override, disable, or circumvent purge protection. **Once purge protection is enabled, it cannot be disabled or overridden by anyone including Microsoft.** This means you must recover a deleted key vault or wait for the retention period to elapse before reusing the key vault name.
For more information about soft-delete, see [Azure Key Vault soft-delete overview](soft-delete-overview.md)
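If you manage these protections from the command line, the following Azure CLI sketch shows the typical operations; the vault and resource group names are placeholders.

```azurecli
# Enable purge protection on an existing vault (soft delete is already on)
az keyvault update --name <vault-name> --resource-group <resource-group> --enable-purge-protection true

# List vaults currently in the soft-deleted state
az keyvault list-deleted

# Recover a soft-deleted vault before its retention period elapses
az keyvault recover --name <vault-name>
```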
logic-apps Edit App Settings Host Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/edit-app-settings-host-settings.md
ms.suite: integration Previously updated : 03/22/2022 Last updated : 08/16/2022
These settings affect the throughput and capacity for single-tenant Azure Logic
| Setting | Default value | Description | |||-| | `Runtime.FlowRunActionJob.MaximumActionResultSize` | `209715200` bytes | Sets the maximum size in bytes that the combined inputs and outputs can have in an action. |
-| `Runtime.ContentLink.MaximumContentSizeInBytes` | `104857600` characters | Sets the maximum size in characters that an input or output can have in a trigger or action. |
+| `Runtime.ContentLink.MaximumContentSizeInBytes` | `104857600` bytes | Sets the maximum size in bytes that an input or output can have in a trigger or action. |
|||| <a name="pagination"></a>
logic-apps Logic Apps Enterprise Integration Liquid Transform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-liquid-transform.md
This article shows you how to complete these tasks:
1. In the Azure portal, follow these steps to [create a blank logic app](../logic-apps/quickstart-create-first-logic-app-workflow.md).
-1. In the Logic App Designer, add the [Request trigger](../connectors/connectors-native-reqres.md#add-request) to your logic app.
+1. In the Logic App Designer, add the [Request trigger](../connectors/connectors-native-reqres.md#add-request-trigger) to your logic app.
1. Under the trigger, choose **New step**. In the search box, enter `liquid` as your filter, and select this action: **Transform JSON to JSON - Liquid**
machine-learning Concept Causal Inference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-causal-inference.md
Previously updated : 05/10/2022 Last updated : 08/17/2022 # Make data-driven policies and influence decision making (preview)
-While machine learning models are powerful in identifying patterns in data and making predictions, they offer little support for estimating how the real-world outcome changes in the presence of an intervention. Practitioners have become increasingly focused on using historical data to inform their future decisions and business interventions. For example, how would revenue be affected if a corporation pursues a new pricing strategy? Would a new medication improve a patientΓÇÖs condition, all else equal?
+While machine learning models are powerful in identifying patterns in data and making predictions, they offer little support for estimating how the real-world outcome changes in the presence of an intervention. Practitioners have become increasingly focused on using historical data to inform their future decisions and business interventions. For example, how would the revenue be affected if a corporation pursues a new pricing strategy? Would a new medication improve a patientΓÇÖs condition, all else equal?
-The Causal Inference component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) addresses these questions by estimating the effect of a feature on an outcome of interest on average, across a population or a cohort and on an individual level. It also helps to construct promising interventions by simulating different feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow decision makers to apply new policies and affect real-world change.
+The Causal Inference component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) addresses these questions by estimating the effect of a feature on an outcome of interest on average, across a population or a cohort, and on an individual level. It also helps to construct promising interventions by simulating different feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow decision-makers to apply new policies and affect real-world change.
-The capabilities of this component are founded by [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via [double machine learning](https://econml.azurewebsites.net/spec/estimation/dml.html) technique.
+The capabilities of this component are founded by the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via [double machine learning](https://econml.azurewebsites.net/spec/estimation/dml.html) technique.
Use Causal Inference when you need to: - Identify the features that have the most direct effect on your outcome of interest. - Decide what overall treatment policy to take to maximize real-world impact on an outcome of interest. - Understand how individuals with certain feature values would respond to a particular treatment policy.-- The causal effects computed based on the treatment features is purely a data property. Hence, a trained model is optional when computing the causal effects.
-## How are causal inference insights generated?
-> [!NOTE]
-> Only historic data is required to generate causal insights.
+## How are causal inference insights generated?
+>[!NOTE]
+> Only historic data is required to generate causal insights. The causal effects computed based on the treatment features are purely a data property. Hence, a trained model is optional when computing the causal effects.
-Double Machine Learning is a method for estimating (heterogeneous) treatment effects when all potential confounders/controls (factors that simultaneously had a direct effect on the treatment decision in the collected data and the observed outcome) are observed but are either too many (high-dimensional) for classical statistical approaches to be applicable or their effect on the treatment and outcome can't be satisfactorily modeled by parametric functions (non-parametric). Both latter problems can be addressed via machine learning techniques (for an example, see [Chernozhukov2016](https://econml.azurewebsites.net/spec/references.html#chernozhukov2016)).
+Double Machine Learning is a method for estimating (heterogeneous) treatment effects when all potential confounders/controls (factors that simultaneously had a direct effect on the treatment decision in the collected data and the observed outcome) are observed but are either too many (high-dimensional) for classical statistical approaches to be applicable or their effect on the treatment and outcome can't be satisfactorily modeled by parametric functions (non-parametric). Both latter problems can be addressed via machine learning techniques (to see an example, check out [Chernozhukov2016](https://econml.azurewebsites.net/spec/references.html#chernozhukov2016)).
-The method reduces the problem to first estimating two predictive tasks:
+The method reduces the problem by first estimating two predictive tasks:
- Predicting the outcome from the controls - Predicting the treatment from the controls
-Then the method combines these two predictive models in a final stage estimation to create a model of the heterogeneous treatment effect. The approach allows for arbitrary machine learning algorithms to be used for the two predictive tasks, while maintaining many favorable statistical properties related to the final model (for example, small mean squared error, asymptotic normality, construction of confidence intervals).
+Then the method combines these two predictive models in a final stage estimation to create a model of the heterogeneous treatment effect. The approach allows for arbitrary machine learning algorithms to be used for the two predictive tasks while maintaining many favorable statistical properties related to the final model (for example, small mean squared error, asymptotic normality, and construction of confidence intervals).
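To make the two-stage recipe concrete, here is a minimal sketch using the EconML package mentioned above. It is an illustration only: the DataFrame `df`, its column names, and the gradient-boosted nuisance models are assumptions, not part of the original article.

```python
# A minimal sketch, not the article's own code: the two-stage double machine
# learning recipe via EconML. `df` is an assumed pandas DataFrame.
from econml.dml import LinearDML
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical columns: outcome Y, continuous treatment T, effect modifiers X,
# and additional confounders/controls W.
Y = df["outcome"].values
T = df["treatment"].values
X = df[["feature_1", "feature_2"]].values
W = df[["control_1", "control_2"]].values

# Stage 1 (handled internally): predict the outcome and the treatment from the
# controls. Stage 2: combine the residuals to model the heterogeneous effect.
est = LinearDML(
    model_y=GradientBoostingRegressor(),  # outcome model
    model_t=GradientBoostingRegressor(),  # treatment model (continuous treatment)
    random_state=0,
)
est.fit(Y, T, X=X, W=W)

ate = est.ate(X)                                   # average treatment effect
effects = est.effect(X)                            # per-row treatment effects
lower, upper = est.effect_interval(X, alpha=0.05)  # confidence intervals
```

For a binary treatment, you would typically pass `discrete_treatment=True` and a classifier as the treatment model instead.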
## What other tools does Microsoft provide for causal inference?
-[Project Azua](https://www.microsoft.com/research/project/project_azua/) provides a novel framework focusing on end-to-end causal inference. Azua's technology DECI (deep end-to-end causal inference) is a single model that can simultaneously do causal discovery and causal inference. We only require the user to provide data, and the model can output the causal relationships among all different variables. By itself, this can provide insights into the data and enables metrics such as individual treatment effect (ITE), average treatment effect (ATE) and conditional average treatment effect (CATE) to be calculated, which can then be used to make optimal decisions. The framework is scalable for large data, both in terms of the number of variables and the number of data points; it can also handle missing data entries with mixed statistical types.
+[Project Azua](https://www.microsoft.com/research/project/project_azua/) provides a novel framework focusing on end-to-end causal inference. Azua's technology DECI (deep end-to-end causal inference) is a single model that can simultaneously do causal discovery and causal inference. We only require the user to provide data, and the model can output the causal relationships among all different variables. By itself, this can provide insights into the data and enables metrics such as individual treatment effect (ITE), average treatment effect (ATE), and conditional average treatment effect (CATE) to be calculated, which can then be used to make optimal decisions. The framework is scalable for large data, both in terms of the number of variables and the number of data points; it can also handle missing data entries with mixed statistical types.
-[EconML](https://www.microsoft.com/research/project/econml/) (powering the backend of the Responsible AI dashboard) is a Python package that applies the power of machine learning techniques to estimate individualized causal responses from observational or experimental data. The suite of estimation methods provided in EconML represents the latest advances in causal machine learning. By incorporating individual machine learning steps into interpretable causal models, these methods improve the reliability of what-if predictions and make causal analysis quicker and easier for a broad set of users.
+[EconML](https://www.microsoft.com/research/project/econml/) (powering the backend of the Responsible AI dashboard's causal inference component) is a Python package that applies the power of machine learning techniques to estimate individualized causal responses from observational or experimental data. The suite of estimation methods provided in EconML represents the latest advances in causal machine learning. By incorporating individual machine learning steps into interpretable causal models, these methods improve the reliability of what-if predictions and make causal analysis quicker and easier for a broad set of users.
-[DoWhy](https://py-why.github.io/dowhy/) is a Python library that aims to spark causal thinking and analysis. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, front-door, instrumental variable and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
+[DoWhy](https://py-why.github.io/dowhy/) is a Python library that aims to spark causal thinking and analysis. DoWhy provides a principled four-step interface for causal inference that focuses on explicitly modeling causal assumptions and validating them as much as possible. The key feature of DoWhy is its state-of-the-art refutation API that can automatically test causal assumptions for any estimation method, thus making inference more robust and accessible to non-experts. DoWhy supports estimation of the average causal effect for backdoor, front-door, instrumental variable, and other identification methods, and estimation of the conditional effect (CATE) through an integration with the EconML library.
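As a rough, hedged illustration of DoWhy's four-step interface (model, identify, estimate, refute), the sketch below walks through one pass. The DataFrame `df`, its column names, and the chosen estimator and refuter are assumptions for the example.

```python
# A hedged sketch of DoWhy's four-step workflow; `df` is an assumed DataFrame.
from dowhy import CausalModel

# Step 1: model the causal assumptions.
model = CausalModel(
    data=df,
    treatment="treatment",
    outcome="outcome",
    common_causes=["confounder_1", "confounder_2"],
)

# Step 2: identify the target estimand under those assumptions.
identified_estimand = model.identify_effect(proceed_when_unidentifiable=True)

# Step 3: estimate the effect (here, a simple backdoor linear regression).
estimate = model.estimate_effect(
    identified_estimand, method_name="backdoor.linear_regression"
)

# Step 4: refute -- stress-test the estimate, e.g. by adding a random common cause.
refutation = model.refute_estimate(
    identified_estimand, estimate, method_name="random_common_cause"
)
print(estimate.value, refutation)
```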
## Next steps -- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md)-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md)) based on the insights observed in the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Explore the [supported causal inference visualizations](how-to-responsible-ai-dashboard.md#causal-analysis) of the Responsible AI dashboard.
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Counterfactual Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-counterfactual-analysis.md
Previously updated : 05/10/2022 Last updated : 08/17/2022 # Counterfactuals analysis and what-if (preview)
-What-if counterfactuals address the question of "what would the model predict if the action input is changed", enables understanding and debugging of a machine learning model in terms of how it reacts to input (feature) changes. Compared with approximating a machine learning model or ranking features by their predictive importance (which standard interpretability techniques do), counterfactual analysis "interrogates" a model to determine what changes to a particular datapoint would flip the model decision. Such an analysis helps in disentangling the impact of different correlated features in isolation or for acquiring a more nuanced understanding on how much of a feature change is needed to see a model decision flip for classification models and decision change for regression models.
+What-if counterfactuals address the question of "what would the model predict if the action input is changed", enabling understanding and debugging of a machine learning model in terms of how it reacts to input (feature) changes. Compared with approximating a machine learning model or ranking features by their predictive importance (which standard interpretability techniques do), counterfactual analysis "interrogates" a model to determine what changes to a particular datapoint would flip the model decision. Such an analysis helps in disentangling the impact of different correlated features in isolation or for acquiring a more nuanced understanding of how much of a feature change is needed to see a model decision flip for classification models and decision change for regression models.
The Counterfactual Analysis and what-if component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) consists of two functionalities: -- Generating a set of examples with minimal changes to a given point such that they change the model's prediction (showing the closest datapoints with opposite model precisions)
+- Generating a set of examples with minimal changes to a given point such that they change the model's prediction (showing the closest data points with opposite model predictions)
- Enabling users to generate their own what-if perturbations to understand how the model reacts to features' changes.
-The capabilities of this component are founded by the [DiCE](https://github.com/interpretml/DiCE) package, which implements counterfactual explanations that provide this information by showing feature-perturbed versions of the same datapoint who would have received a different model prediction (for example, Taylor would have received the loan if their income was higher by $10,000). The counterfactual analysis component enables you to identify which features to vary and their permissible ranges for valid and logical counterfactual examples.
+One of the top differentiators of the Responsible AI dashboard's counterfactual analysis component is that you can identify which features to vary and their permissible ranges for valid and logical counterfactual examples.
+++
+The capabilities of this component are founded by the [DiCE](https://github.com/interpretml/DiCE) package.
+ Use What-If Counterfactuals when you need to: - Examine fairness and reliability criteria as a decision evaluator (by perturbing sensitive attributes such as gender, ethnicity, etc., and observing whether model predictions change). - Debug specific input instances in depth.-- Provide solutions to end users and determining what they can do to get a desirable outcome from the model next time.
+- Provide solutions to end users and determine what they can do to get a desirable outcome from the model next time.
## How are counterfactual examples generated?
To generate counterfactuals, DiCE implements a few model-agnostic techniques. Th
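A minimal sketch of that workflow with the DiCE package is shown below, including the `features_to_vary` and `permitted_range` options that keep counterfactuals valid and logical. The classifier `clf`, the DataFrame `train_df`, the column names, and the chosen method are assumptions for the example.

```python
# A minimal sketch, assuming a trained scikit-learn classifier `clf` and a
# pandas DataFrame `train_df` with a binary "loan_approved" outcome column.
import dice_ml

data = dice_ml.Data(
    dataframe=train_df,
    continuous_features=["income", "age"],
    outcome_name="loan_approved",
)
model = dice_ml.Model(model=clf, backend="sklearn")
explainer = dice_ml.Dice(data, model, method="random")

# Request three counterfactuals for one query instance, letting only selected
# features vary and keeping them inside plausible, user-defined ranges.
query = train_df.drop(columns=["loan_approved"]).iloc[[0]]
counterfactuals = explainer.generate_counterfactuals(
    query,
    total_CFs=3,
    desired_class="opposite",
    features_to_vary=["income", "age"],
    permitted_range={"income": [20000, 150000]},
)
counterfactuals.visualize_as_dataframe(show_only_changes=True)
```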
## Next steps -- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md)-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md)) based on the insights observed in the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Explore the [supported counterfactual analysis and what-if perturbation visualizations](how-to-responsible-ai-dashboard.md#counterfactual-what-if) of the Responsible AI dashboard.
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Data Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-analysis.md
Previously updated : 05/10/2022 Last updated : 08/17/2022 # Understand your datasets (preview)
-Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, this can cause a model to incorrectly predict datapoints belonging to an underrepresented group or to be optimized along an inappropriate metric. For example, while training a housing price prediction AI, the training set was representing 75% of newer houses that have less than median prices. As a result, it was much less successful in successfully identifying more expensive historic houses. The fix was to add older and expensive houses to the training data and augment the features to include insights about the historic value of the house. Upon incorporating that data augmentation, results improved.
+Machine learning models "learn" from historical decisions and actions captured in training data. As a result, their performance in real-world scenarios is heavily influenced by the data they're trained on. When feature distribution in a dataset is skewed, it can cause a model to incorrectly predict data points belonging to an underrepresented group or to be optimized along an inappropriate metric. For example, when training a housing price prediction AI, 75% of the training set represented newer houses with below-median prices. As a result, the model was much less accurate in identifying more expensive historic houses. The fix was to add older and expensive houses to the training data and augment the features to include insights about the historic value of the house. Upon incorporating that data augmentation, results improved.
-The Data Explorer component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. This enables you to identify issues of over- and underrepresentation and to see how data is clustered in the dataset. Data visualizations consist of aggregate plots or individual datapoints.
+The Data Explorer component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) helps visualize datasets based on predicted and actual outcomes, error groups, and specific features. This enables you to identify issues of over- and under-representation and to see how data is clustered in the dataset. Data visualizations consist of aggregate plots or individual data points.
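The dashboard does this interactively, but as a rough illustration of the kind of representation and error-rate check it surfaces, a plain pandas equivalent might look like the sketch below. The DataFrame `test_df` and its column names are assumptions, not part of the original article.

```python
# Not the dashboard itself -- just a rough pandas illustration of the kind of
# cohort representation and error-rate check the component surfaces visually.
summary = (
    test_df.assign(error=test_df["prediction"] != test_df["label"])
    .groupby("house_age_bucket")
    .agg(count=("label", "size"), error_rate=("error", "mean"))
    .assign(share=lambda d: d["count"] / d["count"].sum())
)
print(summary)  # reveals under-represented cohorts and where errors concentrate
```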
-## When to use Data Explorer
+## When to use Data Explorer?
Use Data Explorer when you need to:
Use Data Explorer when you need to:
## Next steps -- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md)-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Explore the [supported data explorer visualizations](how-to-responsible-ai-dashboard.md#data-explorer) of the Responsible AI dashboard.
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Data Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-data-encryption.md
Each workspace has an associated system-assigned managed identity that has the s
* [Connect to Azure storage](how-to-access-data.md) * [Get data from a datastore](how-to-create-register-datasets.md)
-* [Connect to data](how-to-connect-data-ui.md)
-* [Train with datasets](how-to-train-with-datasets.md)
+* [Connect to data](v1/how-to-connect-data-ui.md)
+* [Train with datasets](v1/how-to-train-with-datasets.md)
* [Customer-managed keys](concept-customer-managed-keys.md).
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
Previously updated : 10/21/2021 Last updated : 08/03/2022
Azure Machine Learning designer is a drag-and-drop interface used to train and d
The designer uses your Azure Machine Learning [workspace](concept-workspace.md) to organize shared resources such as: + [Pipelines](#pipeline)
-+ [Datasets](#datasets)
++ [Data](#data) + [Compute resources](#compute) + [Registered models](v1/concept-azure-machine-learning-architecture.md#models) + [Published pipelines](#publish)
The designer uses your Azure Machine Learning [workspace](concept-workspace.md)
Use a visual canvas to build an end-to-end machine learning workflow. Train, test, and deploy models all in the designer:
-+ Drag-and-drop [datasets](#datasets) and [components](#component) onto the canvas.
++ Drag-and-drop [data assets](#data) and [components](#component) onto the canvas. + Connect the components to create a [pipeline draft](#pipeline-draft). + Submit a [pipeline run](#pipeline-job) using the compute resources in your Azure Machine Learning workspace. + Convert your **training pipelines** to **inference pipelines**.
-+ [Publish](#publish) your pipelines to a REST **pipeline endpoint** to submit a new pipeline that runs with different parameters and datasets.
- + Publish a **training pipeline** to reuse a single pipeline to train multiple models while changing parameters and datasets.
++ [Publish](#publish) your pipelines to a REST **pipeline endpoint** to submit a new pipeline that runs with different parameters and data assets.
+ + Publish a **training pipeline** to reuse a single pipeline to train multiple models while changing parameters and data assets.
+ Publish a **batch inference pipeline** to make predictions on new data by using a previously trained model. + [Deploy](#deploy) a **real-time inference pipeline** to an online endpoint to make predictions on new data in real time.
Use a visual canvas to build an end-to-end machine learning workflow. Train, tes
## Pipeline
-A [pipeline](v1/concept-azure-machine-learning-architecture.md#ml-pipelines) consists of datasets and analytical components, which you connect. Pipelines have many uses: you can make a pipeline that trains a single model, or one that trains multiple models. You can create a pipeline that makes predictions in real time or in batch, or make a pipeline that only cleans data. Pipelines let you reuse your work and organize your projects.
+A [pipeline](v1/concept-azure-machine-learning-architecture.md#ml-pipelines) consists of data assets and analytical components, which you connect. Pipelines have many uses: you can make a pipeline that trains a single model, or one that trains multiple models. You can create a pipeline that makes predictions in real time or in batch, or make a pipeline that only cleans data. Pipelines let you reuse your work and organize your projects.
### Pipeline draft
As you edit a pipeline in the designer, your progress is saved as a **pipeline d
A valid pipeline has these characteristics:
-* Datasets can only connect to components.
-* components can only connect to either datasets or other components.
+* Data assets can only connect to components.
+* Components can only connect to either data assets or other components.
* All input ports for components must have some connection to the data flow. * All required parameters for each component must be set.
Each time you run a pipeline, the configuration of the pipeline and its results
Pipeline jobs are grouped into [experiments](v1/concept-azure-machine-learning-architecture.md#experiments) to organize job history. You can set the experiment for every pipeline job.
-## Datasets
+## Data
-A machine learning dataset makes it easy to access and work with your data. Several [sample datasets](samples-designer.md#datasets) are included in the designer for you to experiment with. You can [register](how-to-create-register-datasets.md) more datasets as you need them.
+A machine learning data asset makes it easy to access and work with your data. Several [sample data assets](samples-designer.md#datasets) are included in the designer for you to experiment with. You can [register](how-to-create-register-datasets.md) more data assets as you need them.
## Component
machine-learning Concept Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-differential-privacy.md
The system library provides the following tools and services for working with ta
Learn more about differential privacy in machine learning:
+ - [How to build a differentially private system](v1/how-to-differential-privacy.md) in Azure Machine Learning with SDK v1.
- To learn more about the components of SmartNoise, check out the GitHub repositories for [SmartNoise Core](https://github.com/opendifferentialprivacy/smartnoise-core), [SmartNoise SDK](https://github.com/opendifferentialprivacy/smartnoise-sdk), and [SmartNoise samples](https://github.com/opendifferentialprivacy/smartnoise-samples).
machine-learning Concept Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md
You can [override compute resource settings](how-to-use-batch-endpoint.md#config
You can use the following options for input data when invoking a batch endpoint: -- Cloud data - Either a path on Azure Machine Learning registered datastore, a reference to Azure Machine Learning registered V2 data asset, or a public URI. For more information, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md)
+- Cloud data - Either a path on Azure Machine Learning registered datastore, a reference to Azure Machine Learning registered V2 data asset, or a public URI. For more information, see [Connect to data with the Azure Machine Learning studio](v1/how-to-connect-data-ui.md)
- Data stored locally - it will be automatically uploaded to the Azure ML registered datastore and passed to the batch endpoint. > [!NOTE]
machine-learning Concept Error Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-error-analysis.md
Previously updated : 05/10/2022 Last updated : 08/17/2022 # Assess errors in ML models (preview)
-One of the most apparent challenges with current model debugging practices is using aggregate metrics to score models on a benchmark. Model accuracy may not be uniform across subgroups of data, and there might exist input cohorts for which the model fails more often. The direct consequences of these failures are a lack of reliability and safety, unfairness, and a loss of trust in machine learning altogether.
+One of the most apparent challenges with current model debugging practices is using aggregate metrics to score models on a benchmark dataset. Model accuracy may not be uniform across subgroups of data, and there might exist input cohorts for which the model fails more often. The direct consequences of these failures are a lack of reliability and safety, the appearance of fairness issues, and a loss of trust in machine learning altogether.
:::image type="content" source="./media/concept-error-analysis/error-analysis.png" alt-text="Diagram showing benchmark and machine learning model point to accurate then to different regions fail for different reasons."::: Error Analysis moves away from aggregate accuracy metrics, exposes the distribution of errors to developers in a transparent way, and enables them to identify & diagnose errors efficiently.
-The Error Analysis component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) provides machine learning practitioners with a deeper understanding of model failure distribution and assists them with quickly identifying erroneous cohorts of data. It contributes to the "identify" stage of the model lifecycle workflow through a decision tree that reveals cohorts with high error rates and a heatmap that visualizes how a few input features impact the error rate across cohorts. Discrepancies in error might occur when the system underperforms for specific demographic groups or infrequently observed input cohorts in the training data.
+The Error Analysis component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) provides machine learning practitioners with a deeper understanding of model failure distribution and assists them with quickly identifying erroneous cohorts of data. It contributes to the "identify" stage of the model lifecycle workflow through a decision tree that reveals cohorts with high error rates and a heatmap that visualizes how input features impact the error rate across cohorts. Discrepancies in error might occur when the system underperforms for specific demographic groups or infrequently observed input cohorts in the training data.
-The capabilities of this component are founded by [Error Analysis](https://erroranalysis.ai/)) capabilities on generating model error profiles.
+The capabilities of this component are founded by the [Error Analysis](https://erroranalysis.ai/) package, which generates model error profiles.
Use Error Analysis when you need to: - Gain a deep understanding of how model failures are distributed across a given dataset and across several input and feature dimensions.-- Break down the aggregate performance metrics to automatically discover erroneous cohorts and take targeted mitigation steps.
+- Break down the aggregate performance metrics to automatically discover erroneous cohorts in order to inform your targeted mitigation steps.
-## How are error analyses generated
+## How are error analyses generated?
Error Analysis identifies the cohorts of data with a higher error rate versus the overall benchmark error rate. The dashboard allows for error exploration by using either a decision tree or a heatmap guided by errors.
Often, error patterns may be complex and involve more than one or two features.
- **Error rate**: a portion of instances in the node for which the model is incorrect. This is shown through the intensity of the red color. - **Error coverage**: a portion of all errors that fall into the node. This is shown through the fill rate of the node.-- **Data representation**: number of instances in the node. This is shown through the thickness of the incoming edge to the node along with the actual total number of instances in the node.
+- **Data representation**: number of instances in each node of the error tree. This is shown through the thickness of the incoming edge to the node along with the actual total number of instances in the node.
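Outside the dashboard, a similar error tree and heatmap can be explored locally. The following is a minimal sketch, assuming the open-source `raiwidgets` package and a trained scikit-learn model; `clf`, `X_test`, `y_test`, and `feature_names` are illustrative assumptions.

```python
# A rough local sketch, assuming the open-source `raiwidgets` package renders
# the same error tree and heatmap for a trained scikit-learn model.
from raiwidgets import ErrorAnalysisDashboard

ErrorAnalysisDashboard(
    model=clf,
    dataset=X_test,          # test features
    true_y=y_test,           # ground-truth labels
    features=feature_names,  # column names shown in the tree and heatmap
)
```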
+ ## Error Heatmap The view slices the data based on a one- or two-dimensional grid of input features. Users can choose the input features of interest for analysis. The heatmap visualizes cells with higher error with a darker red color to bring the user's attention to regions with high error discrepancy. This is beneficial especially when the error themes are different in different partitions, which happens frequently in practice. In this error identification view, the analysis is highly guided by the users and their knowledge or hypotheses of what features might be most important for understanding failure. + ## Next steps -- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md)-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md)) based on the insights observed in the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Explore the [supported Error Analysis visualizations](how-to-responsible-ai-dashboard.md#error-analysis).
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Fairness Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-fairness-ml.md
-- Previously updated : 10/21/2021++ Last updated : 08/17/2022 #Customer intent: As a data scientist, I want to learn about machine learning fairness and how to assess and mitigate unfairness in machine learning models.
-# Machine learning fairness (preview)
+# Model performance and fairness (preview)
-Learn about machine learning fairness and how the [Fairlearn](https://fairlearn.github.io/) open-source Python package can help you assess and mitigate unfairness issues in machine learning models.
+This article describes methods you can use for understanding your model performance and fairness in Azure Machine Learning.
## What is machine learning fairness?
-Artificial intelligence and machine learning systems can display unfair behavior. One way to define unfair behavior is by its harm, or impact on people. There are many types of harm that AI systems can give rise to. See the [NeurIPS 2017 keynote by Kate Crawford](https://www.youtube.com/watch?v=fMym_BKWQzk) to learn more.
+Artificial intelligence and machine learning systems can display unfair behavior. One way to define unfair behavior is by its harm, or impact on people. There are many types of harm that AI systems can give rise to. To learn more, see the [NeurIPS 2017 keynote by Kate Crawford](https://www.youtube.com/watch?v=fMym_BKWQzk).
Two common types of AI-caused harms are: - Harm of allocation: An AI system extends or withholds opportunities, resources, or information for certain groups. Examples include hiring, school admissions, and lending where a model might be much better at picking good candidates among a specific group of people than among other groups. -- Harm of quality-of-service: An AI system doesn’t work as well for one group of people as it does for another. As an example, a voice recognition system might fail to work as well for women as it does for men.
+- Harm of quality-of-service: An AI system doesn't work as well for one group of people as it does for another. As an example, a voice recognition system might fail to work as well for women as it does for men.
-To reduce unfair behavior in AI systems, you have to assess and mitigate these harms.
-
-## Fairness assessment and mitigation with Fairlearn
-
-Fairlearn is an open-source Python package that allows machine learning systems developers to assess their systems' fairness and mitigate unfairness.
+To reduce unfair behavior in AI systems, you have to assess and mitigate these harms. The model overview component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the "identify" stage of the model lifecycle by generating various model performance metrics across your entire dataset, your identified cohorts of data, and subgroups identified in terms of **sensitive features** or sensitive attributes.
>[!NOTE]
-> Fairness is a socio-technical challenge. Many aspects of fairness, such as justice and due process, are not captured in quantitative fairness metrics. Also, many quantitative fairness metrics can't all be satisfied simultaneously. The goal with the Fairlearn open-source package is to enable humans to assess different impact and mitigation strategies. Ultimately, it is up to the human users building artificial intelligence and machine learning models to make trade-offs that are appropriate to their scenario.
-
-The Fairlearn open-source package has two components:
--- Assessment Dashboard: A Jupyter notebook widget for assessing how a model's predictions affect different groups. It also enables comparing multiple models by using fairness and performance metrics.-- Mitigation Algorithms: A set of algorithms to mitigate unfairness in binary classification and regression.
+> Fairness is a socio-technical challenge. Many aspects of fairness, such as justice and due process, are not captured in quantitative fairness metrics. Also, many quantitative fairness metrics can't all be satisfied simultaneously. The goal of the Fairlearn open-source package is to enable humans to assess the different impact and mitigation strategies. Ultimately, it is up to the human users building artificial intelligence and machine learning models to make trade-offs that are appropriate to their scenario.
-Together, these components enable data scientists and business leaders to navigate any trade-offs between fairness and performance, and to select the mitigation strategy that best fits their needs.
+In this component of the Responsible AI dashboard, fairness is conceptualized through an approach known as **group fairness**, which asks: Which groups of individuals are at risk for experiencing harm? The term **sensitive features** suggests that the system designer should be sensitive to these features when assessing group fairness.
-## Assess fairness in machine learning models
+During the assessment phase, fairness is quantified through disparity metrics. **Disparity metrics** can evaluate and compare model behavior across different groups either as ratios or as differences. The Responsible AI dashboard supports two classes of disparity metrics:
-In the Fairlearn open-source package, fairness is conceptualized through an approach known as **group fairness**, which asks: Which groups of individuals are at risk for experiencing harms? The relevant groups, also known as subpopulations, are defined through **sensitive features** or sensitive attributes. Sensitive features are passed to an estimator in the Fairlearn open-source package as a vector or a matrix called `sensitive_features`. The term suggests that the system designer should be sensitive to these features when assessing group fairness.
-
-Something to be mindful of is whether these features contain privacy implications due to private data. But the word "sensitive" doesn't imply that these features shouldn't be used to make predictions.
-
->[!NOTE]
-> A fairness assessment is not a purely technical exercise. The Fairlearn open-source package can help you assess the fairness of a model, but it will not perform the assessment for you. The Fairlearn open-source package helps identify quantitative metrics to assess fairness, but developers must also perform a qualitative analysis to evaluate the fairness of their own models. The sensitive features noted above is an example of this kind of qualitative analysis.
-
-During assessment phase, fairness is quantified through disparity metrics. **Disparity metrics** can evaluate and compare model's behavior across different groups either as ratios or as differences. The Fairlearn open-source package supports two classes of disparity metrics:
---- Disparity in model performance: These sets of metrics calculate the disparity (difference) in the values of the selected performance metric across different subgroups. Some examples include:
+- Disparity in model performance: These sets of metrics calculate the disparity (difference) in the values of the selected performance metric across different subgroups of data. Some examples include:
- disparity in accuracy rate - disparity in error rate
During assessment phase, fairness is quantified through disparity metrics. **Dis
- disparity in MAE - many others -- Disparity in selection rate: This metric contains the difference in selection rate among different subgroups. An example of this is disparity in loan approval rate. Selection rate means the fraction of datapoints in each class classified as 1 (in binary classification) or distribution of prediction values (in regression).
+- Disparity in selection rate: This metric contains the difference in selection rate (favorable prediction) among different subgroups. An example of this is disparity in loan approval rate. Selection rate means the fraction of data points in each class classified as 1 (in binary classification) or distribution of prediction values (in regression).
+
+The fairness assessment capabilities of this component are founded by the [Fairlearn](https://fairlearn.org/) package, providing a collection of model fairness assessment metrics and unfairness mitigation algorithms.
+
+>[!NOTE]
+> A fairness assessment is not a purely technical exercise. The Fairlearn open-source package can help you assess the fairness of a model, but it will not perform the assessment for you. The Fairlearn open-source package helps identify quantitative metrics to assess fairness, but developers must also perform a qualitative analysis to evaluate the fairness of their own models. The sensitive features noted above is an example of this kind of qualitative analysis.
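To make the disparity metrics above concrete, here's a minimal sketch using Fairlearn's `MetricFrame`; `y_test`, `y_pred`, and the sensitive feature column are assumptions for the example.

```python
# A minimal sketch of computing disparity metrics with Fairlearn's MetricFrame.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

metric_frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=test_df["sex"],  # assumed sensitive feature column
)

print(metric_frame.by_group)      # metric values per sensitive subgroup
print(metric_frame.difference())  # disparity expressed as a difference
print(metric_frame.ratio())       # disparity expressed as a ratio
```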
## Mitigate unfairness in machine learning models
-### Parity constraints
+Once you understand your model's fairness issues, you can use [Fairlearn](https://fairlearn.org/)'s mitigation algorithms to address them.
-The Fairlearn open-source package includes a variety of unfairness mitigation algorithms. These algorithms support a set of constraints on the predictor's behavior called **parity constraints** or criteria. Parity constraints require some aspects of the predictor behavior to be comparable across the groups that sensitive features define (for example, different races). The mitigation algorithms in the Fairlearn open-source package use such parity constraints to mitigate the observed fairness issues.
+The Fairlearn open-source package includes various unfairness mitigation algorithms. These algorithms support a set of constraints on the predictor's behavior called **parity constraints** or criteria. Parity constraints require some aspects of the predictor behavior to be comparable across the groups that sensitive features define (for example, different races). The mitigation algorithms in the Fairlearn open-source package use such parity constraints to mitigate the observed fairness issues.
>[!NOTE] > Mitigating unfairness in a model means reducing the unfairness, but this technical mitigation cannot eliminate this unfairness completely. The unfairness mitigation algorithms in the Fairlearn open-source package can provide suggested mitigation strategies to help reduce unfairness in a machine learning model, but they are not solutions to eliminate unfairness completely. There may be other parity constraints or criteria that should be considered for each particular developer's machine learning model. Developers using Azure Machine Learning must determine for themselves if the mitigation sufficiently eliminates any unfairness in their intended use and deployment of machine learning models.
-The Fairlearn open-source package supports the following types of parity constraints:
+The Fairlearn open-source package supports the following types of parity constraints:
|Parity constraint | Purpose |Machine learning task | ||||
The Fairlearn open-source package supports the following types of parity constra
The Fairlearn open-source package provides postprocessing and reduction unfairness mitigation algorithms: -- Reduction: These algorithms take a standard black-box machine learning estimator (for example, a LightGBM model) and generate a set of retrained models using a sequence of re-weighted training datasets. For example, applicants of a certain gender might be up-weighted or down-weighted to retrain models and reduce disparities across different gender groups. Users can then pick a model that provides the best trade-off between accuracy (or other performance metric) and disparity, which generally would need to be based on business rules and cost calculations.
+- Reduction: These algorithms take a standard black-box machine learning estimator (for example, a LightGBM model) and generate a set of retrained models using a sequence of reweighted training datasets. For example, applicants of a certain gender might be up-weighted or down-weighted to retrain models and reduce disparities across different gender groups. Users can then pick a model that provides the best trade-off between accuracy (or other performance metric) and disparity, which generally would need to be based on business rules and cost calculations.
- Post-processing: These algorithms take an existing classifier and the sensitive feature as input. Then, they derive a transformation of the classifier's prediction to enforce the specified fairness constraints. The biggest advantage of threshold optimization is its simplicity and flexibility as it doesn't need to retrain the model. | Algorithm | Description | Machine learning task | Sensitive features | Supported parity constraints | Algorithm Type | | | | | | | |
-| `ExponentiatedGradient` | Black-box approach to fair classification described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453) | Binary classification | Categorical | [Demographic parity](#parity-constraints), [equalized odds](#parity-constraints) | Reduction |
-| `GridSearch` | Black-box approach described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453)| Binary classification | Binary | [Demographic parity](#parity-constraints), [equalized odds](#parity-constraints) | Reduction |
-| `GridSearch` | Black-box approach that implements a grid-search variant of Fair Regression with the algorithm for bounded group loss described in [Fair Regression: Quantitative Definitions and Reduction-based Algorithms](https://arxiv.org/abs/1905.12843) | Regression | Binary | [Bounded group loss](#parity-constraints) | Reduction |
-| `ThresholdOptimizer` | Postprocessing algorithm based on the paper [Equality of Opportunity in Supervised Learning](https://arxiv.org/abs/1610.02413). This technique takes as input an existing classifier and the sensitive feature, and derives a monotone transformation of the classifier's prediction to enforce the specified parity constraints. | Binary classification | Categorical | [Demographic parity](#parity-constraints), [equalized odds](#parity-constraints) | Post-processing |
+| `ExponentiatedGradient` | Black-box approach to fair classification described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453) | Binary classification | Categorical | Demographic parity, equalized odds| Reduction |
+| `GridSearch` | Black-box approach described in [A Reductions Approach to Fair Classification](https://arxiv.org/abs/1803.02453)| Binary classification | Binary | Demographic parity, equalized odds | Reduction |
+| `GridSearch` | Black-box approach that implements a grid-search variant of Fair Regression with the algorithm for bounded group loss described in [Fair Regression: Quantitative Definitions and Reduction-based Algorithms](https://arxiv.org/abs/1905.12843) | Regression | Binary | Bounded group loss| Reduction |
+| `ThresholdOptimizer` | Postprocessing algorithm based on the paper [Equality of Opportunity in Supervised Learning](https://arxiv.org/abs/1610.02413). This technique takes as input an existing classifier and the sensitive feature, and derives a monotone transformation of the classifier's prediction to enforce the specified parity constraints. | Binary classification | Categorical | Demographic parity, equalized odds| Post-processing |
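As a hedged sketch of one reduction algorithm from the table above, the following applies `ExponentiatedGradient` with a demographic parity constraint; the base estimator, training data, and sensitive feature `A_train` are illustrative assumptions.

```python
# A hedged sketch of a reduction-based mitigation: ExponentiatedGradient with
# a demographic parity constraint. Data and estimator are assumptions.
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.linear_model import LogisticRegression

mitigator = ExponentiatedGradient(
    estimator=LogisticRegression(max_iter=1000),
    constraints=DemographicParity(),
)
mitigator.fit(X_train, y_train, sensitive_features=A_train)

# Predictions come from the mitigated (randomized) ensemble of classifiers.
y_pred_mitigated = mitigator.predict(X_test)
```

You would then compare accuracy and disparity of `y_pred_mitigated` against the original model, for example with a `MetricFrame`, before choosing which model to keep.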
## Next steps -- Learn how to use the different components by checking out the Fairlearn's [GitHub](https://github.com/fairlearn/fairlearn/), [user guide](https://fairlearn.github.io/main/user_guide/https://docsupdatetracker.net/index.html), [examples](https://fairlearn.github.io/main/auto_examples/https://docsupdatetracker.net/index.html), and [sample notebooks](https://github.com/fairlearn/fairlearn/tree/master/notebooks).-- Learn [how to](how-to-machine-learning-fairness-aml.md) enable fairness assessment of machine learning models in Azure Machine Learning.-- See the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/contrib/fairness) for additional fairness assessment scenarios in Azure Machine Learning.
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Explore the [supported model overview and fairness assessment visualizations](how-to-responsible-ai-dashboard.md#model-overview) of the Responsible AI dashboard.
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
+- Learn how to use the different components by checking out the [Fairlearn's GitHub](https://github.com/fairlearn/fairlearn/), [user guide](https://fairlearn.github.io/main/user_guide/https://docsupdatetracker.net/index.html), [examples](https://fairlearn.github.io/main/auto_examples/https://docsupdatetracker.net/index.html), and [sample notebooks](https://github.com/fairlearn/fairlearn/tree/master/notebooks).
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
Once the teams get familiar with pipelines and want to do more machine learning
Once a team has built a collection of machine learning pipelines and reusable components, they can start building new pipelines by cloning previous pipelines or tying existing reusable components together. At this stage, the team's overall productivity improves significantly.
-Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with python, we recommend writing pipeline using the [Azure ML SDK](how-to-create-machine-learning-pipelines.md). For users who prefer to use UI, they could use the [designer to build pipeline by using registered components](how-to-create-component-pipelines-ui.md).
+Azure Machine Learning offers different methods to build a pipeline. For users who are familiar with DevOps practices, we recommend using the [CLI](how-to-create-component-pipelines-cli.md). For data scientists who are familiar with Python, we recommend writing pipelines using the [Azure ML SDK v1](v1/how-to-create-machine-learning-pipelines.md). Users who prefer a UI can use the [designer to build pipelines by using registered components](how-to-create-component-pipelines-ui.md).
<a name="compare"></a> ## Which Azure pipeline technology should I use?
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-management-and-deployment.md
For more information, see [Enable model data collection](v1/how-to-enable-data-c
## Retrain your model on new data
-Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](how-to-monitor-datasets.md), model performance can degrade because of:
+Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](v1/how-to-monitor-datasets.md), model performance can degrade because of:
- Changes to a particular sensor. - Natural data changes such as seasonal effects.
For more information on using Azure Pipelines with Machine Learning, see:
* [Machine Learning MLOps](https://aka.ms/mlops) repository * [Machine Learning MLOpsPython](https://github.com/Microsoft/MLOpspython) repository
-You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](how-to-cicd-data-ingestion.md).
+You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](v1/how-to-cicd-data-ingestion.md).
## Next steps
machine-learning Concept Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ai-dashboard.md
Previously updated : 05/10/2022 Last updated : 08/17/2022 # Assess AI systems and make data-driven decisions with Azure Machine Learning Responsible AI dashboard (preview)
-Responsible AI requires rigorous engineering. Rigorous engineering, however, can be tedious, manual, and time-consuming without the right tooling and infrastructure. Data scientists need tools to implement responsible AI in practice effectively and efficiently.
+Implementing Responsible AI in practice requires rigorous engineering. Rigorous engineering, however, can be tedious, manual, and time-consuming without the right tooling and infrastructure. Machine learning professionals need tools to implement responsible AI in practice effectively and efficiently.
-The Responsible AI dashboard provides a single interface that makes responsible machine learning engineering efficient and interoperable across the larger model development and assessment lifecycle. The tool brings together several mature Responsible AI tools in the areas of model statistics assessment, data exploration, [machine learning interpretability](https://interpret.ml/), [unfairness assessment](http://fairlearn.org/), [error analysis](https://erroranalysis.ai/), [causal inference](https://github.com/microsoft/EconML), and [counterfactual analysis](https://github.com/interpretml/DiCE) for a holistic assessment and debugging of models and making informed business decisions. With a single command or simple UI wizard, the dashboard addresses the fragmentation issues of multiple tools and enables you to:
+The Responsible AI dashboard provides a single pane of glass that brings together several mature Responsible AI tools in the areas of model [performance and fairness assessment](http://fairlearn.org/), data exploration, [machine learning interpretability](https://interpret.ml/), [error analysis](https://erroranalysis.ai/), [counterfactual analysis and perturbations](https://github.com/interpretml/DiCE), and [causal inference](https://github.com/microsoft/EconML) for a holistic assessment and debugging of models and making informed data-driven decisions. Having access to all of these tools in one interface empowers you to:
-1. Evaluate and debug your machine learning models by identifying model errors, diagnosing why those errors are happening, and informing your mitigation steps.
-2. Boost your data-driven decision-making abilities by addressing questions such as *"what is the minimum change the end user could apply to their features to get a different outcome from the model?" and/or "what is the causal effect of reducing red meat consumption on diabetes progression?"*
-3. Export Responsible AI metadata of your data and models for sharing offline with product and compliance stakeholders.
+1. Evaluate and debug your machine learning models by identifying model errors and fairness issues, diagnosing why those errors are happening, and informing your mitigation steps.
+2. Boost your data-driven decision-making abilities by addressing questions such as *"what is the minimum change the end user could apply to their features to get a different outcome from the model?" and/or "what is the causal effect of reducing or increasing a feature (for example, red meat consumption) on a real-world outcome (for example, diabetes progression)?"*
+
+The dashboard can be customized to include only the subset of tools that are relevant to your use case.
+
+The Responsible AI dashboard is also accompanied by a [PDF scorecard](how-to-responsible-ai-scorecard.md), which enables you to export Responsible AI metadata and insights about your data and models for offline sharing with product and compliance stakeholders.
## Responsible AI dashboard components
-The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools, integrating them with the Azure Machine Learning CLIv2, Python SDKv2 and studio. These tools include:
+The Responsible AI dashboard brings together, in a comprehensive view, various new and pre-existing tools, integrating them with the Azure Machine Learning [CLIv2, Python SDKv2](concept-v2.md) and [studio](overview-what-is-machine-learning-studio.md). These tools include:
1. [Data explorer](concept-data-analysis.md) to understand and explore your dataset distributions and statistics. 2. [Model overview and fairness assessment](concept-fairness-ml.md) to evaluate the performance of your model and evaluate your model's group fairness issues (how diverse groups of people are impacted by your model's predictions).
-3. [Error Analysis](concept-error-analysis.md) to view and understand the error distributions of your model in a dataset via a decision tree map or a heat map visualization.
-4. [Model interpretability](how-to-machine-learning-interpretability.md) (aggregate/individual feature importance values) to understand you model's predictions and how those overall and individual predictions are made.
-5. [Counterfactual What-If's](concept-counterfactual-analysis.md) to observe how feature perturbations would impact your model predictions and provide you with the closest datapoints with opposing or different model predictions.
-6. [Causal analysis](concept-causal-inference.md) to use historical data to view the causal effects of treatment features on the real-world outcome.
+3. [Error Analysis](concept-error-analysis.md) to view and understand how errors are distributed in your dataset.
+4. [Model interpretability](how-to-machine-learning-interpretability.md) (aggregate/individual feature importance values) to understand your model's predictions and how those overall and individual predictions are made.
+5. [Counterfactual What-If](concept-counterfactual-analysis.md) to observe how feature perturbations would impact your model predictions while providing you with the closest data points with opposing or different model predictions.
+6. [Causal analysis](concept-causal-inference.md) to use historical data to view the causal effects of treatment features on real-world outcomes.
-Together, these components will enable you to debug machine learning models, while informing your data-driven and model-driven decisions.
+Together, these components will enable you to debug machine learning models, while informing your data-driven and model-driven business decisions. The following diagram and two sections explain how these tools could be incorporated into your AI lifecycle to achieve improved models and solid data insights.
:::image type="content" source="./media/concept-responsible-ai-dashboard/dashboard.png" alt-text=" Diagram of Responsible A I dashboard components for model debugging and responsible decision making.":::
Together, these components will enable you to debug machine learning models, whi
Assessing and debugging machine learning models is critical for model reliability, interpretability, fairness, and compliance. It helps determine how and why AI systems behave the way they do. You can then use this knowledge to improve model performance. Conceptually, model debugging consists of three stages: -- **Identify**, to understand and recognize model errors by addressing the following questions:
+- **Identify**, to understand and recognize model errors and/or fairness issues by addressing the following questions:
- *What kinds of errors does my model have?* - *In what areas are errors most prevalent?* - **Diagnose**, to explore the reasons behind the identified errors by addressing:
Below are the components of the Responsible AI dashboard supporting model debugg
| Stage | Component | Description | |-|--|-|
-| Identify | Error Analysis | The Error Analysis component provides machine learning practitioners with a deeper understanding of model failure distribution and assists you with quickly identifying erroneous cohorts of data. <br><br> The capabilities of this component in the dashboard are founded by [Error Analysis](https://erroranalysis.ai/) capabilities on generating model error profiles.|
-| Identify | Fairness Analysis | The Fairness component assesses how different groups, defined in terms of sensitive attributes such as sex, race, age, etc., are affected by your model predictions and how the observed disparities may be mitigated. It evaluates the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics across different sensitive subgroups. The capabilities of this component in the dashboard are founded by [Fairlearn](https://fairlearn.org/) capabilities on generating model fairness assessments. |
-| Identify | Model Overview | The Model Statistics component aggregates various model assessment metrics, showing a high-level view of model prediction distribution for better investigation of its performance. It also enables group fairness assessment, highlighting the breakdown of model performance across different sensitive groups. |
+| Identify | Error Analysis | The Error Analysis component provides machine learning practitioners with a deeper understanding of model failure distribution and assists you with quickly identifying erroneous cohorts of data. <br><br> The capabilities of this component in the dashboard are founded by the [Error Analysis](https://erroranalysis.ai/) package.|
+| Identify | Fairness Analysis | The Fairness component assesses how different groups, defined in terms of sensitive attributes such as sex, race, age, etc., are affected by your model predictions and how the observed disparities may be mitigated. It evaluates the performance of your model by exploring the distribution of your prediction values and the values of your model performance metrics across different sensitive subgroups. The capabilities of this component in the dashboard are founded by the [Fairlearn](https://fairlearn.org/) package. |
+| Identify | Model Overview | The Model Overview component aggregates various model assessment metrics, showing a high-level view of model prediction distribution for better investigation of its performance. It also enables group fairness assessment, highlighting the breakdown of model performance across different sensitive groups. |
| Diagnose | Data Explorer | The Data Explorer component helps to visualize datasets based on predicted and actual outcomes, error groups, and specific features. This helps to identify issues of over- and underrepresentation and to see how data is clustered in the dataset. |
-| Diagnose | Model Interpretability | The Interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: global explanations (for example, which features affect the overall behavior of a loan allocation model) and local explanations (for example, why an applicant's loan application was approved or rejected). <br><br> The capabilities of this component in the dashboard are founded by [InterpretML](https://interpret.ml/) capabilities on generating model explanations. |
-| Diagnose | Counterfactual Analysis and What-If| The Counterfactual Analysis and what-if component consists of two functionalities for better error diagnosis: <br> - Generating a set of examples with minimal changes to a given point such that they change the model's prediction (showing the closest datapoints with opposite model precisions). <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard are founded by the [DiCE](https://github.com/interpretml/DiCE) package, which provides this information by showing feature-perturbed versions of the same datapoint, which would have received a different model prediction (for example, Taylor would have received the loan approval prediction if their yearly income was higher by $10,000). |
+| Diagnose | Model Interpretability | The Interpretability component generates human-understandable explanations of the predictions of a machine learning model. It provides multiple views into a model's behavior: global explanations (for example, which features affect the overall behavior of a loan allocation model) and local explanations (for example, why an applicant's loan application was approved or rejected). <br><br> The capabilities of this component in the dashboard are founded by the [InterpretML](https://interpret.ml/) package. |
+| Diagnose | Counterfactual Analysis and What-If | The Counterfactual Analysis and what-if component consists of two functionalities for better error diagnosis: <br> - Generating a set of examples with minimal changes to a given point such that those changes alter the model's prediction (showing the closest data points with opposite model predictions). <br> - Enabling interactive and custom what-if perturbations for individual data points to understand how the model reacts to feature changes. <br> <br> The capabilities of this component in the dashboard are founded by the [DiCE](https://github.com/interpretml/DiCE) package. |
-Mitigation steps are available via stand-alone tools such as Fairlearn (for unfairness mitigation).
+Mitigation steps are available via standalone tools such as [Fairlearn](https://fairlearn.org/) (see [unfairness mitigation algorithms](https://fairlearn.org/v0.7.0/user_guide/mitigation.html)).
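For illustration, here's a minimal, hedged sketch of what such a mitigation step could look like with the open-source Fairlearn package on a tiny synthetic dataset (the features, labels, and sensitive attribute below are invented for the example and aren't part of the dashboard itself):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

# Tiny synthetic dataset: two numeric features plus a binary sensitive attribute.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
sensitive = rng.integers(0, 2, size=500)  # e.g., a demographic group label
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Train a mitigated model that constrains demographic parity across groups.
mitigator = ExponentiatedGradient(
    LogisticRegression(max_iter=1000), constraints=DemographicParity()
)
mitigator.fit(X, y, sensitive_features=sensitive)
y_pred = mitigator.predict(X)

# Compare accuracy and selection rate per group before acting on the results.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y, y_pred=y_pred, sensitive_features=sensitive,
)
print(frame.by_group)
```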
### Responsible decision-making
Decision-making is one of the biggest promises of machine learning. The Responsible AI dashboard helps you inform your model-driven and data-driven business decisions.
-- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, *"how would a medicine impact a patient's blood pressure?"*. Such insights are provided through the "Causal Inference" component of the dashboard.
-- Model-driven insights, to answer end-users' questions such as *"what can I do to get a different outcome from your AI next time?"* to inform their actions. Such insights are provided to data scientists through the "Counterfactual Analysis and What-If" component described above.
+- Data-driven insights to further understand causal treatment effects on an outcome, using historic data only. For example, *"how would a medicine impact a patient's blood pressure?"* or *"how would providing promotional values to certain customers impact revenue?"*. Such insights are provided through the [Causal inference](concept-causal-inference.md) component of the dashboard; a brief code sketch of this kind of estimation follows the diagram below.
+- Model-driven insights, to answer end users' questions such as *"what can I do to get a different outcome from your AI next time?"* to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component described above.
:::image type="content" source="./media/concept-responsible-ai-dashboard/decision-making.png" alt-text="Responsible A I dashboard capabilities for responsible business decision making.":::
-Exploratory data analysis, counterfactual analysis, and causal inference capabilities can assist you make informed model-driven and data-driven decisions responsibly.
+Exploratory data analysis, counterfactual analysis, and causal inference capabilities can help you make informed model-driven and data-driven decisions responsibly.
-Below are the components of the Responsible AI dashboard supporting responsible decision making:
+Below are the components of the Responsible AI dashboard supporting responsible decision-making:
- **Data Explorer**
  - The component could be reused here to understand data distributions and identify over- and underrepresentation. Data exploration is a critical part of decision making, because it isn't feasible to make informed decisions about a cohort that is underrepresented within the data.
- **Causal Inference**
  - The Causal Inference component estimates how a real-world outcome changes in the presence of an intervention. It also helps to construct promising interventions by simulating different feature responses to various interventions and creating rules to determine which population cohorts would benefit from a particular intervention. Collectively, these functionalities allow you to apply new policies and effect real-world change.
- - The capabilities of this component are founded by [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
+ - The capabilities of this component are founded by the [EconML](https://github.com/Microsoft/EconML) package, which estimates heterogeneous treatment effects from observational data via machine learning.
- **Counterfactual Analysis**
- - The Counterfactual Analysis component described above could be reused here to help data scientists generate a set of similar datapoints with opposite prediction outcomes (showing minimum changes applied to a datapoint's features leading to opposite model predictions). Providing counterfactual examples to the end users inform their perspective, educating them on how they can take action to get the desired outcome from the model in the future.
- - The capabilities of this component are founded by [DiCE](https://github.com/interpretml/DiCE) package.
+  - The Counterfactual Analysis component described above could be reused here to help data scientists generate the minimum changes to a data point's features that lead to the opposite model prediction (for example, Taylor would have gotten the loan approval from the AI if they earned $10,000 more in annual income and had two fewer credit cards open). Providing such information to the end users informs their perspective, educating them on how they can take action to get the desired outcome from the AI in the future. A brief code sketch follows this list.
+ - The capabilities of this component are founded by the [DiCE](https://github.com/interpretml/DiCE) package.
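As a rough sketch of the counterfactual functionality described in the list above, the open-source DiCE package that the component is founded on can also be used directly; the toy loan-style dataset and column names below are invented for the example:

```python
import pandas as pd
import dice_ml
from sklearn.ensemble import RandomForestClassifier

# Tiny illustrative loan-style dataset; column names are made up for the example.
df = pd.DataFrame({
    "income": [40, 55, 30, 80, 65, 45, 90, 35, 70, 50],
    "credit_cards": [5, 2, 6, 1, 3, 4, 1, 6, 2, 4],
    "approved": [0, 1, 0, 1, 1, 0, 1, 0, 1, 0],
})
model = RandomForestClassifier(random_state=0).fit(df[["income", "credit_cards"]], df["approved"])

data = dice_ml.Data(dataframe=df, continuous_features=["income", "credit_cards"], outcome_name="approved")
wrapped = dice_ml.Model(model=model, backend="sklearn")
explainer = dice_ml.Dice(data, wrapped, method="random")

# Ask for feature-perturbed versions of a rejected applicant that flip the prediction.
query = df[["income", "credit_cards"]].iloc[[0]]
counterfactuals = explainer.generate_counterfactuals(query, total_CFs=2, desired_class="opposite")
counterfactuals.visualize_as_dataframe()
```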
## Why should you use the Responsible AI dashboard?
-While Responsible AI is about rigorous engineering, its operationalization is tedious, manual, and time-consuming without the right tooling and infrastructure. There are minimal instructions, and few disjointed frameworks and tools available to empower data scientists to explore and evaluate their models holistically.
+### Challenges with the status quo
+
+While progress has been made on individual tools for specific areas of Responsible AI, data scientists often need to use various tools (for example, performance assessment, model interpretability, and fairness assessment) together to holistically evaluate their models and data. For example, if a data scientist discovers a fairness issue with one tool, they then need to jump to a different tool to understand what data or model factors lie at the root of the issue before taking any mitigation steps. This highly challenging process is further complicated for the following reasons:
+
+- First, there's no central location to discover and learn about the tools, extending the time it takes to research and learn new techniques.
+- Second, the different tools don't communicate with each other easily. Data scientists must wrangle the datasets, models, and other metadata as they pass them between the different tools.
+- Third, the metrics and visualizations aren't easily comparable, and the results are hard to share.
+
+### How the Responsible AI dashboard challenges the status quo
+
+The Responsible AI dashboard is the first comprehensive tool that brings these fragmented experiences together under one roof, enabling you to seamlessly onboard to a single customizable framework for model debugging and data-driven decision making.
+
+Using the Responsible AI dashboard, you can create dataset cohorts (subgroups of data), pass those cohorts to all of the supported components (for example, model interpretability, data explorer, model performance, etc.) and observe your model health for your identified cohorts. You can further compare insights from all supported components across a variety of pre-built cohorts to perform disaggregated analysis and find the blind spots of your model.
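As a hedged sketch of that cohort-and-components flow, the open-source `responsibleai` and `raiwidgets` packages that back the dashboard can assemble the same insights locally; the tiny dataset and model below are illustrative, and in Azure Machine Learning you'd typically generate the dashboard through CLI v2/SDK v2 or the studio UI instead:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

# Small synthetic loan-style dataset; feature names and values are made up.
rng = np.random.default_rng(0)
frame = pd.DataFrame({
    "age": rng.integers(20, 70, size=200),
    "income": rng.normal(60, 15, size=200).round(1),
})
frame["approved"] = ((frame["income"] + rng.normal(scale=10, size=200)) > 60).astype(int)
train, test = frame.iloc[:150], frame.iloc[150:]

model = RandomForestClassifier(random_state=0).fit(train[["age", "income"]], train["approved"])

# Assemble the insights that back the dashboard components, then compute them.
rai_insights = RAIInsights(model, train, test, target_column="approved", task_type="classification")
rai_insights.explainer.add()        # model interpretability
rai_insights.error_analysis.add()   # error analysis
rai_insights.compute()

# Launch the dashboard locally; cohorts can then be defined and compared in the UI.
ResponsibleAIDashboard(rai_insights)
```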
-While progress has been made on individual tools for specific areas of Responsible AI, data scientists often need to use various such tools together, to holistically evaluate their models and data. For example, if a data scientist discovers a fairness issue with one tool, they then need to jump to a different tool to understand what data or model factors lie at the root of the issue before taking any steps on mitigation. This highly challenging process is further complicated by the following reasons. First, there's no central location to discover and learn about the tools, extending the time it takes to research and learn new techniques. Second, the different tools don't exactly communicate with each other. Data scientists must wrangle the datasets, models, and other metadata as they pass them between the different tools. Third, the metrics and visualizations aren't easily comparable, and the results are hard to share.
+Whenever you're ready to share those insights with other stakeholders, you can extract them easily via the [Responsible AI PDF scorecard](how-to-responsible-ai-scorecard.md) and attach the PDF report to your compliance reports or share it with colleagues to build trust and get their approval.
-The Responsible AI dashboard is the first comprehensive tool, bringing together fragmented experiences under one roof, enabling you to seamlessly onboard to a single customizable framework for model debugging and data-driven decision making.
-## How to customize the Responsible AI dashboard
+## How to customize the Responsible AI dashboard?
-The Responsible AI dashboard's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs. Need some inspiration? Here are some examples of how Toolbox components can be put together to analyze scenarios in diverse ways:
+The Responsible AI dashboard's strength lies in its customizability. It empowers users to design tailored, end-to-end model debugging and decision-making workflows that address their particular needs. Need some inspiration? Here are some examples of how its components can be put together to analyze scenarios in diverse ways:
| Responsible AI Dashboard Flow | Use Case |
|-|-|
| Model Overview -> Data Explorer | To understand the root cause of errors and fairness issues introduced via data imbalances or lack of representation of a particular data cohort |
| Model Overview -> Interpretability | To diagnose model errors through understanding how the model has made its predictions |
| Data Explorer -> Causal Inference | To distinguish between correlations and causations in the data or decide the best treatments to apply to see a positive outcome |
-| Interpretability -> Causal Inference | To learn whether the factors that model has used for decision making has any causal effect on the real-world outcome. |
-| Data Explorer -> Counterfactuals Analysis and What-If | To address customer questions about what they can do next time to get a different outcome from an AI. |
+| Interpretability -> Causal Inference | To learn whether the factors that the model has used for prediction making have any causal effect on the real-world outcome|
+| Data Explorer -> Counterfactuals Analysis and What-If | To address customer questions about what they can do next time to get a different outcome from an AI|
## Who should use the Responsible AI dashboard?
The Responsible AI dashboard, and its corresponding [Responsible AI scorecard](how-to-responsible-ai-scorecard.md), could be incorporated by the following personas to build trust with AI systems.
-- Machine learning model engineers and data scientists who are interested in debugging and improving their machine learning models pre-deployment.
-- Machine learning model engineers and data scientists who are interested in sharing their model health records with product manager and business stakeholders to build trust and receive deployment permissions.
+- Machine learning professionals and data scientists who are interested in debugging and improving their machine learning models pre-deployment.
+- Machine learning professionals and data scientists who are interested in sharing their model health records with product managers and business stakeholders to build trust and receive deployment permissions.
- Product managers and business stakeholders who are reviewing machine learning models pre-deployment.
- Risk officers who are reviewing machine learning models for understanding fairness and reliability issues.
-- Providers of solution to end users who would like to explain model decisions to the end users.
-- Business stakeholders who need to review machine learning models with regulators and auditors.
+- Providers of solutions to end users who would like to explain model decisions to the end users and/or help them improve the outcome next time.
+- Those professionals in heavily regulated spaces who need to review machine learning models with regulators and auditors.
-## Supported machine learning models and scenarios
+## Supported scenarios and limitations
-We support scikit-learn models for counterfactual generation and explanations. The scikit-learn models should implement `predict()/predict_proba()` methods or the model should be wrapped within a class, which implements `predict()/predict_proba()` methods.
+- The Responsible AI dashboard currently supports regression and classification (binary and multi-class) models trained on tabular structured data.
+- The Responsible AI dashboard currently supports MLflow models that are registered in Azure Machine Learning with an sklearn flavor only. The scikit-learn models should implement `predict()/predict_proba()` methods, or the model should be wrapped within a class that implements `predict()/predict_proba()` methods (a minimal wrapper sketch follows this list). The models must be loadable in the component environment and must be pickleable.
+- The Responsible AI dashboard currently visualizes up to 5,000 of your data points in the dashboard UI. You should downsample your dataset to 5,000 data points or fewer before passing it to the dashboard.
+- The dataset inputs to the Responsible AI dashboard must be pandas DataFrames in Parquet format. NumPy and SciPy sparse data aren't currently supported.
+- The Responsible AI dashboard currently supports numeric or categorical features. For categorical features, the user currently has to explicitly specify the feature names.
+- The Responsible AI dashboard currently doesn't support datasets with more than 10,000 columns.
-Currently, we support counterfactual generation and explanations for tabular datasets having numerical and categorical data types. Counterfactual generation and explanations are supported for free formed text data, images and historical data.
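Here is a minimal wrapper sketch for the `predict()`/`predict_proba()` requirement mentioned in the list above; the inner model and its `score_batch` method are hypothetical stand-ins for whatever scoring call your model actually exposes, and the wrapper (plus the inner model) must remain picklable:

```python
import numpy as np

class PredictProbaWrapper:
    """Adapts a model without predict()/predict_proba() to a scikit-learn-like interface."""

    def __init__(self, inner_model):
        self.inner_model = inner_model

    def predict_proba(self, X):
        # Delegate to whatever scoring call the wrapped model exposes (hypothetical here)
        # and return an (n_samples, n_classes) array of class probabilities.
        scores = np.asarray(self.inner_model.score_batch(X))  # hypothetical method
        return np.column_stack([1 - scores, scores])

    def predict(self, X):
        # Derive hard labels from the probabilities at a 0.5 threshold.
        return (self.predict_proba(X)[:, 1] >= 0.5).astype(int)
```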
## Next steps
-- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md)
-- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md)) based on the insights observed in the Responsible AI dashboard.
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
machine-learning Concept Responsible Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-responsible-ml.md
Previously updated : 05/06/2022 Last updated : 08/05/2022
-#Customer intent: As a data scientist, I want to know learn what responsible AI is and how I can use it in Azure Machine Learning.
+#Customer intent: As a data scientist, I want to learn what responsible AI is and how I can use it in Azure Machine Learning.
-# What is responsible AI? (preview)
+# What is Responsible AI? (preview)
[!INCLUDE [dev v1](../../includes/machine-learning-dev-v1.md)] [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-The societal implications of AI and the responsibility of organizations to anticipate and mitigate unintended consequences of AI technology are significant. Organizations are finding the need to create internal policies, practices, and tools to guide their AI efforts, whether they're deploying third-party AI solutions or developing their own. At Microsoft, we've recognized six principles that we believe should guide AI development and use: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For us, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in the products and services we use every day. Azure Machine Learning currently supports various tools for these principles, making it seamless for ML developers and data scientists to implement Responsible AI in practice.
+Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy and ethical manner. AI systems are the product of many different decisions made by those who develop and deploy them. From system purpose to how people interact with AI systems, we need to proactively guide these decisions toward more beneficial and equitable outcomes. That means keeping people and their goals at the center of system design decisions and respecting enduring values like fairness, reliability, and transparency.
+At Microsoft, we've developed a [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf), a framework to guide how we build AI systems, according to our six principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. For us, these principles are the cornerstone of a responsible and trustworthy approach to AI, especially as intelligent technology becomes more prevalent in the products and services we use every day.
++
+This article explains the six principles and demonstrates how Azure Machine Learning supports tools that make it seamless for ML developers and data scientists to implement and operationalize them in practice.
+ ## Fairness and inclusiveness
To build trust, it's critical that AI systems operate reliably, safely, and consistently.
When AI systems are used to help inform decisions that have tremendous impacts on people's lives, it's critical that people understand how those decisions were made. For example, a bank might use an AI system to decide whether a person is creditworthy, or a company might use an AI system to determine the most qualified candidates to hire.
-A crucial part of transparency is what we refer to as interpretability, or the useful explanation of the behavior of AI systems and their components. Improving interpretability requires that stakeholders comprehend how and why they function so that they can identify potential performance issues, safety and privacy concerns, fairness issues, exclusionary practices, or unintended outcomes.
+A crucial part of transparency is what we refer to as interpretability, or the useful explanation of the behavior of AI systems and their components. Improving interpretability requires that stakeholders comprehend how and why AI systems function the way they do, so that they can identify potential performance issues, fairness issues, exclusionary practices, or unintended outcomes.
-**Transparency in Azure Machine Learning**: Azure Machine Learning's [Model Interpretability](how-to-machine-learning-interpretability.md) and [Counterfactual What-If](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enables data scientists and ML developers to generate human-understandable descriptions of the predictions of a model. It provides multiple views into a model's behavior: global explanations (for example, what features affect the overall behavior of a loan allocation model) and local explanations (for example, why a customer's loan application was approved or rejected). One can also observe model explanations for a selected cohort as a subgroup of data points. Moreover, the Counterfactual What-If component enables understanding and debugging a machine learning model in terms of how it reacts to input (feature) changes. Azure Machine Learning also supports a Responsible AI scorecard, a customizable report which machine learning developers can easily configure, download, and share with their technical and non-technical stakeholders to educate them about data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of machine learning models.
+**Transparency in Azure Machine Learning**: Azure Machine Learning's [Model Interpretability](how-to-machine-learning-interpretability.md) and [Counterfactual What-If](./concept-counterfactual-analysis.md) components of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) enable data scientists and ML developers to generate human-understandable descriptions of the predictions of a model. The Model Interpretability component provides multiple views into their model's behavior: global explanations (for example, what features affect the overall behavior of a loan allocation model?) and local explanations (for example, why a customer's loan application was approved or rejected?). One can also observe model explanations for a selected cohort of data points (for example, what features affect the overall behavior of a loan allocation model for low-income applicants?). Moreover, the Counterfactual What-If component enables understanding and debugging a machine learning model in terms of how it reacts to feature changes and perturbations. Azure Machine Learning also supports a [Responsible AI scorecard](./how-to-responsible-ai-scorecard.md), a customizable PDF report that machine learning developers can easily configure, generate, download, and share with their technical and non-technical stakeholders to educate them about their datasets and model health, achieve compliance, and build trust. This scorecard could also be used in audit reviews to uncover the characteristics of machine learning models.
-## Privacy and Security
+## Privacy and Security
-As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. With AI, privacy and data security issues require especially close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data and mandate that consumers have appropriate controls to choose how their data is used.
+As AI becomes more prevalent, protecting privacy and securing important personal and business information is becoming more critical and complex. With AI, privacy and data security issues require especially close attention because access to data is essential for AI systems to make accurate and informed predictions and decisions about people. AI systems must comply with privacy laws that require transparency about the collection, use, and storage of data and mandate that consumers have appropriate controls to choose how their data is used.
-**Privacy and Security in Azure Machine Learning**: Implementing differentially private systems is difficult. [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core) is an open-source project (co-developed by Microsoft) that contains different components for building global differentially private systems. To learn more about differential privacy and the SmartNoise project, see the preserve [data privacy by using differential privacy and SmartNoise article](concept-differential-privacy.md). Azure Machine Learning is also enabling administrators, DevOps, and MLOps to [create a secure configuration that is compliant](concept-enterprise-security.md) with your companies policies. With Azure Machine Learning and the Azure platform, you can:
+**Privacy and Security in Azure Machine Learning**: Azure Machine Learning enables administrators, DevOps, and MLOps developers to [create a secure configuration that is compliant](concept-enterprise-security.md) with their company's policies. With Azure Machine Learning and the Azure platform, users can:
- Restrict access to resources and operations by user account or groups
- Restrict incoming and outgoing network communications
- Scan for vulnerabilities
- Apply and audit configuration policies
-Besides SmartNoise, Microsoft released [Counterfit](https://github.com/Azure/counterfit/), an open-source project that comprises a command-line tool and generic automation layer to allow developers to simulate cyber-attacks against AI systems. Anyone can download the tool and deploy it through Azure Shell, to run in-browser, or locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or in the edge. The tool is agnostic to AI models and supports various data types, including text, images, or generic input.
+Microsoft has also created two open source packages that could enable further implementation of privacy and security principles:
+
+- SmartNoise: Differential privacy is a set of systems and practices that help keep the data of individuals safe and private. In machine learning solutions, differential privacy may be required for regulatory compliance. Implementing differentially private systems, however, is difficult. [SmartNoise](https://github.com/opendifferentialprivacy/smartnoise-core) is an open-source project (co-developed by Microsoft) that contains different components for building global differentially private systems.
++
+- Counterfit: [Counterfit](https://github.com/Azure/counterfit/) is an open-source project that comprises a command-line tool and generic automation layer to allow developers to simulate cyber-attacks against AI systems. Anyone can download the tool and deploy it through Azure Shell, to run in-browser, or locally in an Anaconda Python environment. It can assess AI models hosted in various cloud environments, on-premises, or at the edge. The tool is agnostic to AI models and supports various data types, including text, images, or generic input.
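As a toy illustration of the differential privacy idea behind SmartNoise mentioned above (this is not the SmartNoise API, just the underlying intuition of noise calibrated to sensitivity and epsilon):

```python
import numpy as np

def private_count(values, epsilon=1.0, sensitivity=1.0):
    """Release a count with Laplace noise; smaller epsilon means more noise and more privacy."""
    true_count = len(values)
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 29, 41, 56, 38]              # pretend these are individuals' records
print(private_count(ages, epsilon=0.5))  # noisy count protects any single individual
```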
## Accountability
The people who design and deploy AI systems must be accountable for how their systems operate. Organizations should draw upon industry standards to develop accountability norms. These norms can ensure that AI systems aren't the final authority on any decision that impacts people's lives and that humans maintain meaningful control over otherwise highly autonomous AI systems.
-**Accountability in Azure Machine Learning**: Azure Machine Learning's [Machine Learning Operations (MLOps)](concept-model-management-and-deployment.md) is based on DevOps principles and practices that increase the efficiency of workflows. It specifically supports quality assurance and end-to-end lineage tracking to capture the governance data for the end-to-end ML lifecycle. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
+**Accountability in Azure Machine Learning**: Azure Machine Learning's [Machine Learning Operations (MLOps)](concept-model-management-and-deployment.md) is based on DevOps principles and practices that increase the efficiency of AI workflows. Azure Machine Learning provides the following MLOps capabilities for better accountability of your AI systems:
+
+- Register, package, and deploy models from anywhere. You can also track the associated metadata required to use the model.
+- Capture the governance data for the end-to-end ML lifecycle. The logged lineage information can include who is publishing models, why changes were made, and when models were deployed or used in production.
+- Notify and alert on events in the ML lifecycle. For example, experiment completion, model registration, model deployment, and data drift detection.
+- Monitor ML applications for operational and ML-related issues. Compare model inputs between training and inference, explore model-specific metrics, and provide monitoring and alerts on your ML infrastructure.
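For example, a hedged SDK v2 sketch of the first capability above, registering a model so that its metadata and lineage are tracked; the workspace identifiers, model path, and model name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Model
from azure.ai.ml.constants import AssetTypes

# Connect to the workspace; the identifiers below are placeholders.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Register a model asset so its version, metadata, and lineage are captured.
model = Model(
    path="./model",                # local folder or job output containing the model
    type=AssetTypes.MLFLOW_MODEL,  # or AssetTypes.CUSTOM_MODEL
    name="loan-approval-model",
    description="Model registered for lineage tracking and audit.",
)
registered = ml_client.models.create_or_update(model)
print(registered.name, registered.version)
```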
-Azure Machine Learning's [Responsible AI scorecard](./how-to-responsible-ai-scorecard.md) creates accountability by enabling cross-stakeholders communications and by empowering machine learning developers to easily configure, download, and share their model health insights with their technical and non-technical stakeholders to educate them about data and model health and compliance and build trust.
+Besides the MLOps capabilities, Azure Machine Learning's [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) creates accountability by enabling cross-stakeholder communication and by empowering machine learning developers to easily configure, download, and share their model health insights with their technical and non-technical stakeholders to educate them about their AI's data and model health, and build trust.
The ML platform also enables decision-making by informing model-driven and data-driven business decisions:
-- Data-driven insights to further understand heterogeneous treatment effects on an outcome, using historic data only. For example, "how would a medicine impact a patient's blood pressure?". Such insights are provided through the [Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Model-driven insights, to answer end-users' questions such as "what can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Data-driven insights to help stakeholders understand causal treatment effects on an outcome, using historic data only. For example, "how would a medicine impact a patient's blood pressure?". Such insights are provided through the [Causal Inference](concept-causal-inference.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Model-driven insights, to answer end users' questions such as "what can I do to get a different outcome from your AI next time?" to inform their actions. Such insights are provided to data scientists through the [Counterfactual What-If](concept-counterfactual-analysis.md) component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
## Next steps
-- For more information on how to implement Responsible AI in Azure Machine Learning, see [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
-- Learn more about the [ABOUT ML](https://www.partnershiponai.org/about-ml/) set of guidelines for machine learning system documentation.
+- For more information on how to implement Responsible AI in Azure Machine Learning, see [Responsible AI dashboard](concept-responsible-ai-dashboard.md).
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [Azure Machine Learning studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in your Responsible AI dashboard.
+- Learn about Microsoft's [Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf), a framework to guide how to build AI systems, according to Microsoft's six principles of fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.
machine-learning Concept Sourcing Human Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-sourcing-human-data.md
For more information on how to work with your data:
- [Secure data access in Azure Machine Learning](concept-data.md)
- [Data ingestion options for Azure Machine Learning workflows](concept-data-ingestion.md)
- [Optimize data processing with Azure Machine Learning](concept-optimize-data-processing.md)
-- [Use differential privacy in Azure Machine Learning](how-to-differential-privacy.md)
+- [Use differential privacy with Azure Machine Learning SDK](v1/how-to-differential-privacy.md)
Follow these how-to guides to work with your data after you've collected it:
machine-learning Concept Train Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-train-machine-learning-model.md
Define the iterations, hyperparameter settings, featurization, and other setting
Machine learning pipelines can use the previously mentioned training methods. Pipelines are more about creating a workflow, so they encompass more than just the training of models. In a pipeline, you can train a model using automated machine learning or run configurations.
* [What are ML pipelines in Azure Machine Learning?](concept-ml-pipelines.md)
-* [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md)
+* [Create and run machine learning pipelines with Azure Machine Learning SDK](v1/how-to-create-machine-learning-pipelines.md)
* [Tutorial: Use Azure Machine Learning Pipelines for batch scoring](tutorial-pipeline-batch-scoring-classification.md)
* [Examples: Jupyter Notebook examples for machine learning pipelines](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines)
* [Examples: Pipeline with automated machine learning](https://aka.ms/pl-automl)
The Azure training lifecycle consists of:
1. Saving logs, model files, and other files written to `./outputs` to the storage account associated with the workspace
1. Scaling down compute, including removing temporary storage
-If you choose to train on your local machine ("configure as local run"), you do not need to use Docker. You may use Docker locally if you choose (see the section [Configure ML pipeline](./how-to-debug-pipelines.md) for an example).
+If you choose to train on your local machine ("configure as local run"), you do not need to use Docker. You may use Docker locally if you choose (see the section [Configure ML pipeline](v1/how-to-debug-pipelines.md) for an example).
## Azure Machine Learning designer
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
When you create a new workspace, it automatically creates several Azure resources:
+ [Azure Container Registry](https://azure.microsoft.com/services/container-registry/): Registers docker containers that are used for the following components:
  * [Azure Machine Learning environments](concept-environments.md) when training and deploying models
  * [AutoML](concept-automated-ml.md) when deploying
- * [Data profiling](how-to-connect-data-ui.md#data-profile-and-preview)
+ * [Data profiling](v1/how-to-connect-data-ui.md#data-profile-and-preview)
To minimize costs, ACR is **lazy-loaded** until images are needed.
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
For more examples on how to include AutoML in your pipelines, please check out
## Next steps
-+ Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
++ Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
Try it out:
![Select Import](./media/how-to-configure-environment/azure-db-screenshot.png)
![Import Panel](./media/how-to-configure-environment/azure-db-import.png)
-+ Learn how to [create a pipeline with Databricks as the training compute](./how-to-create-machine-learning-pipelines.md).
++ Learn how to [create a pipeline with Databricks as the training compute](v1/how-to-create-machine-learning-pipelines.md).
## Troubleshooting
machine-learning How To Create Image Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-image-labeling-projects.md
Use the **Export** button on the **Project details** page of your labeling proje
* Image labels can be exported as:
  * [COCO format](http://cocodataset.org/#format-data). The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *Labeling/export/coco*.
- * An [Azure Machine Learning dataset with labels](how-to-use-labeled-dataset.md).
+ * An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).
Access exported Azure Machine Learning datasets in the **Datasets** section of Machine Learning. The dataset details page also provides sample code to access your labels from Python.
machine-learning How To Create Text Labeling Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-text-labeling-projects.md
Use the **Export** button on the **Project details** page of your labeling proje
For all project types other than **Text Named Entity Recognition**, you can export:
* A CSV file. The CSV file is created in the default blob store of the Azure Machine Learning workspace in a folder within *Labeling/export/csv*.
-* An [Azure Machine Learning dataset with labels](how-to-use-labeled-dataset.md).
+* An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).
For **Text Named Entity Recognition** projects, you can export:
-* An [Azure Machine Learning dataset with labels](how-to-use-labeled-dataset.md).
+* An [Azure Machine Learning dataset with labels](v1/how-to-use-labeled-dataset.md).
* A CoNLL file. For this export, you'll also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. When the file is ready to download, you'll see a notification on the top right. Select this to open the notification, which includes the link to the file.
:::image type="content" source="media/how-to-create-text-labeling-projects/notification-bar.png" alt-text="Notification for file download.":::
machine-learning How To Debug Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-debug-visual-studio-code.md
Learn more about troubleshooting:
* [Local model deployment](./v1/how-to-troubleshoot-deployment-local.md) * [Remote model deployment](./v1/how-to-troubleshoot-deployment.md)
-* [Machine learning pipelines](how-to-debug-pipelines.md)
-* [ParallelRunStep](how-to-debug-parallel-run-step.md)
+* [Machine learning pipelines](v1/how-to-debug-pipelines.md)
+* [ParallelRunStep](v1/how-to-debug-parallel-run-step.md)
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
When deploying a model for use with Azure Cognitive Search, the deployment must
* A registered model. If you do not have a model, use the example notebook at [https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill](https://github.com/Azure-Samples/azure-search-python-samples/tree/master/AzureML-Custom-Skill).
-* A general understanding of [How and where to deploy models](how-to-deploy-and-where.md).
+* A general understanding of [How and where to deploy models](v1/how-to-deploy-and-where.md).
## Connect to your workspace
def run(raw_data):
return json.dumps({"error": result, "tb": traceback.format_exc()})
```
-For more information on entry scripts, see [How and where to deploy](how-to-deploy-and-where.md).
+For more information on entry scripts, see [How and where to deploy](v1/how-to-deploy-and-where.md).
## Define the software environment
For more information on environments, see [Create and manage environments for tr
The deployment configuration defines the Azure Kubernetes Service hosting environment used to run the web service.
> [!TIP]
-> If you aren't sure about the memory, CPU, or GPU needs of your deployment, you can use profiling to learn these. For more information, see [How and where to deploy a model](how-to-deploy-and-where.md).
+> If you aren't sure about the memory, CPU, or GPU needs of your deployment, you can use profiling to learn these. For more information, see [How and where to deploy a model](v1/how-to-deploy-and-where.md).
```python
from azureml.core.model import Model
machine-learning How To Designer Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-python.md
Now, you have a dataset with the new feature **Dollars/HP**, which could be usef
## Next steps
-Learn how to [import your own data](how-to-designer-import-data.md) in Azure Machine Learning designer.
+Learn how to [import your own data](v1/how-to-designer-import-data.md) in Azure Machine Learning designer.
machine-learning How To Designer Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-transform-data.md
This how-to is a prerequisite for the [how to retrain designer models](how-to-re
## Transform a dataset
-In this section, you learn how to import the sample dataset and split the data into US and non-US datasets. For more information on how to import your own data into the designer, see [how to import data](how-to-designer-import-data.md).
+In this section, you learn how to import the sample dataset and split the data into US and non-US datasets. For more information on how to import your own data into the designer, see [how to import data](v1/how-to-designer-import-data.md).
### Import data
Now that your pipeline is set up to split the data, you need to specify where to
**File format**: csv
> [!NOTE]
- > This article assumes that you have access to a datastore registered to the current Azure Machine Learning workspace. For instructions on how to setup a datastore, see [Connect to Azure storage services](how-to-connect-data-ui.md#create-datastores).
+ > This article assumes that you have access to a datastore registered to the current Azure Machine Learning workspace. For instructions on how to setup a datastore, see [Connect to Azure storage services](v1/how-to-connect-data-ui.md#create-datastores).
If you don't have a datastore, you can create one now. For example purposes, this article will save the datasets to the default blob storage account associated with the workspace. It will save the datasets into the `azureml` container in a new folder called `data`.
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
However, in order to load that model in a notebook in your custom local Conda en
## Next steps
-* Learn more about [how and where to deploy a model](how-to-deploy-and-where.md).
+* Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
* See how to [enable interpretability features](how-to-machine-learning-interpretability-automl.md) specifically within automated ML experiments.
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
When your resource group and repository are no longer needed, clean up the resou
> [!div class="nextstepaction"]
> [Learning path: End-to-end MLOps with Azure Machine Learning](/learn/paths/build-first-machine-operations-workflow/)
-> [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md)
+> [Create and run machine learning pipelines with Azure Machine Learning SDK v1](v1/how-to-create-machine-learning-pipelines.md)
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-identity-based-data-access.md
In this article, you learn how to connect to storage services on Azure by using
Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
-To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md#create-datastores).
+To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](v1/how-to-connect-data-ui.md#create-datastores).
To create datastores that use **credential-based** authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
We recommend that you use [Azure Machine Learning datasets](./v1/how-to-create-r
> [!IMPORTANT]
> Datasets using identity-based data access are not supported for [automated ML experiments](how-to-configure-auto-train.md).
-Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
+Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](v1/how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
To create a dataset, you can reference paths from datastores that also use identity-based data access.
identity:
## Next steps
* [Create an Azure Machine Learning dataset](./v1/how-to-create-register-datasets.md)
-* [Train with datasets](how-to-train-with-datasets.md)
-* [Create a datastore with key-based data access](how-to-access-data.md)
+* [Train with datasets](v1/how-to-train-with-datasets.md)
+* [Create a datastore with key-based data access](v1/how-to-access-data.md)
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
ws.compute_targets['Synapse Spark pool alias']
## Next steps
-* [How to data wrangle with Azure Synapse (preview)](how-to-data-prep-synapse-spark-pool.md).
-* [How to use Apache Spark in your machine learning pipeline with Azure Synapse (preview)](how-to-use-synapsesparkstep.md)
+* [How to data wrangle with Azure Synapse (preview)](v1/how-to-data-prep-synapse-spark-pool.md).
+* [How to use Apache Spark in your machine learning pipeline with Azure Synapse (preview)](v1/how-to-use-synapsesparkstep.md)
* [Train a model](how-to-set-up-training-targets.md).
* [How to securely integrate Azure Synapse and Azure Machine Learning workspaces](how-to-private-endpoint-integration-synapse.md).
machine-learning How To Machine Learning Interpretability Aml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability-aml.md
Follow one of these paths to access the explanations dashboard in Azure Machine
[![Visualization Dashboard with Aggregate Feature Importance in AzureML studio in experiments](./media/how-to-machine-learning-interpretability-aml/model-explanation-dashboard-aml-studio.png)](./media/how-to-machine-learning-interpretability-aml/model-explanation-dashboard-aml-studio.png#lightbox)
* **Models** pane
- 1. If you registered your original model by following the steps in [Deploy models with Azure Machine Learning](./how-to-deploy-and-where.md), you can select **Models** in the left pane to view it.
+
+ 1. If you registered your original model by following the steps in [Deploy models with Azure Machine Learning](./how-to-deploy-managed-online-endpoints.md), you can select **Models** in the left pane to view it.
1. Select a model, and then the **Explanations** tab to view the explanations dashboard.
## Interpretability at inference time
You can deploy the explainer along with the original model and use it at inferen
1. Deploy the image to a compute target, by following these steps:
- 1. If needed, register your original prediction model by following the steps in [Deploy models with Azure Machine Learning](./how-to-deploy-and-where.md).
+ 1. If needed, register your original prediction model by following the steps in [Deploy models with Azure Machine Learning](./how-to-deploy-managed-online-endpoints.md).
1. Create a scoring file.
machine-learning How To Machine Learning Interpretability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-machine-learning-interpretability.md
---+++ Previously updated : 05/10/2022 Last updated : 08/17/2022
-# Model interpretablity (preview)
+# Model interpretability (preview)
This article describes methods you can use for model interpretability in Azure Machine Learning.
This article describes methods you can use for model interpretability in Azure M
## Why is model interpretability important to model debugging?
-When machine learning models are used in ways that impact people's lives, it is critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as model debugging (Why did my model make this mistake? How can I improve my model?), human-AI collaboration (How can I understand and trust the model's decisions?), and regulatory compliance (Does my model satisfy legal requirements?).
+When machine learning models are used in ways that impact people's lives, it's critically important to understand what influences the behavior of models. Interpretability helps answer questions in scenarios such as model debugging (Why did my model make this mistake? How can I improve my model?), human-AI collaboration (How can I understand and trust the model's decisions?), and regulatory compliance (Does my model satisfy legal requirements?).
-The interpretability component of the [Responsible AI dashboard](LINK TO CONCEPT DOC RESPONSIBLE AI DASHBOARD) contributes to the "diagnose" stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a Machine Learning model. It provides multiple views into a model's behavior: global explanations (e.g., what features affect the overall behavior of a loan allocation model) and local explanations (e.g., why a customer's loan application was approved or rejected). One can also observe model explanations for a selected cohort as a subgroup of data points. This is valuable when, for example, assessing fairness in model predictions for individuals in a particular demographic group. The local explanation tab of this component also represents a full data visualization which is great for general eyeballing the data and looking at differences between correct and incorrect predictions of each cohort.
+The interpretability component of the [Responsible AI dashboard](concept-responsible-ai-dashboard.md) contributes to the "diagnose" stage of the model lifecycle workflow by generating human-understandable descriptions of the predictions of a Machine Learning model. It provides multiple views into a model's behavior: global explanations (for example, what features affect the overall behavior of a loan allocation model) and local explanations (for example, why a customer's loan application was approved or rejected). One can also observe model explanations for a selected cohort as a subgroup of data points. This is valuable when, for example, assessing fairness in model predictions for individuals in a particular demographic group. The local explanation tab of this component also represents a full data visualization, which is great for general eyeballing the data and looking at differences between correct and incorrect predictions of each cohort.
-The capabilities of this component are founded by [InterpretML](https://interpret.ml/) capabilities on generating model explanations.
+The capabilities of this component are founded by the [InterpretML](https://interpret.ml/) package, generating model explanations.
Use interpretability when you need to...
-+ Determine how trustworthy your AI system's predictions are by understanding what features are most important for the predictions.
-+ Approach the debugging of your model by understanding it first and identifying if the model is using healthy features or merely spurious correlations.
-+ Uncover potential sources of unfairness by understanding whether the model is predicting based on sensitive features or features highly correlated with them.
-+ Build end user trust in your model's decisions by generating local explanations to illustrate their outcomes.
-+ Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
+
+- Determine how trustworthy your AI system's predictions are by understanding what features are most important for the predictions.
+- Approach the debugging of your model by understanding it first and identifying if the model is using healthy features or merely spurious correlations.
+- Uncover potential sources of unfairness by understanding whether the model is predicting based on sensitive features or features highly correlated with them.
+- Build end-user trust in your model's decisions by generating local explanations to illustrate their outcomes.
+- Complete a regulatory audit of an AI system to validate models and monitor the impact of model decisions on humans.
## How to interpret your model?
-In machine learning, **features** are the data fields used to predict a target data point. For example, to predict credit risk, data fields for age, account size, and account age might be used. In this case, age, account size, and account age are **features**. Feature importance tells you how each data field affected the model's predictions. For example, age may be heavily used in the prediction while account size and account age do not affect the prediction values significantly. This process allows data scientists to explain resulting predictions, so that stakeholders have visibility into what features are most important in the model.
+
+In machine learning, **features** are the data fields used to predict a target data point. For example, to predict credit risk, data fields for age, account size, and account age might be used. In this case, age, account size, and account age are **features**. Feature importance tells you how each data field affected the model's predictions. For example, age may be heavily used in the prediction while account size and account age don't affect the prediction values significantly. This process allows data scientists to explain resulting predictions, so that stakeholders have visibility into what features are most important in the model.
Using the classes and methods in SDK v2 and CLI v2 for the Responsible AI dashboard, you can:
-+ Explain model prediction by generating feature importance values for the entire model (global explanation) and/or individual datapoints (local explanation).
-+ Achieve model interpretability on real-world datasets at scale
-+ Use an interactive visualization dashboard to discover patterns in data and explanations at training time
+
+- Explain model prediction by generating feature importance values for the entire model (global explanation) and/or individual data points (local explanation).
+- Achieve model interpretability on real-world datasets at scale.
+- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
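For orientation, here's a minimal sketch of generating these explanations programmatically with the open-source `responsibleai` package that the dashboard components build on (not the SDK v2 pipeline components themselves); the dataset, target column, and model below are only illustrative:

```python
# Sketch: compute model explanations with the open-source responsibleai package
# that underlies the Responsible AI dashboard. Dataset and model are illustrative.
from responsibleai import RAIInsights
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer(as_frame=True)
df = data.frame  # feature columns plus a "target" column
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0).fit(
    train_df.drop(columns="target"), train_df["target"]
)

rai_insights = RAIInsights(
    model=model,
    train=train_df,
    test=test_df,
    target_column="target",
    task_type="classification",
)
rai_insights.explainer.add()  # request the interpretability component
rai_insights.compute()        # compute all requested components

# Global and local explanations can now be read programmatically.
explanations = rai_insights.explainer.get()
```

The same insights object can also be rendered interactively with `ResponsibleAIDashboard` from the `raiwidgets` package.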
Using the classes and methods in the SDK v1, you can:
-+ Explain model prediction by generating feature importance values for the entire model and/or individual datapoints.
-+ Achieve model interpretability on real-world datasets at scale, during training and inference.
-+ Use an interactive visualization dashboard to discover patterns in data and explanations at training time
+
+- Explain model prediction by generating feature importance values for the entire model and/or individual data points.
+- Achieve model interpretability on real-world datasets at scale, during training and inference.
+- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
The model interpretability classes are made available through the following SDK v1 package: (Learn how to [install SDK packages for Azure Machine Learning](/python/api/overview/azure/ml/install))
The model interpretability classes are made available through the following SDK
Use `pip install azureml-interpret` for general use.

## Supported model interpretability techniques
-The Responsible AI dashboard and `azureml-interpret` use the interpretability techniques developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open source Python package for training interpretable models and helping to explain opaque-box AI systems. Opaque-box models are those for which we have no information about their internal workings. interpret-Community serves as the host for this SDK's supported explainers.
+The Responsible AI dashboard and `azureml-interpret` use the interpretability techniques developed in [Interpret-Community](https://github.com/interpretml/interpret-community/), an open-source Python package for training interpretable models and helping to explain opaque-box AI systems. Opaque-box models are those for which we have no information about their internal workings. Interpret-Community serves as the host for this SDK's supported explainers.
[Interpret-Community](https://github.com/interpretml/interpret-community/) serves as the host for the following supported explainers, and currently supports the following interpretability techniques:
The Responsible AI dashboard and `azureml-interpret` use the interpretability te
|Interpretability Technique|Description|Type|
|--|--|--|
|SHAP Tree Explainer| [SHAP](https://github.com/slundberg/shap)'s tree explainer, which focuses on polynomial time fast SHAP value estimation algorithm specific to **trees and ensembles of trees**.|Model-specific|
-|SHAP Deep Explainer| Based on the explanation from SHAP, Deep Explainer "is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with DeepLIFT described in the [SHAP NIPS paper](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions). **TensorFlow** models and **Keras** models using the TensorFlow backend are supported (there is also preliminary support for PyTorch)".|Model-specific|
+|SHAP Deep Explainer| Based on the explanation from SHAP, Deep Explainer "is a high-speed approximation algorithm for SHAP values in deep learning models that builds on a connection with DeepLIFT described in the [SHAP NIPS paper](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions). **TensorFlow** models and **Keras** models using the TensorFlow backend are supported (there's also preliminary support for PyTorch)".|Model-specific|
|SHAP Linear Explainer| SHAP's Linear explainer computes SHAP values for a **linear model**, optionally accounting for inter-feature correlations.|Model-specific|
|SHAP Kernel Explainer| SHAP's Kernel explainer uses a specially weighted local linear regression to estimate SHAP values for **any model**.|Model-agnostic|
|Mimic Explainer (Global Surrogate)| Mimic explainer is based on the idea of training [global surrogate models](https://christophm.github.io/interpretable-ml-book/global.html) to mimic opaque-box models. A global surrogate model is an intrinsically interpretable model that is trained to approximate the predictions of **any opaque-box model** as accurately as possible. Data scientists can interpret the surrogate model to draw conclusions about the opaque-box model. You can use one of the following interpretable models as your surrogate model: LightGBM (LGBMExplainableModel), Linear Regression (LinearExplainableModel), Stochastic Gradient Descent explainable model (SGDExplainableModel), and Decision Tree (DecisionTreeExplainableModel).|Model-agnostic|
-|Permutation Feature Importance Explainer (PFI)| Permutation Feature Importance is a technique used to explain classification and regression models that is inspired by [Breiman's Random Forests paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) (see section 10). At a high level, the way it works is by randomly shuffling data one feature at a time for the entire dataset and calculating how much the performance metric of interest changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of **any underlying model** but does not explain individual predictions. |Model-agnostic|
+|Permutation Feature Importance Explainer (PFI)| Permutation Feature Importance is a technique used to explain classification and regression models that is inspired by [Breiman's Random Forests paper](https://www.stat.berkeley.edu/~breiman/randomforest2001.pdf) (see section 10). At a high level, the way it works is by randomly shuffling data one feature at a time for the entire dataset and calculating how much the performance metric of interest changes. The larger the change, the more important that feature is. PFI can explain the overall behavior of **any underlying model** but doesn't explain individual predictions. |Model-agnostic|
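To make the permutation idea concrete, here's a rough sketch using scikit-learn's own `permutation_importance` implementation (a separate implementation of the same idea, not the azureml PFI explainer itself):

```python
# Illustrates permutation feature importance with scikit-learn's implementation:
# shuffle one feature at a time and measure how much the score degrades.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Larger mean importance means the model relies more heavily on that feature.
for name, importance in sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1]):
    print(f"{name}: {importance:.3f}")
```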
Besides the interpretability techniques described above, we support another SHAP-based explainer, called `TabularExplainer`. Depending on the model, `TabularExplainer` uses one of the supported SHAP explainers:
Besides the interpretability techniques described above, we support another SHAP
`TabularExplainer` has also made significant feature and performance enhancements over the direct SHAP Explainers:

* **Summarization of the initialization dataset**. In cases where speed of explanation is most important, we summarize the initialization dataset and generate a small set of representative samples, which speeds up the generation of overall and individual feature importance values.
-* **Sampling the evaluation data set**. If the user passes in a large set of evaluation samples but does not actually need all of them to be evaluated, the sampling parameter can be set to true to speed up the calculation of overall model explanations.
+* **Sampling the evaluation data set**. If the user passes in a large set of evaluation samples but doesn't actually need all of them to be evaluated, the sampling parameter can be set to true to speed up the calculation of overall model explanations.
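For example, a minimal sketch of using `TabularExplainer` from the SDK v1 / Interpret-Community stack (the model and dataset here are only illustrative):

```python
# Sketch: TabularExplainer picks an appropriate SHAP explainer for the model.
from interpret.ext.blackbox import TabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
x_train, x_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = GradientBoostingClassifier().fit(x_train, y_train)

explainer = TabularExplainer(
    model,
    x_train,                      # initialization (background) data
    features=data.feature_names,
    classes=["malignant", "benign"],
)

# Global explanation: overall feature importance for the model.
global_explanation = explainer.explain_global(x_test)
print(global_explanation.get_feature_importance_dict())

# Local explanation: feature importance for a handful of individual predictions.
local_explanation = explainer.explain_local(x_test[0:5])
```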
The following diagram shows the current structure of supported explainers.
-[![Machine Learning Interpretability Architecture](./media/how-to-machine-learning-interpretability/interpretability-architecture.png)](./media/how-to-machine-learning-interpretability/interpretability-architecture.png#lightbox)
- ## Supported machine learning models
The `azureml.interpret` package of the SDK supports models trained with the foll
- `iml.datatypes.DenseData`
- `scipy.sparse.csr_matrix`
-The explanation functions accept both models and pipelines as input. If a model is provided, the model must implement the prediction function `predict` or `predict_proba` that conforms to the Scikit convention. If your model does not support this, you can wrap your model in a function that generates the same outcome as `predict` or `predict_proba` in Scikit and use that wrapper function with the selected explainer. If a pipeline is provided, the explanation function assumes that the running pipeline script returns a prediction. Using this wrapping technique, `azureml.interpret` can support models trained via PyTorch, TensorFlow, and Keras deep learning frameworks as well as classic machine learning models.
+The explanation functions accept both models and pipelines as input. If a model is provided, the model must implement the prediction function `predict` or `predict_proba` that conforms to the Scikit convention. If your model doesn't support this, you can wrap your model in a function that generates the same outcome as `predict` or `predict_proba` in Scikit and use that wrapper function with the selected explainer. If a pipeline is provided, the explanation function assumes that the running pipeline script returns a prediction. Using this wrapping technique, `azureml.interpret` can support models trained via PyTorch, TensorFlow, and Keras deep learning frameworks as well as classic machine learning models.
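As an illustration, here's a hedged sketch of such a wrapper (the `score_batch` method name below is hypothetical, standing in for whatever scoring function your framework exposes):

```python
import numpy as np

class WrappedModel:
    """Adapts a model to the scikit-learn predict/predict_proba convention
    so it can be passed to an explainer."""

    def __init__(self, inner_model):
        self._inner = inner_model

    def predict_proba(self, data):
        # score_batch is a placeholder; it must return one probability
        # vector per row of `data`.
        return np.asarray(self._inner.score_batch(data))

    def predict(self, data):
        return np.argmax(self.predict_proba(data), axis=1)
```

The wrapped object (or just its prediction function) is then passed to the selected explainer in place of the original model.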
## Local and remote compute target
-The `azureml.interpret` package is designed to work with both local and remote compute targets. If run locally, The SDK functions will not contact any Azure services.
+The `azureml.interpret` package is designed to work with both local and remote compute targets. If run locally, the SDK functions won't contact any Azure services.
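When explanations are computed as part of a remote run, they're typically uploaded to the run history so they can be viewed in studio (as described in the next paragraph). A minimal sketch with the SDK v1 `ExplanationClient`, assuming an active Azure Machine Learning run and a `global_explanation` produced by an explainer:

```python
# Sketch: upload an explanation from a training script so it appears in studio.
from azureml.core.run import Run
from azureml.interpret import ExplanationClient

run = Run.get_context()                 # the current (remote) run
client = ExplanationClient.from_run(run)
client.upload_model_explanation(global_explanation, comment="global explanation")

# Later (for example, on your workstation) the explanation can be downloaded again.
downloaded_explanation = client.download_model_explanation()
```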
You can run explanation remotely on Azure Machine Learning Compute and log the explanation info into the Azure Machine Learning Run History Service. Once this information is logged, reports and visualizations from the explanation are readily available on Azure Machine Learning studio for analysis. ## Next steps -- See the how-to guide for generating a Responsible AI dashboard with model interpretability via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md)-- See the [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) generate a Responsible AI scorecard based on the insights observed in the Responsible AI dashboard.-- See the [how-to](how-to-machine-learning-interpretability-aml.md) for enabling interpretability for models training both locally and on Azure Machine Learning remote compute resources.-- Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).-- See the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/explain-model) for additional scenarios. -- If you're interested in interpretability for text scenarios, see [Interpret-text](https://github.com/interpretml/interpret-text), a related open source repo to [Interpret-Community](https://github.com/interpretml/interpret-community/), for interpretability techniques for NLP. `azureml.interpret` package does not currently support these techniques but you can get started with an [example notebook on text classification](https://github.com/interpretml/interpret-text/blob/master/notebooks/text_classification/text_classification_classical_text_explainer.ipynb).
+- Learn how to generate the Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI](how-to-responsible-ai-dashboard-ui.md).
+- Explore the [supported interpretability visualizations](how-to-responsible-ai-dashboard.md#feature-importances-model-explanations) of the Responsible AI dashboard.
+- Learn how to generate a [Responsible AI scorecard](how-to-responsible-ai-scorecard.md) based on the insights observed in the Responsible AI dashboard.
+- Learn how to enable [interpretability for automated machine learning models](how-to-machine-learning-interpretability-automl.md).
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace.md
The Workspace.list(..) method does not return the full workspace object. It incl
-## Search for assets across a workspace (preview)
-
-With the public preview search capability, you can search for machine learning assets such as jobs, models, components, environments, and data across all workspaces, resource groups, and subscriptions in your organization through a unified global view.
-
-1. Start from [Azure Machine Learning studio](https://ml.azure.com).
-1. If a workspace is open, select either the **Microsoft** menu item or the **Microsoft** link in the breadcrumb at the top of the page.
--
-### Free text search
-
-Type search text into the global search bar on the top of the studio **Microsoft** page and hit enter to trigger a 'contains' search.
-A contains search scans across all metadata fields for the given asset and sorts results relevance.
--
-You can use the asset quick links to navigate to search results for jobs, models, components, environments, and data assets that you created.
-
-Also, you can change the scope of applicable subscriptions and workspaces via the 'Change' link in the search bar drop down.
--
-### Structured search
-
-Select any number of filters to create more specific search queries. The following filters are supported:
-
-* Job:
-* Model:
-* Component:
-* Tags:
-* SubmittedBy:
-* Environment:
-* Data:
-
-If an asset filter (job, model, component, environment, data) is present, results are scoped to those tabs. Other filters apply to all assets unless an asset filter is also present in the query. Similarly, free text search can be provided alongside filters, but are scoped to the tabs chosen by asset filters, if present.
-
-> [!TIP]
-> * Filters search for exact matches of text. Use free text queries for a contains search.
-> * Quotations are required around values that include spaces or other special characters.
-> * If duplicate filters are provided, only the first will be recognized in search results.
-> * Input text of any language is supported but filter strings must match the provided options (ex. submittedBy:).
-> * The tags filter can accept multiple key:value pairs separated by a comma (ex. tags:"key1:value1, key2:value2").
-
-### View search results
-
-You can view your search results in the individual **Jobs**, **Models**, **Components**, **Environments**, and **Data** tabs. Select an asset to open its **Details** page in the context of the relevant workspace. Results from workspaces you don't have permissions to view are not displayed.
--
-If you've used this feature in a previous update, a search result error may occur. Reselect your preferred workspaces in the Directory + Subscription + Workspace tab.
-
-> [!IMPORTANT]
-> Search results may be unexpected for multiword terms in other languages (ex. Chinese characters).
## Delete a workspace
Once you have a workspace, learn how to [Train and deploy a model](tutorial-trai
To learn more about planning a workspace for your organization's requirements, see [Organize and set up Azure Machine Learning](/azure/cloud-adoption-framework/ready/azure-best-practices/ai-machine-learning-resource-organization).
-To check for problems with your workspace, see [How to use workspace diagnostics](how-to-workspace-diagnostic-api.md).
+* If you need to move a workspace to another Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
+
+* To find a workspace, see [Search for Azure Machine Learning assets (preview)](how-to-search-assets.md).
-If you need to move a workspace to another Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
+* If you need to move a workspace to another Azure subscription, see [How to move a workspace](how-to-move-workspace.md).
For information on how to keep your Azure ML up to date with the latest security updates, see [Vulnerability management](concept-vulnerability-management.md).
machine-learning How To Migrate From Estimators To Scriptrunconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-migrate-from-estimators-to-scriptrunconfig.md
src.run_config.data_references = {data_ref.data_reference_name: data_ref.to_conf
``` For more information on using data for training, see:
-* [Train with datasets in Azure ML](./how-to-train-with-datasets.md)
+* [Train with datasets in Azure ML](v1/how-to-train-with-datasets.md)
## Distributed training

If you need to configure a distributed job for training, do so by specifying the `distributed_job_config` parameter in the ScriptRunConfig constructor. Pass in an [MpiConfiguration](/python/api/azureml-core/azureml.core.runconfig.mpiconfiguration), [PyTorchConfiguration](/python/api/azureml-core/azureml.core.runconfig.pytorchconfiguration), or [TensorflowConfiguration](/python/api/azureml-core/azureml.core.runconfig.tensorflowconfiguration) for distributed jobs of the respective types.
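For example, a minimal sketch of attaching a `PyTorchConfiguration` to a `ScriptRunConfig` (the script, compute target, and environment names below are placeholders):

```python
# Sketch: configure a distributed PyTorch job with ScriptRunConfig (SDK v1).
from azureml.core import Environment, Experiment, ScriptRunConfig, Workspace
from azureml.core.runconfig import PyTorchConfiguration

ws = Workspace.from_config()
env = Environment.get(ws, name="my-training-environment")  # placeholder environment

distr_config = PyTorchConfiguration(process_count=4, node_count=2)

src = ScriptRunConfig(
    source_directory="./src",             # placeholder folder containing train.py
    script="train.py",
    compute_target="gpu-cluster",         # placeholder compute target name
    environment=env,
    distributed_job_config=distr_config,  # or MpiConfiguration / TensorflowConfiguration
)

run = Experiment(ws, "distributed-training").submit(src)
```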
machine-learning How To Responsible Ai Dashboard Sdk Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-sdk-cli.md
Previously updated : 05/10/2022 Last updated : 08/17/2022
The ` RAI Insights Dashboard Constructor` and `Gather RAI Insights Dashboard ` c
Below are specifications of the Responsible AI components and examples of code snippets in YAML and Python. To view the full code, see [sample YAML and Python notebook](https://aka.ms/RAIsamplesProgrammer) ### Limitations+ The current set of components has a number of limitations on their use: - All models must be registered in AzureML in MLFlow format with a sklearn flavor. - The models must be loadable in the component environment. - The models must be pickleable.-- The models must be supplied to the RAI components using the 'Fetch Registered Model' component which we provide.
+- The models must be supplied to the RAI components using the 'Fetch Registered Model' component that we provide.
- The dataset inputs must be `pandas` DataFrames in Parquet format. - A model must still be supplied even if only a causal analysis of the data is performed. The `DummyClassifier` and `DummyRegressor` estimators from scikit-learn can be used for this purpose.
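For the causal-analysis-only case, here's a hedged sketch of registering a placeholder `DummyClassifier` in the required MLflow/sklearn format (the data, model, and registration names are illustrative, and MLflow is assumed to be tracking against your Azure ML workspace):

```python
# Sketch: register a placeholder DummyClassifier in MLflow sklearn format so the
# RAI components have a model input even when only causal analysis is needed.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.dummy import DummyClassifier

X = pd.DataFrame({"age": [25, 40, 60], "income": [30_000, 55_000, 80_000]})
y = [0, 1, 1]

dummy = DummyClassifier(strategy="most_frequent").fit(X, y)

with mlflow.start_run():
    mlflow.sklearn.log_model(
        sk_model=dummy,
        artifact_path="dummy_model",
        registered_model_name="rai_dummy_classifier",  # illustrative name
    )
```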
This component has a single output port, which can be connected to one of the `i
### Add Error Analysis to RAI Insights Dashboard
-This component generates an error analysis for the model. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It also accepts the following parameters:
+This component generates an error analysis for the model. It has a single input port, which accepts the output of the RAI Insights Dashboard Constructor. It also accepts the following parameters:
| Parameter Name | Description | Type |
|-|-|-|
The supplied datasets should be file datasets (uri_file type) in Parquet format.
- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md). - Learn more about how to [collect data responsibly](concept-sourcing-human-data.md) - View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.-- Learn more about how the Responsible AI Dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)-- Learn about how the Responsible AI Dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)-- Explore the features of the Responsible AI Dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn about how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
+- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
machine-learning How To Responsible Ai Dashboard Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard-ui.md
Previously updated : 05/10/2022 Last updated : 08/17/2022 # Generate Responsible AI dashboard in the studio UI (preview)
-You can create a Responsible AI dashboard with a no-code experience in the Azure Machine Learning studio UI. To start the wizard, navigate to the registered model you'd like to create Responsible AI insights for and select the **Details** tab. Then select the **Create Responsible AI dashboard (preview)** button.
+You can create a Responsible AI dashboard with a no-code experience in the [Azure Machine Learning studio UI](https://ml.azure.com/). Use the following steps to access the dashboard generation wizard:
+
+- [Register your model](how-to-manage-models.md) in Azure Machine Learning before being able to access the no-code experience.
+- Navigate to the **Models** tab from the left navigation bar in Azure Machine Learning studio.
+- Select the registered model you'd like to create Responsible AI insights for and select the **Details** tab.
+- Select the **Create Responsible AI dashboard (preview)** button from the top panel.
+
+To learn more, see the Responsible AI dashboard's [supported model types and limitations](concept-responsible-ai-dashboard.md#supported-scenarios-and-limitations).
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard-ui/model-page.png" alt-text="Screenshot of the wizard details tab with create responsible AI dashboard tab highlighted." lightbox ="./media/how-to-responsible-ai-dashboard-ui/model-page.png":::
Finally, configure your experiment to kick off a job to generate your Responsibl
1. **Name**: Give your dashboard a unique name so that you can differentiate it when you're viewing the list of dashboards for a given model. 2. **Experiment name**: Select an existing experiment to run the job in, or create a new experiment. 3. **Existing experiment**: Select an existing experiment from the drop-down.
-4. **Select compute type**: Specify which compute type you'd like to use to execute your job.
+4. **Select compute type**: Specify which compute type you'd like to use to execute your job.
5. **Select compute**: Select the compute you'd like to use from the drop-down. If there are no existing compute resources, select the "+" to create a new compute resource and refresh the list. 6. **Description**: Add a more verbose description for your Responsible AI dashboard. 7. **Tags**: Add any tags to this Responsible AI dashboard.
After you've finished your experiment configuration, select **Create** to star
- Summarize and share your Responsible AI insights with the [Responsible AI scorecard as a PDF export](how-to-responsible-ai-scorecard.md). - Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md). - Learn more about how to [collect data responsibly](concept-sourcing-human-data.md)-- Learn more about how the Responsible AI Dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)-- Learn about how the Responsible AI Dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)-- Explore the features of the Responsible AI Dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn about how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
+- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
machine-learning How To Responsible Ai Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-dashboard.md
Previously updated : 05/10/2022 Last updated : 08/17/2022 # How to use the Responsible AI dashboard in studio (preview)
-Responsible AI dashboards are linked to your registered models. To view your Responsible AI dashboard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Once you select into your model, select the **Responsible AI (preview)** tab to view a list of generated dashboards.
+Responsible AI dashboards are linked to your registered models. To view your Responsible AI dashboard, go into your model registry and select the registered model you've generated a Responsible AI dashboard for. Once you select your model, select the **Responsible AI (preview)** tab to view a list of generated dashboards.
:::image type="content" source="./media/how-to-responsible-ai-dashboard/dashboard-model-details-tab.png" alt-text="Screenshot of model details tab in studio with Responsible A I tab highlighted." lightbox= "./media/how-to-responsible-ai-dashboard/dashboard-model-details-tab.png":::
-Multiple dashboards can be configured and attached to your registered model. Different combinations of components (explainers, causal analysis, etc.) can be attached to each Responsible AI dashboard. The list below only shows whether a component was generated for your dashboard, but different components can be viewed or hidden within the dashboard itself.
+Multiple dashboards can be configured and attached to your registered model. Different combinations of components (interpretability, error analysis, causal analysis, and so on) can be attached to each Responsible AI dashboard. The list below shows how each dashboard was customized and which components were generated within it. However, once you open a dashboard, you can view or hide individual components within the dashboard UI itself.
:::image type="content" source="./media/how-to-responsible-ai-dashboard/dashboard-page.png" alt-text="Screenshot of Responsible A I tab with a dashboard name highlighted." lightbox = "./media/how-to-responsible-ai-dashboard/dashboard-page.png":::
Selecting the name of the dashboard will open up your dashboard into a full view
## Full functionality with integrated compute resource
-Some features of the Responsible AI dashboard require dynamic, real-time computation. Without connecting a compute resource to the dashboard, you may find some functionality missing. Connecting to a compute resource will enable full functionality of your Responsible AI dashboard for the following components:
+Some features of the Responsible AI dashboard require dynamic, on-the-fly computation (for example, what-if analysis). Without connecting a compute resource to the dashboard, you may find some functionality missing. Connecting to a compute resource enables full functionality of your Responsible AI dashboard for the following components:
- **Error analysis** - Setting your global data cohort to any cohort of interest will update the error tree instead of disabling it.
Some features of the Responsible AI dashboard require dynamic, real-time computa
- Dynamically updating the heatmap for up to two features is supported. - **Feature importance** - An individual conditional expectation (ICE) plot in the individual feature importance tab is supported.-- Counterfactual what-if
+- **Counterfactual what-if**
- Generating a new what-if counterfactual datapoint to understand the minimum change required for a desired outcome is supported. - **Causal analysis** - Selecting any individual datapoint, perturbing its treatment features, and seeing the expected causal outcome of causal what-if is supported (only for regression ML scenarios).
The information above can also be found on the Responsible AI dashboard page by
### How to enable full functionality of Responsible AI dashboard
-1. Select a running compute instance from compute dropdown above your dashboard. If you don't have a running compute, create a new compute instance by selecting "+" button next to the compute dropdown, or "Start compute" button to start a stopped compute instance. Creating or starting a compute instance may take few minutes.
+1. Select a running compute instance from the compute drop-down above your dashboard. If you don't have a running compute, create a new compute instance by selecting the "+" button next to the compute drop-down, or the "Start compute" button to start a stopped compute instance. Creating or starting a compute instance may take a few minutes.
- :::image type="content" source="./media/how-to-responsible-ai-dashboard/select-compute.png" alt-text="Screenshot showing how to selecting a compute." lightbox = "./media/how-to-responsible-ai-dashboard/select-compute.png":::
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/select-compute.png" alt-text="Screenshot showing how to select a compute." lightbox = "./media/how-to-responsible-ai-dashboard/select-compute.png":::
2. Once the compute is in the "Running" state, your Responsible AI dashboard will start to connect to the compute instance. To achieve this, a terminal process will be created on the selected compute instance, and a Responsible AI endpoint will be started on the terminal. Select **View terminal outputs** to view the current terminal process.
Selecting the "Feature list" button opens a side panel, which allows you to retr
#### Error heat map
-Selecting the **Heat map** tab switches to a different view of the error in the dataset. You can select on one or many heat map cells and create new cohorts. You can choose up to two features to create a heatmap.
+Selecting the **Heat map** tab switches to a different view of the error in the dataset. You can select one or many heat map cells and create new cohorts. You can choose up to two features to create a heatmap.
:::image type="content" source="./media/how-to-responsible-ai-dashboard/error-analysis-heat-map.png" alt-text="Screenshot of the dashboard showing error analysis heat map feature list." lightbox= "./media/how-to-responsible-ai-dashboard/error-analysis-heat-map.png":::
Selecting the **Heat map** tab switches to a different view of the error in the
### Model overview
-The model overview component provides a set of commonly used model performance metrics and a box plot visualization to explore the distribution of your prediction values and errors.
+The Model overview component provides a comprehensive set of performance and fairness metrics for evaluating your model, along with key performance disparity metrics across specified features and dataset cohorts.
+
+#### Dataset cohorts
+
+The **Dataset cohorts** tab allows you to investigate your model by comparing the model performance of different user-specified dataset cohorts (accessible via the Cohort settings icon on the top right corner of the dashboard).
+
+> [!NOTE]
+> You can create new dataset cohorts from the UI experience or pass your pre-built cohorts to the dashboard via the SDK experience.
++
+1. **Help me choose metrics**: Selecting this icon will open a panel with more information about what model performance metrics are available to be shown in the table below. Easily adjust which metrics you can view by using the multi-select drop-down to select and deselect performance metrics (see more below).
+2. **Show heatmap**: Toggle on and off to see heatmap visualization in the table below. The gradient of the heatmap corresponds to the range normalized between the lowest value and the highest value in each column.
+3. **Table of metrics for each dataset cohort**: Table with columns for dataset cohorts, sample size of each cohort, and the selected model performance metrics for each cohort.
+4. **Bar chart visualizing individual metric** (for example, mean absolute error) across the cohorts for easy comparison.
+5. **Choose metric (x-axis)**: Selecting this will allow you to select which metric to view in the bar chart.
+6. **Choose cohorts (y-axis)**: Selecting this will allow you to select which cohorts you want to view in the bar chart. You may see the "Feature cohort" selection disabled unless you first specify your desired features in the "Feature cohorts" tab of the component.
+
+Selecting "Help me choose metrics" will open a panel with the list of model performance metrics and their corresponding definitions to help you select the right metric to view.
| ML scenario | Metrics |
|-|-|
| Regression | Mean absolute error, Mean squared error, R<sup>2</sup>, Mean prediction. |
| Classification | Accuracy, Precision, Recall, F1 score, False positive rate, False negative rate, Selection rate |
-You can further investigate your model by looking at a comparative analysis of its performance across different cohorts or subgroups of your dataset, including automatically created "temporary cohorts" based on selected nodes from the Error analysis component. Select filters along y-value and x-value to cut across different dimensions.
+Classification scenarios will support accuracy, F1 score, precision score, recall score, false positive rate, false negative rate and selection rate (the percentage of predictions with label 1):
++++
+Regression scenarios will support mean absolute error, mean squared error, and mean prediction:
+++
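As a rough illustration of how the classification metrics above are defined outside the dashboard, here's a sketch computing them with scikit-learn for a single (tiny, illustrative) cohort; selection rate is simply the fraction of predictions equal to 1:

```python
# Sketch: compute the classification metrics reported per cohort.
import numpy as np
from sklearn.metrics import (
    accuracy_score, confusion_matrix, f1_score, precision_score, recall_score,
)

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "recall": recall_score(y_true, y_pred),
    "f1_score": f1_score(y_true, y_pred),
    "false_positive_rate": fp / (fp + tn),
    "false_negative_rate": fn / (fn + tp),
    "selection_rate": y_pred.mean(),  # share of predictions labeled 1
}
print(metrics)
```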
+#### Feature cohorts
+
+The **Feature cohorts** tab allows you to investigate your model by comparing model performance across user-specified sensitive/non-sensitive features (for example, performance across different gender, race, income level cohorts).
++
+1. **Help me choose metrics**: Selecting this icon will open a panel with more information about what metrics are available to be shown in the table below. Easily adjust which metrics you can view by using the multi-select drop down to select and deselect performance metrics.
+2. **Help me choose features**: Selecting this icon will open a panel with more information about what features are available to be shown in the table below with descriptors of each feature and binning capability (see below). Easily adjust which features you can view by using the multi-select drop-down to select and deselect features.
+
+ Selecting "Help me choose features" will open a panel with the list of features and their properties:
+
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-choose-features.png" alt-text="Screenshot of the dashboard's model overview tab showing how to choose features." lightbox= "./media/how-to-responsible-ai-dashboard/model-overview-choose-features.png":::
+3. **Show heatmap**: toggle on and off to see heatmap visualization in the table below. The gradient of the heatmap corresponds to the range normalized between the lowest value and the highest value in each column.
+4. **Table of metrics for each feature cohort**: Table with columns for feature cohorts (sub-cohort of your selected feature), sample size of each cohort, and the selected model performance metrics for each feature cohort.
+5. **Fairness metrics/disparity metrics**: Table that corresponds to the above metrics table and shows the maximum difference or maximum ratio in performance scores between any two feature cohorts.
+6. **Bar chart visualizing individual metric** (for example, mean absolute error) across the cohort for easy comparison.
+7. **Choose cohorts (y-axis)**: Selecting this will allow you to select which cohorts you want to view in the bar chart.
+
+ Selecting "Choose cohorts" will open a panel with an option to either show a comparison of selected dataset cohorts or feature cohorts based on what is selected in the multi-select drop-down below it. Select "Confirm" to save the changes to the bar chart view.
+
+ :::image type="content" source="./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png" alt-text="Screenshot of the dashboard's model overview tab showing how to choose cohorts." lightbox= "./media/how-to-responsible-ai-dashboard/model-overview-choose-cohorts.png":::
+8. **Choose metric (x-axis)**: Selecting this will allow you to select which metric to view in the bar chart.
### Data explorer
The Data explorer component allows you to analyze data statistics along axes fil
:::image type="content" source="./media/how-to-responsible-ai-dashboard/data-explorer-aggregate.png" alt-text="Screenshot of the dashboard showing the data explorer." lightbox= "./media/how-to-responsible-ai-dashboard/data-explorer-aggregate.png"::: 1. **Select a dataset cohort to explore**: Specify which dataset cohort from your list of cohorts you want to view data statistics for.
-2. **X-axis**: displays the type of value being plotted horizontally, modify by clicking the button to open a side panel.
-3. **Y-axis**: displays the type of value being plotted vertically, modify by clicking the button to open a side panel.
+2. **X-axis**: displays the type of value being plotted horizontally; modify it by selecting the button to open a side panel.
+3. **Y-axis**: displays the type of value being plotted vertically; modify it by selecting the button to open a side panel.
4. **Chart type**: specifies the chart type; choose between aggregate plots (bar charts) or individual datapoints (scatter plot). Selecting the "Individual datapoints" option under "Chart type" shifts to a disaggregated view of the data with a color axis available.
The model explanation component allows you to see which features were most impor
3. **Sort by**: allows you to select which cohort's importances to sort the aggregate feature importance graph by. 4. **Chart type**: allows you to select between a bar plot view of average importances for each feature and a box plot of importances for all data.
-When you select on one of the features in the bar plot, the below dependence plot will be populated. The dependence plot shows the relationship of the values of a feature to its corresponding feature importance values impacting the model prediction.
+When you select one of the features in the bar plot, the below dependence plot will be populated. The dependence plot shows the relationship of the values of a feature to its corresponding feature importance values impacting the model prediction.
:::image type="content" source="./media/how-to-responsible-ai-dashboard/aggregate-feature-importance-2.png" alt-text="Screenshot of the dashboard showing a populated dependence plot on the aggregate feature importances tab." lightbox="./media/how-to-responsible-ai-dashboard/aggregate-feature-importance-2.png":::
Selecting the **Create what-if counterfactual** button opens a full window panel
5. **Search features**: finds features to observe and change values. 6. **Sort counterfactual by ranked features**: sorts counterfactual examples in order of perturbation effect (see above for top ranked features plot).
-7. **Counterfactual Examples**: lists feature values of example counterfactuals with the desired class or range. The first row is the original reference datapoint. Select on ΓÇ£Set valueΓÇ¥ to set all the values of your own counterfactual datapoint in the bottom row with the values of the pre-generated counterfactual example.
+7. **Counterfactual Examples**: lists feature values of example counterfactuals with the desired class or range. The first row is the original reference datapoint. Select "Set value" to set all the values of your own counterfactual datapoint in the bottom row with the values of the pre-generated counterfactual example.
8. **Predicted value or class** lists the model prediction of a counterfactual's class given those changed features.
-9. **Create your own counterfactual**: allows you to perturb your own features to modify the counterfactual, features that have been changed from the original feature value will be denoted by the title being bolded (ex. Employer and Programming language). Clicking on ΓÇ£See prediction deltaΓÇ¥ will show you the difference in the new prediction value from the original datapoint.
+9. **Create your own counterfactual**: allows you to perturb your own features to modify the counterfactual; features that have been changed from the original feature value are denoted by a bolded title (for example, Employer and Programming language). Selecting "See prediction delta" will show you the difference in the new prediction value from the original datapoint.
10. **What-if counterfactual name**: allows you to name the counterfactual uniquely. 11. **Save as new datapoint**: saves the counterfactual you've created.
Selecting the **Create what-if counterfactual** button opens a full window panel
#### Aggregate causal effects
-Selecting on the **Aggregate causal effects** tab of the Causal analysis component shows the average causal effects for pre-defined treatment features (the features that you want to treat to optimize your outcome).
+Selecting the **Aggregate causal effects** tab of the Causal analysis component shows the average causal effects for pre-defined treatment features (the features that you want to treat to optimize your outcome).
> [!NOTE] > Global cohort functionality is not supported for the causal analysis component.
To get a granular view of causal effects on an individual datapoint, switch to t
3. **Individual causal scatter plot**: visualizes points in table as scatter plot to select datapoint for analyzing causal-what-if and viewing the individual causal effects below 4. **Set new treatment value** 1. **(numerical)**: shows slider to change the value of the numerical feature as a real-world intervention.
- 1. **(categorical)**: shows dropdown to select the value of the categorical feature.
+ 1. **(categorical)**: shows drop-down to select the value of the categorical feature.
#### Treatment policy
Selecting the Treatment policy tab switches to a view to help determine real-wor
- Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md). - View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python. - Explore the features of the Responsible AI Dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)-- Learn more about how the Responsible AI Dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)-- Learn about how the Responsible AI Dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
+- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn about how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
machine-learning How To Responsible Ai Scorecard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-responsible-ai-scorecard.md
Previously updated : 05/10/2022 Last updated : 08/17/2022
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-Azure Machine Learning's Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions, and while it can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:
+Azure Machine Learning's Responsible AI scorecard is a PDF report generated based on our Responsible AI dashboard insights and customizations to accompany your machine learning models. You can easily configure, download, and share your PDF scorecard with your technical and non-technical stakeholders to educate them about your data and model health and compliance, and to build trust. The scorecard can also be used in audit reviews to inform stakeholders about the characteristics of your model.
++
+## Why Responsible AI scorecard?
+
+Our Responsible AI dashboard is designed for machine learning professionals and data scientists to explore and evaluate model insights and inform their data-driven decisions, and while it can help you implement Responsible AI practically in your machine learning lifecycle, there are some needs left unaddressed:
- There often exists a gap between the technical Responsible AI tools (designed for machine-learning professionals) and the ethical, regulatory, and business requirements that define the production environment. - While an end-to-end machine learning life cycle includes both technical and non-technical stakeholders in the loop, there's very little support to enable an effective multi-stakeholder alignment, helping technical experts get timely feedback and direction from the non-technical stakeholders. - AI regulations make it essential to be able to share model and data insights with auditors and risk officers for auditability purposes.
-One of the biggest benefits of using the Azure Machine Learning ecosystem is related to the archival of model and data insights in the Azure Machine Learning Job History (for quick reference in future). As a part of that infrastructure and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard, a customizable report that you can easily configure, download, and share with your technical and non-technical stakeholders to educate them about your data and model health and compliance and build trust. This scorecard could also be used in audit reviews to inform the stakeholders about the characteristics of your model.
+One of the biggest benefits of using the Azure Machine Learning ecosystem is the archival of model and data insights in the Azure Machine Learning Run History (for quick reference in the future). As part of that infrastructure, and to accompany machine learning models and their corresponding Responsible AI dashboards, we introduce the Responsible AI scorecard to empower ML professionals to generate and share their data and model health records easily.
## Who should use a Responsible AI scorecard?
-As a data scientist or machine learning professional, after you train a model and generate its corresponding Responsible AI dashboard for assessment and decision-making purposes, you can share your data and model health and ethical insights with non-technical stakeholders to build trust and gain their approval for deployment.
+- If you are a data scientist or a machine learning professional, after training your model and generating its corresponding Responsible AI dashboard(s) for assessment and decision-making purposes, you can extract those learnings via our PDF scorecard and share the report easily with your technical and non-technical stakeholders to build trust and gain their approval for deployment.
-As a technical or non-technical product owner of a model, you can pass some target values such as minimum accuracy, maximum error rate, etc., to your data science team, asking them to generate this scorecard with respect to your identified target values and whether your model meets them. That can provide guidance into whether the model should be deployed or further improved.
+- If you're a product manager, business leader, or an accountable stakeholder on an AI product, you can pass your desired model performance and fairness target values such as your target accuracy, target error rate, etc., to your data science team, asking them to generate this scorecard with respect to your identified target values and whether your model meets them. That can provide guidance into whether the model should be deployed or further improved.
-## How to generate a Responsible AI scorecard
+## How to generate a Responsible AI scorecard?
The configuration stage requires you to use your domain expertise around the problem to set your desired target values on model performance and fairness metrics.
-Like other Responsible AI dashboard components configured in the YAML pipeline, you can add a component to generate the scorecard in the YAML pipeline.
+Like other Responsible AI dashboard components [configured in the YAML pipeline](how-to-responsible-ai-dashboard-sdk-cli.md?tabs=yaml#responsible-ai-components), you can add a component to generate the scorecard in the YAML pipeline.
Where pdf_gen.json is the scorecard generation configuration json file and cohorts.json is the prebuilt cohorts definition json file.
scorecard_01:
Sample JSON for the cohorts definition and scorecard generation config can be found below:
-Cohorts definition:
+Cohorts definition:
```yml [ { "name": "High Yoe", "cohort_filter_list": [ - { "method": "greater", "arg": [
Cohorts definition:
] } ] - ```-
-Scorecard generation config:
+Scorecard generation config for a regression example:
```yml {
Scorecard generation config:
"age" ] },
+ "Fairness": {
+ "metric": ["mean_squared_error"],
+ "sensitive_features": ["YOUR SENSITIVE ATTRIBUTE"],
+ "fairness_evaluation_kind": "difference OR ratio"
+ },
"Cohorts": [ "High Yoe", "Low Yoe"
- ]
+ ]
} ```
+Scorecard generation config for a classification example:
+
+```yml
+{
+ "Model": {
+ "ModelName": "Housing Price Range Prediction",
+ "ModelType": "Classification",
+ "ModelSummary": "This model is a classifier predicting if the house sells for more than median price or not."
+ },
+ "Metrics": {
+ "accuracy_score": {
+ "threshold": ">=0.85"
+ }
+ },
+ "FeatureImportance": {
+ "top_n": 6
+ },
+ "DataExplorer": {
+ "features": [
+ "YearBuilt",
+ "OverallQual",
+ "GarageCars"
+ ]
+ },
+ "Fairness": {
+ "metric": ["accuracy_score", "selection_rate"],
+ "sensitive_features": ["YOUR SENSITIVE ATTRIBUTE"],
+ "fairness_evaluation_kind": "difference OR ratio"
+ }
+}
+```
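The `Fairness` section's `fairness_evaluation_kind` controls whether the scorecard reports the difference or the ratio of the chosen metric across groups of the sensitive feature. Roughly, the computation looks like the following sketch (the data and column names are illustrative, and this is an approximation of the behavior rather than the component's exact implementation):

```python
# Sketch: "difference" vs. "ratio" fairness evaluation of a metric across groups.
import pandas as pd
from sklearn.metrics import accuracy_score

df = pd.DataFrame({
    "sensitive_attribute": ["A", "A", "A", "B", "B", "B"],
    "y_true": [1, 0, 1, 1, 0, 1],
    "y_pred": [1, 0, 0, 1, 0, 0],
})

per_group = df.groupby("sensitive_attribute").apply(
    lambda g: accuracy_score(g["y_true"], g["y_pred"])
)

difference = per_group.max() - per_group.min()  # fairness_evaluation_kind: difference
ratio = per_group.min() / per_group.max()       # fairness_evaluation_kind: ratio
print(per_group.to_dict(), difference, ratio)
```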
+ ### Definition of inputs of the Responsible AI scorecard component
This section defines the list of parameters required to configure the Responsibl
| ModelName | Name of Model | |--|-|
-| ModelType | Values in ['classification', 'regression', 'multiclass']. |
+| ModelType | Values in ['classification', 'regression']. |
| ModelSummary | Input a blurb of text summarizing what the model is for. |
+> [!NOTE]
+> For multi-class classification, you should first use the One-vs-Rest strategy to choose your reference class, and then split your multi-class classification model into a binary classification problem for your selected reference class versus the rest of the classes.
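A minimal sketch of that One-vs-Rest reduction (the dataset and choice of reference class are illustrative):

```python
# Sketch: reduce a multi-class problem to "reference class vs. the rest" so the
# scorecard's binary classification configuration can be used.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

reference_class = 2                            # illustrative reference class
y_binary = (y == reference_class).astype(int)  # 1 = reference class, 0 = the rest

binary_model = LogisticRegression(max_iter=1000).fit(X, y_binary)
print(binary_model.predict_proba(X[:3]))
```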
+ #### Metrics | Performance Metric | Definition | Model Type |
Select which scorecard you'd like to download from the list and select Downloa
:::image type="content" source="./media/how-to-responsible-ai-scorecard/studio-select-scorecard.png" alt-text="Screenshot of selecting a Responsible A I scorecard to download." lightbox= "./media/how-to-responsible-ai-scorecard/studio-select-scorecard.png":::
-## How to read your Responsible AI scorecard
+## How to read your Responsible AI scorecard?
The Responsible AI scorecard is a PDF summary of your key insights from the Responsible AI dashboard. The first summary segment of the scorecard gives you an overview of the machine learning model and the key target values you have set to help all stakeholders determine if your model is ready to be deployed.
Finally, you can observe your dataset's causal insights summarized, figuring o
- See the how-to guide for generating a Responsible AI dashboard via [CLIv2 and SDKv2](how-to-responsible-ai-dashboard-sdk-cli.md) or [studio UI ](how-to-responsible-ai-dashboard-ui.md). - Learn more about the [concepts and techniques behind the Responsible AI dashboard](concept-responsible-ai-dashboard.md). - View [sample YAML and Python notebooks](https://aka.ms/RAIsamples) to generate a Responsible AI dashboard with YAML or Python.-- Learn more about how the Responsible AI Dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)-- See how the Responsible AI Dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)-- Explore the features of the Responsible AI Dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- Learn more about how the Responsible AI dashboard and Scorecard can be used to debug data and models and inform better decision making in this [tech community blog post](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
+- See how the Responsible AI dashboard and Scorecard were used by the NHS in a [real life customer story](https://aka.ms/NHSCustomerStory)
+- Explore the features of the Responsible AI dashboard through this [interactive AI Lab web demo](https://www.microsoft.com/ai/ai-lab-responsible-ai-dashboard)
machine-learning How To Retrain Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-retrain-designer.md
In this article, you learned how to create a parameterized training pipeline end
For a complete walkthrough of how you can deploy a model to make predictions, see the [designer tutorial](tutorial-designer-automobile-price-train-score.md) to train and deploy a regression model.
-For how to publish and submit a job to pipeline endpoint using SDK, see [this article](how-to-deploy-pipelines.md).
+For how to publish and submit a job to pipeline endpoint using the SDK v1, see [this article](v1/how-to-deploy-pipelines.md).
machine-learning How To Run Batch Predictions Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-batch-predictions-designer.md
If you make some modifications in your training pipeline, you may want to update
## Next steps * Follow the [designer tutorial to train and deploy a regression model](tutorial-designer-automobile-price-train-score.md).
-* For how to publish and run a published pipeline using SDK, see the [How to deploy pipelines](how-to-deploy-pipelines.md) article.
+* For how to publish and run a published pipeline using the SDK v1, see the [How to deploy pipelines](v1/how-to-deploy-pipelines.md) article.
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Using the following keystroke shortcuts, you can more easily navigate and run co
* **File upload limit**: When uploading a file through the notebook's file explorer, you are limited to files that are smaller than 5 TB. If you need to upload a file larger than this, we recommend that you use one of the following methods (see the sketch after this list): * Use the SDK to upload the data to a datastore. For more information, see the [Upload the data](./tutorial-1st-experiment-bring-data.md#upload) section of the tutorial.
- * Use [Azure Data Factory](how-to-data-ingest-adf.md) to create a data ingestion pipeline.
+ * Use [Azure Data Factory](v1/how-to-data-ingest-adf.md) to create a data ingestion pipeline.
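As a sketch of the first option (uploading to a datastore with the SDK), the following assumes the Azure ML Python SDK v1 and uses placeholder paths; adjust the folder names for your own workspace.

```python
from azureml.core import Workspace

# Placeholder workspace configuration and paths; requires azureml-core (SDK v1).
ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Upload large local files to the datastore instead of the notebook file explorer.
datastore.upload(
    src_dir="./local_data",
    target_path="datasets/large_files",
    overwrite=True,
    show_progress=True,
)
```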
## Next steps
machine-learning How To Save Write Experiment Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-save-write-experiment-files.md
The storage limit for experiment snapshots is 300 MB and/or 2000 files.
For this reason, we recommend:
-* **Storing your files in an Azure Machine Learning [dataset](/python/api/azureml-core/azureml.data).** This prevents experiment latency issues, and has the advantages of accessing data from a remote compute target, which means authentication and mounting are managed by Azure Machine Learning. Learn more about how to specify a dataset as your input data source in your training script with [Train with datasets](how-to-train-with-datasets.md).
+* **Storing your files in an Azure Machine Learning [dataset](/python/api/azureml-core/azureml.data).** This prevents experiment latency issues, and has the advantages of accessing data from a remote compute target, which means authentication and mounting are managed by Azure Machine Learning. Learn more about how to specify a dataset as your input data source in your training script with [Train with datasets](v1/how-to-train-with-datasets.md).
* **If you only need a couple data files and dependency scripts and can't use a datastore,** place the files in the same folder directory as your training script. Specify this folder as your `source_directory` directly in your training script, or in the code that calls your training script.
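To illustrate the first recommendation above (storing files in an Azure Machine Learning dataset), here is a minimal SDK v1 sketch; the paths, script name, and compute target name are placeholders.

```python
from azureml.core import Dataset, Experiment, ScriptRunConfig, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()

# Placeholder path: files previously uploaded to the datastore.
dataset = Dataset.File.from_files(path=(datastore, "datasets/large_files"))

# Mount the dataset on the compute target instead of copying data into the snapshot.
src = ScriptRunConfig(
    source_directory="./train_src",
    script="train.py",
    arguments=["--data-folder", dataset.as_named_input("training_data").as_mount()],
    compute_target="cpu-cluster",  # placeholder compute target name
)
Experiment(ws, "train-with-dataset").submit(src)
```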
Jupyter notebooks| Create a `.amlignore` file or move your notebook into a new,
Due to the isolation of training experiments, the changes to files that happen during jobs are not necessarily persisted outside of your environment. If your script modifies the files local to compute, the changes are not persisted for your next experiment job, and they're not propagated back to the client machine automatically. Therefore, the changes made during the first experiment job don't and shouldn't affect those in the second.
-When writing changes, we recommend writing files to storage via an Azure Machine Learning dataset with an [OutputFileDatasetConfig object](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig). See [how to create an OutputFileDatasetConfig](how-to-train-with-datasets.md#where-to-write-training-output).
+When writing changes, we recommend writing files to storage via an Azure Machine Learning dataset with an [OutputFileDatasetConfig object](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig). See [how to create an OutputFileDatasetConfig](v1/how-to-train-with-datasets.md#where-to-write-training-output).
Otherwise, write files to the `./outputs` and/or `./logs` folder.
machine-learning How To Search Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-search-assets.md
+
+ Title: Search for assets (preview)
+
+description: Find your Azure Machine Learning assets with search
+ Last updated : 07/14/2022
+# Search for Azure Machine Learning assets (preview)
+
+Use the search bar to find machine learning assets across all workspaces, resource groups, and subscriptions in your organization. Your search text will be used to find assets such as:
+
+* Jobs
+* Models
+* Components
+* Environments
+* Data
+
+> [!IMPORTANT]
+> The search functionality is currently in public preview.
+> The preview version is provided without a service level agreement.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Free text search
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+1. In the top studio titlebar, if a workspace is open, select **This workspace** or **All workspaces** to set the search context.
+
+ :::image type="content" source="media/how-to-search-assets/search-bar.png" alt-text="Screenshot: Shows search in titlebar.":::
+
+1. Type your text and press Enter to trigger a 'contains' search.
+A contains search scans across all metadata fields for the given asset and sorts results by relevancy score, which is determined by weightings for different column properties.
++
+## Structured search
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+1. In the top studio titlebar, select **All workspaces**.
+1. Click inside the search field to display filters to create more specific search queries.
++
+The following filters are supported:
+
+* Job
+* Model
+* Component
+* Tags
+* SubmittedBy
+* Environment
+* Data
+
+If an asset filter (job, model, component, environment, data) is present, results are scoped to those tabs. Other filters apply to all assets unless an asset filter is also present in the query. Similarly, free text search can be provided alongside filters, but it is scoped to the tabs chosen by asset filters, if present.
+
+> [!TIP]
+> * Filters search for exact matches of text. Use free text queries for a contains search.
+> * Quotation marks are required around values that include spaces or other special characters.
+> * If duplicate filters are provided, only the first will be recognized in search results.
+> * Input text of any language is supported, but filter strings must match the provided options (for example, `submittedBy:`).
+> * The tags filter can accept multiple key:value pairs separated by a comma (for example, `tags:"key1:value1, key2:value2"`).
+
+## View search results
+
+You can view your search results in the individual **Jobs**, **Models**, **Components**, **Environments**, and **Data** tabs. Select an asset to open its **Details** page in the context of the relevant workspace. Results from workspaces you don't have permissions to view aren't displayed.
++
+If you've used this feature in a previous update, a search result error may occur. Reselect your preferred workspaces in the Directory + Subscription + Workspace tab.
+
+> [!IMPORTANT]
+> Search results may be unexpected for multiword terms in other languages (for example, Chinese characters).
+
+## Customize search results
+
+You can create, save and share different views for your search results.
+
+1. On the search results page, select **Edit view**.
+
+ :::image type="content" source="media/how-to-search-assets/edit-view.png" alt-text="Screenshot: Edit view for search results.":::
+
+Use the menu to customize and create new views:
+
+|Item |Description |
+|||
+|Edit columns | Add, delete, and re-order columns in the current view's search results table |
+|Reset | Add all hidden columns back into the view |
+|Share | Displays a URL you can copy to share this view |
+|New... | Create a new view |
+|Clone | Clone the current view as a new view |
+
+Since each tab displays different columns, you can customize views separately for each tab.
+
+## Next steps
+
+* [What is an Azure Machine Learning workspace?](concept-workspace.md)
+* [Data in Azure Machine Learning](concept-data.md)
machine-learning How To Set Up Training Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-training-targets.md
See these notebooks for examples of configuring jobs for various training scenar
* **Job or experiment deletion**: Experiments can be archived by using the [Experiment.archive](/python/api/azureml-core/azureml.core.experiment%28class%29#archive--) method, or from the Experiment tab view in Azure Machine Learning studio client via the "Archive experiment" button. This action hides the experiment from list queries and views, but does not delete it.
- Permanent deletion of individual experiments or jobs is not currently supported. For more information on deleting Workspace assets, see [Export or delete your Machine Learning service workspace data](how-to-export-delete-data.md).
+ Permanent deletion of individual experiments or jobs is not currently supported. For more information on deleting Workspace assets, see [Export or delete your Machine Learning service workspace data](v1/how-to-export-delete-data.md).
* **Metric Document is too large**: Azure Machine Learning has internal limits on the size of metric objects that can be logged at once from a training job. If you encounter a "Metric Document is too large" error when logging a list-valued metric, try splitting the list into smaller chunks, for example:
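The following is an illustrative sketch of that chunking approach with the SDK v1 `Run` object; the metric name and chunk size are arbitrary placeholders.

```python
from azureml.core import Run

run = Run.get_context()

# Placeholder values; in practice this is the long list that triggered the error.
values = [float(i) for i in range(10000)]
chunk_size = 250  # arbitrary; tune to stay under the metric document size limit

# Log the list in smaller chunks instead of one oversized metric document.
for part, start in enumerate(range(0, len(values), chunk_size)):
    run.log_list(f"my_metric_part_{part}", values[start:start + chunk_size])
```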
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
To use the key when deploying a model to Azure Container Instance, create a new
For more information on creating and using a deployment configuration, see the following articles: * [AciWebservice.deploy_configuration()](/python/api/azureml-core/azureml.core.webservice.aci.aciwebservice#deploy-configuration-cpu-cores-none--memory-gb-none--tags-none--properties-none--description-none--location-none--auth-enabled-none--ssl-enabled-none--enable-app-insights-none--ssl-cert-pem-file-none--ssl-key-pem-file-none--ssl-cname-none--dns-name-label-none--primary-key-none--secondary-key-none--collect-model-data-none--cmk-vault-base-url-none--cmk-key-name-none--cmk-key-version-none-) reference
-* [Where and how to deploy](how-to-deploy-and-where.md)
+* [Where and how to deploy](v1/how-to-deploy-and-where.md)
* [Deploy a model to Azure Container Instances](v1/how-to-deploy-azure-container-instance.md) For more information on using a customer-managed key with ACI, see [Encrypt data with a customer-managed key](../container-instances/container-instances-encrypt-data.md#encrypt-data-with-a-customer-managed-key).
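As a brief sketch of such a deployment configuration using the `cmk_*` parameters from the reference above; the Key Vault URL, key name, and key version are placeholders.

```python
from azureml.core.webservice import AciWebservice

# Placeholders: point these at your own Key Vault and customer-managed key.
aci_config = AciWebservice.deploy_configuration(
    cpu_cores=1,
    memory_gb=1,
    cmk_vault_base_url="https://<your-key-vault>.vault.azure.net/",
    cmk_key_name="<your-key-name>",
    cmk_key_version="<your-key-version>",
)
```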
machine-learning How To Track Designer Experiments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-designer-experiments.md
After the pipeline run completes, you can see the *Mean_Absolute_Error* in the E
In this article, you learned how to use logs in the designer. For next steps, see these related articles:
-* Learn how to troubleshoot designer pipelines, see [Debug & troubleshoot ML pipelines](how-to-debug-pipelines.md#azure-machine-learning-designer).
+* Learn how to troubleshoot designer pipelines, see [Debug & troubleshoot ML pipelines](v1/how-to-debug-pipelines.md#azure-machine-learning-designer).
* Learn how to use the Python SDK to log metrics in the SDK authoring experience, see [Enable logging in Azure ML training runs](how-to-log-view-metrics.md). * Learn how to use [Execute Python Script](./algorithm-module-reference/execute-python-script.md) in the designer.
machine-learning How To Train Pytorch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-pytorch.md
src = ScriptRunConfig(source_directory=project_folder,
``` > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](v1/how-to-train-with-datasets.md).
For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
machine-learning How To Train Scikit Learn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-scikit-learn.md
In this tutorial, the [training script **train_iris.py**](https://github.com/Azu
Notes: - The provided training script shows how to log some metrics to your Azure ML run using the `Run` object within the script.-- The provided training script uses example data from the `iris = datasets.load_iris()` function. To use and access your own data, see [how to train with datasets](how-to-train-with-datasets.md) to make data available during training.
+- The provided training script uses example data from the `iris = datasets.load_iris()` function. To use and access your own data, see [how to train with datasets](v1/how-to-train-with-datasets.md) to make data available during training.
### Define your environment
run.wait_for_completion(show_output=True)
``` > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](v1/how-to-train-with-datasets.md).
### What happens during run execution As the run is executed, it goes through the following stages:
machine-learning How To Train Tensorflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-tensorflow.md
src = ScriptRunConfig(source_directory=script_folder,
``` > [!WARNING]
-> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory . Instead, access your data using an Azure ML [dataset](how-to-train-with-datasets.md).
+> Azure Machine Learning runs training scripts by copying the entire source directory. If you have sensitive data that you don't want to upload, use a [.ignore file](how-to-save-write-experiment-files.md#storage-limits-of-experiment-snapshots) or don't include it in the source directory. Instead, access your data using an Azure ML [dataset](v1/how-to-train-with-datasets.md).
For more information on configuring jobs with ScriptRunConfig, see [Configure and submit training runs](how-to-set-up-training-targets.md).
machine-learning How To Troubleshoot Auto Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-auto-ml.md
If the listed version is not a supported version:
## Data access
-For automated ML jobs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](how-to-train-with-datasets.md#azurefile-storage).
+For automated ML jobs, you need to ensure the file datastore that connects to your AzureFile storage has the appropriate authentication credentials. Otherwise, the following message results. Learn how to [update your data access authentication credentials](v1/how-to-train-with-datasets.md#azurefile-storage).
Error message: `Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.`
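One way to update those credentials with the SDK v1 is to re-register the file share datastore with an account key, as in the following sketch; the datastore, share, and account names are placeholders.

```python
from azureml.core import Datastore, Workspace

ws = Workspace.from_config()

# Placeholder names and key: re-register the file share datastore with valid credentials.
Datastore.register_azure_file_share(
    workspace=ws,
    datastore_name="workspacefilestore",
    file_share_name="<file-share-name>",
    account_name="<storage-account-name>",
    account_key="<storage-account-key>",
)
```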
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
Otherwise, you'll see a list of your recent automated ML experiments, including
Select **Next.**
- 1. The **Confirm details** form is a summary of the information previously populated in the **Basic info** and **Settings and preview** forms. You also have the option to create a data profile for your dataset using a profiling enabled compute. Learn more about [data profiling](how-to-connect-data-ui.md#profile).
+ 1. The **Confirm details** form is a summary of the information previously populated in the **Basic info** and **Settings and preview** forms. You also have the option to create a data profile for your dataset using a profiling enabled compute. Learn more about [data profiling](v1/how-to-connect-data-ui.md#profile).
Select **Next**. 1. Select your newly created dataset once it appears. You are also able to view a preview of the dataset and sample statistics.
Otherwise, you'll see a list of your recent automated ML experiments, including
Select **Create**. Creation of a new compute can take a few minutes. >[!NOTE]
- > Your compute name will indicate if the compute you select/create is *profiling enabled*. (See the section [data profiling](how-to-connect-data-ui.md#profile) for more details).
+ > Your compute name will indicate if the compute you select/create is *profiling enabled*. (See the section [data profiling](v1/how-to-connect-data-ui.md#profile) for more details).
Select **Next**.
machine-learning How To Use Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-event-grid.md
This example shows how to use event grid with an Azure Logic App to trigger retr
Before you begin, perform the following actions:
-* Set up a dataset monitor to [detect data drift](how-to-monitor-datasets.md) in a workspace
+* Set up a dataset monitor to [detect data drift](v1/how-to-monitor-datasets.md) in a workspace
* Create a published [Azure Data Factory pipeline](../data-factory/index.yml). In this example, a simple Data Factory pipeline is used to copy files into a blob store and run a published Machine Learning pipeline. For more information on this scenario, see how to set up a [Machine Learning step in Azure Data Factory](../data-factory/transform-data-machine-learning-service.md)
machine-learning How To Use Pipeline Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-parameter.md
Published endpoints are especially useful for retraining and batch prediction sc
In this article, you learned how to create pipeline parameters in the designer. Next, see how you can use pipeline parameters to [retrain models](how-to-retrain-designer.md) or perform [batch predictions](how-to-run-batch-predictions-designer.md).
-You can also learn how to [use pipelines programmatically with the SDK](how-to-deploy-pipelines.md).
+You can also learn how to [use pipelines programmatically with the SDK v1](v1/how-to-deploy-pipelines.md).
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-web-service.md
Use Azure Machine Learning pipeline endpoints to make predictions, retrain model
This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see the [migration overview article](migrate-overview.md). > [!NOTE]
-> This migration series focuses on the drag-and-drop designer. For more information on deploying models programmatically, see [Deploy machine learning models in Azure](how-to-deploy-and-where.md).
+> This migration series focuses on the drag-and-drop designer. For more information on deploying models programmatically, see [Deploy machine learning models in Azure](v1/how-to-deploy-and-where.md).
## Prerequisites
There are multiple ways to deploy a model in Azure Machine Learning. One of the
The designer converts the training pipeline into a real-time inference pipeline. A similar conversion also occurs in Studio (classic).
- In the designer, the conversion step also [registers the trained model to your Azure Machine Learning workspace](how-to-deploy-and-where.md#registermodel).
+ In the designer, the conversion step also [registers the trained model to your Azure Machine Learning workspace](v1/how-to-deploy-and-where.md#registermodel).
1. Select **Submit** to run the real-time inference pipeline, and verify that it runs successfully.
Use the following steps to publish a pipeline endpoint for batch prediction:
The designer converts the training pipeline into a batch inference pipeline. A similar conversion also occurs in Studio (classic).
- In the designer, this step also [registers the trained model to your Azure Machine Learning workspace](how-to-deploy-and-where.md#registermodel).
+ In the designer, this step also [registers the trained model to your Azure Machine Learning workspace](v1/how-to-deploy-and-where.md#registermodel).
1. Select **Submit** to run the batch inference pipeline and verify that it successfully completes.
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-register-dataset.md
If your data is already in a cloud storage service, and you want to keep your da
Use the following steps to register a dataset to Azure Machine Learning from a cloud service:
-1. [Create a datastore](how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
+1. [Create a datastore](v1/how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
-1. [Register a dataset](how-to-connect-data-ui.md#create-datasets). If you are migrating a Studio (classic) dataset, select the **Tabular** dataset setting.
+1. [Register a dataset](v1/how-to-connect-data-ui.md#create-datasets). If you are migrating a Studio (classic) dataset, select the **Tabular** dataset setting.
After you register a dataset in Azure Machine Learning, you can use it in designer:
After you register a dataset in Azure Machine Learning, you can use it in design
Use the following steps to import data directly to your designer pipeline:
-1. [Create a datastore](how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
+1. [Create a datastore](v1/how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
After you create the datastore, you can use the [**Import Data**](algorithm-module-reference/import-data.md) module in the designer to ingest data from it:
machine-learning Reference Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-kubernetes.md
This article contains reference information that may be useful when [configuring
## Prerequisites for ARO or OCP clusters ### Disable Security Enhanced Linux (SELinux)
-[AzureML dataset](./how-to-train-with-datasets.md) (used in AzureML training jobs) isn't supported on machines with SELinux enabled. Therefore, you need to disable `selinux` on all workers in order to use AzureML dataset.
+[AzureML dataset](v1/how-to-train-with-datasets.md) (used in AzureML training jobs) isn't supported on machines with SELinux enabled. Therefore, you need to disable `selinux` on all workers in order to use AzureML dataset.
### Privileged setup for ARO and OCP
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
* Model Profiling does not support 4 CPUs in the US-Arizona region. * Sample notebooks may not work in Azure Government if they need access to public data. * IP addresses: The CLI command used in the [required public internet access](how-to-secure-training-vnet.md#required-public-internet-access) instructions does not return IP ranges. Use the [Azure IP ranges and service tags for Azure Government](https://www.microsoft.com/download/details.aspx?id=57063) instead.
-* For scheduled pipelines, we also provide a blob-based trigger mechanism. This mechanism is not supported for CMK workspaces. For enabling a blob-based trigger for CMK workspaces, you have to do extra setup. For more information, see [Trigger a run of a machine learning pipeline from a Logic App](how-to-trigger-published-pipeline.md).
+* For scheduled pipelines, we also provide a blob-based trigger mechanism. This mechanism is not supported for CMK workspaces. For enabling a blob-based trigger for CMK workspaces, you have to do extra setup. For more information, see [Trigger a run of a machine learning pipeline from a Logic App](v1/how-to-trigger-published-pipeline.md).
* Firewalls: When using an Azure Government region, add the following hosts to your firewall setting: * For Arizona use: `usgovarizona.api.ml.azure.us`
machine-learning Resource Curated Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/resource-curated-environments.md
Azure ML pipeline training workflows that use AutoML automatically selects a cur
| AzureML-AutoML-GPU | GPU | No | | AzureML-AutoML-DNN-GPU | GPU | Yes |
-For more information on AutoML and Azure ML pipelines, see [use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md).
+For more information on AutoML and Azure ML pipelines, see [use automated ML in an Azure Machine Learning pipeline in Python](v1/how-to-use-automlstep-in-pipelines.md).
## Support Version updates for supported environments, including the base images they reference, are released every two weeks to address vulnerabilities no older than 30 days. Based on usage, some environments may be deprecated (hidden from the product but usable) to support more common machine learning scenarios.
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
See this article for steps on how to create a Power BI supported schema to facil
+ Learn more about [automated machine learning](concept-automated-ml.md). + For more information on classification metrics and charts, see the [Understand automated machine learning results](how-to-understand-automated-ml.md) article. + Learn more about [featurization](how-to-configure-auto-features.md#featurization).
-+ Learn more about [data profiling](how-to-connect-data-ui.md#profile).
++ Learn more about [data profiling](v1/how-to-connect-data-ui.md#profile). >[!NOTE] > This bike share dataset has been modified for this tutorial. This dataset was made available as part of a [Kaggle competition](https://www.kaggle.com/c/bike-sharing-demand/data) and was originally available via [Capital Bikeshare](https://www.capitalbikeshare.com/system-data). It can also be found within the [UCI Machine Learning Database](http://archive.ics.uci.edu/ml/datasets/Bike+Sharing+Dataset).<br><br>
machine-learning Tutorial First Experiment Automated Ml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-first-experiment-automated-ml.md
In this automated machine learning tutorial, you used Azure Machine Learning's a
+ Learn more about [automated machine learning](concept-automated-ml.md). + For more information on classification metrics and charts, see the [Understand automated machine learning results](how-to-understand-automated-ml.md) article. + Learn more about [featurization](how-to-configure-auto-features.md#featurization).
-+ Learn more about [data profiling](how-to-connect-data-ui.md#profile).
++ Learn more about [data profiling](v1/how-to-connect-data-ui.md#profile). >[!NOTE]
machine-learning Concept Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-data.md
Datasets can be created from local files, public urls, [Azure Open Datasets](htt
There are 2 types of datasets:
-+ A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs. If your data is already cleansed and ready to use in training experiments, you can [download or mount files](../how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) referenced by FileDatasets to your compute target.
++ A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs. If your data is already cleansed and ready to use in training experiments, you can [download or mount files](how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) referenced by FileDatasets to your compute target. + A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represents data in a tabular format by parsing the provided file or list of files. You can load a TabularDataset into a pandas or Spark DataFrame for further manipulation and cleansing. For a complete list of data formats you can create TabularDatasets from, see the [TabularDatasetFactory class](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory). Additional datasets capabilities can be found in the following documentation:
-+ [Version and track](../how-to-version-track-datasets.md) dataset lineage.
-+ [Monitor your dataset](../how-to-monitor-datasets.md) to help with data drift detection.
++ [Version and track](how-to-version-track-datasets.md) dataset lineage.++ [Monitor your dataset](how-to-monitor-datasets.md) to help with data drift detection. ## Work with your data
With datasets, you can accomplish a number of machine learning tasks through sea
+ Train machine learning models: + [automated ML experiments](../how-to-use-automated-ml-for-ml-models.md) + the [designer](../tutorial-designer-automobile-price-train-score.md#import-data)
- + [notebooks](../how-to-train-with-datasets.md)
- + [Azure Machine Learning pipelines](../how-to-create-machine-learning-pipelines.md)
-+ Access datasets for scoring with [batch inference](../tutorial-pipeline-batch-scoring-classification.md) in [machine learning pipelines](../how-to-create-machine-learning-pipelines.md).
+ + [notebooks](how-to-train-with-datasets.md)
+ + [Azure Machine Learning pipelines](how-to-create-machine-learning-pipelines.md)
++ Access datasets for scoring with [batch inference](../tutorial-pipeline-batch-scoring-classification.md) in [machine learning pipelines](how-to-create-machine-learning-pipelines.md). + Set up a dataset monitor for [data drift](#monitor-model-performance-with-data-drift) detection.
Create an [image labeling project](../how-to-create-image-labeling-projects.md)
In the context of machine learning, data drift is the change in model input data that leads to model performance degradation. It is one of the top reasons model accuracy degrades over time; monitoring data drift therefore helps detect model performance issues.
-See the [Create a dataset monitor](../how-to-monitor-datasets.md) article, to learn more about how to detect and alert to data drift on new data in a dataset.
+See the [Create a dataset monitor](how-to-monitor-datasets.md) article, to learn more about how to detect and alert to data drift on new data in a dataset.
## Next steps
machine-learning Concept Model Management And Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-model-management-and-deployment.md
Previously updated : 08/15/2022 Last updated : 08/18/2022 # MLOps: Model management, deployment, lineage, and monitoring with Azure Machine Learning v1
Registered models are identified by name and version. Each time you register a m
> You can also register models trained outside Azure Machine Learning. You can't delete a registered model that is being used in an active deployment.
-For more information, see the register model section of [Deploy models](../how-to-deploy-and-where.md#registermodel).
+For more information, see the register model section of [Deploy models](how-to-deploy-and-where.md#registermodel).
> [!IMPORTANT] > When using Filter by `Tags` option on the Models page of Azure Machine Learning Studio, instead of using `TagName : TagValue` customers should use `TagName=TagValue` (without space)
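For reference, a minimal SDK v1 sketch of registering a model with tags is shown below; the path and names are placeholders, and the tags are what the `TagName=TagValue` filter matches on.

```python
from azureml.core import Model, Workspace

ws = Workspace.from_config()

# Placeholder path and names; registering the same name again increments the version.
model = Model.register(
    workspace=ws,
    model_path="outputs/model.pkl",
    model_name="sklearn-regression",
    tags={"area": "diabetes", "type": "regression"},
)
print(model.name, model.version)
```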
To deploy the model as a web service, you must provide the following items:
* Dependencies required to use the model. For example, a script that accepts requests and invokes the model, conda dependencies, etc. * Deployment configuration that describes how and where to deploy the model.
-For more information, see [Deploy models](../how-to-deploy-and-where.md).
+For more information, see [Deploy models](how-to-deploy-and-where.md).
#### Controlled rollout
For more information, see [How to enable model data collection](how-to-enable-da
## Retrain your model on new data
-Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](../how-to-monitor-datasets.md), model performance can degrade in the face of such things as changes to a particular sensor, natural data changes such as seasonal effects, or features shifting in their relation to other features.
+Often, you'll want to validate your model, update it, or even retrain it from scratch, as you receive new information. Sometimes, receiving new data is an expected part of the domain. Other times, as discussed in [Detect data drift (preview) on datasets](how-to-monitor-datasets.md), model performance can degrade in the face of such things as changes to a particular sensor, natural data changes such as seasonal effects, or features shifting in their relation to other features.
There is no universal answer to "How do I know if I should retrain?" but Azure ML event and monitoring tools previously discussed are good starting points for automation. Once you have decided to retrain, you should:
For more information on using Azure Pipelines with Azure Machine Learning, see t
* [Azure Machine Learning MLOps](https://aka.ms/mlops) repository * [Azure Machine Learning MLOpsPython](https://github.com/Microsoft/MLOpspython) repository
-You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](../how-to-cicd-data-ingestion.md).
+You can also use Azure Data Factory to create a data ingestion pipeline that prepares data for use with training. For more information, see [Data ingestion pipeline](how-to-cicd-data-ingestion.md).
## Next steps Learn more by reading and exploring the following resources:
-+ [How & where to deploy models](../how-to-deploy-and-where.md) with Azure Machine Learning
++ [How & where to deploy models](how-to-deploy-and-where.md) with Azure Machine Learning + [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md).
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
Datastores securely connect to your storage service on Azure without putting you
To understand where datastores fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
-For a low code experience, see how to use the [Azure Machine Learning studio to create and register datastores](../how-to-connect-data-ui.md#create-datastores).
+For a low code experience, see how to use the [Azure Machine Learning studio to create and register datastores](how-to-connect-data-ui.md#create-datastores).
>[!TIP] > This article assumes you want to connect to your storage service with credential-based authentication credentials, like a service principal or a shared access signature (SAS) token. Keep in mind, if credentials are registered with datastores, all users with workspace *Reader* role are able to retrieve these credentials. [Learn more about workspace *Reader* role.](../how-to-assign-roles.md#default-roles).
Within this section are examples for how to create and register a datastore via
To create datastores for other supported storage services, see the [reference documentation for the applicable `register_azure_*` methods](/python/api/azureml-core/azureml.core.datastore.datastore#methods).
-If you prefer a low code experience, see [Connect to data with Azure Machine Learning studio](../how-to-connect-data-ui.md).
+If you prefer a low code experience, see [Connect to data with Azure Machine Learning studio](how-to-connect-data-ui.md).
>[!IMPORTANT] > If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](../../key-vault/general/soft-delete-change.md#turn-on-soft-delete-for-an-existing-key-vault).
If you prefer to create and manage datastores using the Azure Machine Learning V
After you create a datastore, [create an Azure Machine Learning dataset](how-to-create-register-datasets.md) to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training.
-With datasets, you can [download or mount](../how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services for model training on a compute target. [Learn more about how to train ML models with datasets](../how-to-train-with-datasets.md).
+With datasets, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services for model training on a compute target. [Learn more about how to train ML models with datasets](how-to-train-with-datasets.md).
Azure Machine Learning provides several ways to use your models for scoring. Som
| Method | Datastore access | Description |
| -- | :--: | -- |
| [Batch prediction](../tutorial-pipeline-batch-scoring-classification.md) | ✔ | Make predictions on large quantities of data asynchronously. |
-| [Web service](../how-to-deploy-and-where.md) | &nbsp; | Deploy models as a web service. |
+| [Web service](how-to-deploy-and-where.md) | &nbsp; | Deploy models as a web service. |
For situations where the SDK doesn't provide access to datastores, you might be able to create custom code by using the relevant Azure SDK to access the data. For example, the [Azure Storage SDK for Python](https://github.com/Azure/azure-storage-python) is a client library that you can use to access data stored in blobs or files.
Azure Data Factory provides efficient and resilient data transfer with more than
* [Create an Azure machine learning dataset](how-to-create-register-datasets.md) * [Train a model](../how-to-set-up-training-targets.md)
-* [Deploy a model](../how-to-deploy-and-where.md)
+* [Deploy a model](how-to-deploy-and-where.md)
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
In this article, learn how to set up your workspace to use these compute resourc
* Apache Spark pools (powered by Azure Synapse Analytics) * Azure HDInsight * Azure Batch
-* Azure Databricks - used as a training compute target only in [machine learning pipelines](../how-to-create-machine-learning-pipelines.md)
+* Azure Databricks - used as a training compute target only in [machine learning pipelines](how-to-create-machine-learning-pipelines.md)
* Azure Data Lake Analytics * Azure Container Instance * Azure Machine Learning Kubernetes
machine-learning How To Cicd Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-cicd-data-ingestion.md
+
+ Title: DevOps for a data ingestion pipeline
+
+description: Learn how to apply DevOps practices to build a data ingestion pipeline to prepare data using Azure Data Factory and Azure Databricks.
+ Last updated : 08/17/2022
+# Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.
+++
+# DevOps for a data ingestion pipeline
+
+In most scenarios, a data ingestion solution is a composition of scripts, service invocations, and a pipeline orchestrating all the activities. In this article, you learn how to apply DevOps practices to the development lifecycle of a common data ingestion pipeline that prepares data for machine learning model training. The pipeline is built using the following Azure services:
+
+* __Azure Data Factory__: Reads the raw data and orchestrates data preparation.
+* __Azure Databricks__: Runs a Python notebook that transforms the data.
+* __Azure Pipelines__: Automates a continuous integration and development process.
+
+## Data ingestion pipeline workflow
+
+The data ingestion pipeline implements the following workflow:
+
+1. Raw data is read into an Azure Data Factory (ADF) pipeline.
+1. The ADF pipeline sends the data to an Azure Databricks cluster, which runs a Python notebook to transform the data.
+1. The data is stored to a blob container, where it can be used by Azure Machine Learning to train a model.
+
+![data ingestion pipeline workflow](media/how-to-cicd-data-ingestion/data-ingestion-pipeline.png)
+
+## Continuous integration and delivery overview
+
+As with many software solutions, there is a team (for example, Data Engineers) working on it. They collaborate and share the same Azure resources such as Azure Data Factory, Azure Databricks, and Azure Storage accounts. The collection of these resources is a Development environment. The data engineers contribute to the same source code base.
+
+A continuous integration and delivery system automates the process of building, testing, and delivering (deploying) the solution. The Continuous Integration (CI) process performs the following tasks:
+
+* Assembles the code
+* Checks it with the code quality tests
+* Runs unit tests
+* Produces artifacts such as tested code and Azure Resource Manager templates
+
+The Continuous Delivery (CD) process deploys the artifacts to the downstream environments.
+
+![cicd data ingestion diagram](media/how-to-cicd-data-ingestion/cicd-data-ingestion.png)
+
+This article demonstrates how to automate the CI and CD processes with [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/).
+
+## Source control management
+
+Source control management is needed to track changes and enable collaboration between team members.
+For example, the code might be stored in an Azure DevOps, GitHub, or GitLab repository. The collaboration workflow is based on a branching model, for example, [GitFlow](https://datasift.github.io/gitflow/IntroducingGitFlow.html).
+
+### Python Notebook Source Code
+
+The data engineers work with the Python notebook source code either locally in an IDE (for example, [Visual Studio Code](https://code.visualstudio.com)) or directly in the Databricks workspace. Once the code changes are complete, they are merged to the repository following a branching policy.
+
+> [!TIP]
+> We recommend storing the code in `.py` files rather than in `.ipynb` Jupyter Notebook format. It improves code readability and enables automatic code quality checks in the CI process.
+
+### Azure Data Factory Source Code
+
+The source code of Azure Data Factory pipelines is a collection of JSON files generated by an Azure Data Factory workspace. Normally the data engineers work with a visual designer in the Azure Data Factory workspace rather than with the source code files directly.
+
+To configure the workspace to use a source control repository, see [Author with Azure Repos Git integration](/azure/data-factory/source-control#author-with-azure-repos-git-integration).
+
+## Continuous integration (CI)
+
+The ultimate goal of the Continuous Integration process is to gather the team's joint work from the source code and prepare it for the deployment to the downstream environments. As with source code management, this process is different for the Python notebooks and the Azure Data Factory pipelines.
+
+### Python Notebook CI
+
+The CI process for the Python Notebooks gets the code from the collaboration branch (for example, ***master*** or ***develop***) and performs the following activities:
+* Code linting
+* Unit testing
+* Saving the code as an artifact
+
+The following code snippet demonstrates the implementation of these steps in an Azure DevOps ***yaml*** pipeline:
+
+```yaml
+steps:
+- script: |
+ flake8 --output-file=$(Build.BinariesDirectory)/lint-testresults.xml --format junit-xml
+ workingDirectory: '$(Build.SourcesDirectory)'
+ displayName: 'Run flake8 (code style analysis)'
+
+- script: |
+ python -m pytest --junitxml=$(Build.BinariesDirectory)/unit-testresults.xml $(Build.SourcesDirectory)
+ displayName: 'Run unit tests'
+
+- task: PublishTestResults@2
+ condition: succeededOrFailed()
+ inputs:
+ testResultsFiles: '$(Build.BinariesDirectory)/*-testresults.xml'
+ testRunTitle: 'Linting & Unit tests'
+ failTaskOnFailedTests: true
+ displayName: 'Publish linting and unit test results'
+
+- publish: $(Build.SourcesDirectory)
+ artifact: di-notebooks
+```
+
+The pipeline uses [flake8](https://pypi.org/project/flake8/) to do the Python code linting. It runs the unit tests defined in the source code and publishes the linting and test results so they're available in the Azure Pipeline execution screen.
+
+If the linting and unit tests succeed, the pipeline copies the source code to the artifact repository to be used by the subsequent deployment steps.
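As an example of the kind of test the unit-testing step could pick up, the following sketch defines a hypothetical data-preparation helper and its test in a single `test_data_prep.py` file; the real notebook code will differ.

```python
# test_data_prep.py -- hypothetical example; the helper stands in for real notebook code.
import pandas as pd


def drop_incomplete_rows(df: pd.DataFrame) -> pd.DataFrame:
    """Remove rows with missing values before the prepared data is written out."""
    return df.dropna().reset_index(drop=True)


def test_drop_incomplete_rows_removes_rows_with_missing_values():
    df = pd.DataFrame({"target": [1.0, None, 3.0], "feature": [0.1, 0.2, None]})
    result = drop_incomplete_rows(df)
    assert len(result) == 1
    assert result.loc[0, "target"] == 1.0
```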
+
+### Azure Data Factory CI
+
+The CI process for an Azure Data Factory pipeline is a bottleneck for a data ingestion pipeline.
+There's no automated continuous integration. A deployable artifact for Azure Data Factory is a collection of Azure Resource Manager templates. The only way to produce those templates is to click the ***publish*** button in the Azure Data Factory workspace.
+
+1. The data engineers merge the source code from their feature branches into the collaboration branch, for example, ***master*** or ***develop***.
+1. Someone with the required permissions clicks the ***publish*** button to generate Azure Resource Manager templates from the source code in the collaboration branch.
+1. The workspace validates the pipelines (think of it as linting and unit testing), generates Azure Resource Manager templates (think of it as building), and saves the generated templates to a technical branch ***adf_publish*** in the same code repository (think of it as publishing artifacts). This branch is created automatically by the Azure Data Factory workspace.
+
+For more information on this process, see [Continuous integration and delivery in Azure Data Factory](/azure/data-factory/continuous-integration-delivery).
+
+It's important to make sure that the generated Azure Resource Manager templates are environment-agnostic. This means that all values that may differ between environments are parameterized. Azure Data Factory is smart enough to expose the majority of such values as parameters. For example, in the following template the connection properties to an Azure Machine Learning workspace are exposed as parameters:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "factoryName": {
+ "value": "devops-ds-adf"
+ },
+ "AzureMLService_servicePrincipalKey": {
+ "value": ""
+ },
+ "AzureMLService_properties_typeProperties_subscriptionId": {
+ "value": "0fe1c235-5cfa-4152-17d7-5dff45a8d4ba"
+ },
+ "AzureMLService_properties_typeProperties_resourceGroupName": {
+ "value": "devops-ds-rg"
+ },
+ "AzureMLService_properties_typeProperties_servicePrincipalId": {
+ "value": "6e35e589-3b22-4edb-89d0-2ab7fc08d488"
+ },
+ "AzureMLService_properties_typeProperties_tenant": {
+ "value": "72f988bf-86f1-41af-912b-2d7cd611db47"
+ }
+ }
+}
+```
+
+However, you may want to expose custom properties that the Azure Data Factory workspace doesn't handle by default. In the scenario of this article, an Azure Data Factory pipeline invokes a Python notebook that processes the data. The notebook accepts a parameter with the name of an input data file.
+
+```Python
+import pandas as pd
+import numpy as np
+
+data_file_name = getArgument("data_file_name")
+data = pd.read_csv(data_file_name)
+
+labels = np.array(data['target'])
+...
+```
+
+This name is different for ***Dev***, ***QA***, ***UAT***, and ***PROD*** environments. In a complex pipeline with multiple activities, there can be several custom properties. It's good practice to collect all those values in one place and define them as pipeline ***variables***:
+
+![Screenshot shows a Notebook called PrepareData and M L Execute Pipeline called M L Execute Pipeline at the top with the Variables tab selected below with the option to add new variables, each with a name, type, and default value.](media/how-to-cicd-data-ingestion/adf-variables.png)
+
+The pipeline activities refer to these pipeline variables when they actually use them:
+
+![Screenshot shows a Notebook called PrepareData and M L Execute Pipeline called M L Execute Pipeline at the top with the Settings tab selected below.](media/how-to-cicd-data-ingestion/adf-notebook-parameters.png)
+
+The Azure Data Factory workspace ***doesn't*** expose pipeline variables as Azure Resource Manager template parameters by default. The workspace uses the [Default Parameterization Template](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters) to dictate which pipeline properties should be exposed as Azure Resource Manager template parameters. To add pipeline variables to the list, update the `"Microsoft.DataFactory/factories/pipelines"` section of the [Default Parameterization Template](/azure/data-factory/continuous-integration-delivery-resource-manager-custom-parameters) with the following snippet and place the resulting JSON file in the root of the source folder:
+
+```json
+"Microsoft.DataFactory/factories/pipelines": {
+ "properties": {
+ "variables": {
+ "*": {
+ "defaultValue": "="
+ }
+ }
+ }
+ }
+```
+
+Doing so will force the Azure Data Factory workspace to add the variables to the parameters list when the ***publish*** button is clicked:
+
+```json
+{
+ "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
+ "contentVersion": "1.0.0.0",
+ "parameters": {
+ "factoryName": {
+ "value": "devops-ds-adf"
+ },
+ ...
+ "data-ingestion-pipeline_properties_variables_data_file_name_defaultValue": {
+ "value": "driver_prediction_train.csv"
+ }
+ }
+}
+```
+
+The values in the JSON file are default values configured in the pipeline definition. They're expected to be overridden with the target environment values when the Azure Resource Manager template is deployed.
+
+## Continuous delivery (CD)
+
+The Continuous Delivery process takes the artifacts and deploys them to the first target environment. It makes sure that the solution works by running tests. If successful, it continues to the next environment.
+
+The CD Azure Pipeline consists of multiple stages representing the environments. Each stage contains [deployments](/azure/devops/pipelines/process/deployment-jobs) and [jobs](/azure/devops/pipelines/process/phases?tabs=yaml) that perform the following steps:
+
+* Deploy a Python Notebook to Azure Databricks workspace
+* Deploy an Azure Data Factory pipeline
+* Run the pipeline
+* Check the data ingestion result
+
+The pipeline stages can be configured with [approvals](/azure/devops/pipelines/process/approvals?tabs=check-pass) and [gates](/azure/devops/pipelines/release/approvals/gates) that provide additional control on how the deployment process evolves through the chain of environments.
+
+### Deploy a Python Notebook
+
+The following code snippet defines an Azure Pipeline [deployment](/azure/devops/pipelines/process/deployment-jobs) that copies a Python notebook to a Databricks cluster:
+
+```yaml
+- stage: 'Deploy_to_QA'
+ displayName: 'Deploy to QA'
+ variables:
+ - group: devops-ds-qa-vg
+ jobs:
+ - deployment: "Deploy_to_Databricks"
+ displayName: 'Deploy to Databricks'
+ timeoutInMinutes: 0
+ environment: qa
+ strategy:
+ runOnce:
+ deploy:
+ steps:
+ - task: UsePythonVersion@0
+ inputs:
+ versionSpec: '3.x'
+ addToPath: true
+ architecture: 'x64'
+ displayName: 'Use Python3'
+
+ - task: configuredatabricks@0
+ inputs:
+ url: '$(DATABRICKS_URL)'
+ token: '$(DATABRICKS_TOKEN)'
+ displayName: 'Configure Databricks CLI'
+
+ - task: deploynotebooks@0
+ inputs:
+ notebooksFolderPath: '$(Pipeline.Workspace)/di-notebooks'
+ workspaceFolder: '/Shared/devops-ds'
+ displayName: 'Deploy (copy) data processing notebook to the Databricks cluster'
+```
+
+The artifacts produced by the CI are automatically copied to the deployment agent and are available in the `$(Pipeline.Workspace)` folder. In this case, the deployment task refers to the `di-notebooks` artifact containing the Python notebook. This [deployment](/azure/devops/pipelines/process/deployment-jobs) uses the [Databricks Azure DevOps extension](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks) to copy the notebook files to the Databricks workspace.
+
+The `Deploy_to_QA` stage contains a reference to the `devops-ds-qa-vg` variable group defined in the Azure DevOps project. The steps in this stage refer to the variables from this variable group (for example, `$(DATABRICKS_URL)` and `$(DATABRICKS_TOKEN)`). The idea is that the next stage (for example, `Deploy_to_UAT`) will operate with the same variable names defined in its own UAT-scoped variable group.
+
+### Deploy an Azure Data Factory pipeline
+
+A deployable artifact for Azure Data Factory is an Azure Resource Manager template. It's deployed with the ***Azure Resource Group Deployment*** task, as demonstrated in the following snippet:
+
+```yaml
+ - deployment: "Deploy_to_ADF"
+ displayName: 'Deploy to ADF'
+ timeoutInMinutes: 0
+ environment: qa
+ strategy:
+ runOnce:
+ deploy:
+ steps:
+ - task: AzureResourceGroupDeployment@2
+ displayName: 'Deploy ADF resources'
+ inputs:
+ azureSubscription: $(AZURE_RM_CONNECTION)
+ resourceGroupName: $(RESOURCE_GROUP)
+ location: $(LOCATION)
+ csmFile: '$(Pipeline.Workspace)/adf-pipelines/ARMTemplateForFactory.json'
+ csmParametersFile: '$(Pipeline.Workspace)/adf-pipelines/ARMTemplateParametersForFactory.json'
+ overrideParameters: -data-ingestion-pipeline_properties_variables_data_file_name_defaultValue "$(DATA_FILE_NAME)"
+```
+The value of the data filename parameter comes from the `$(DATA_FILE_NAME)` variable defined in a QA stage variable group. Similarly, all parameters defined in ***ARMTemplateForFactory.json*** can be overridden. If they are not, then the default values are used.
+
+### Run the pipeline and check the data ingestion result
+
+The next step is to make sure that the deployed solution works. The following job definition runs an Azure Data Factory pipeline with a [PowerShell script](https://github.com/microsoft/DataOps/tree/master/adf/utils) and executes a Python notebook on an Azure Databricks cluster. The notebook checks whether the data has been ingested correctly and validates the result data file named `$(bin_FILE_NAME)`.
+
+```yaml
+ - job: "Integration_test_job"
+ displayName: "Integration test job"
+ dependsOn: [Deploy_to_Databricks, Deploy_to_ADF]
+ pool:
+ vmImage: 'ubuntu-latest'
+ timeoutInMinutes: 0
+ steps:
+ - task: AzurePowerShell@4
+ displayName: 'Execute ADF Pipeline'
+ inputs:
+ azureSubscription: $(AZURE_RM_CONNECTION)
+ ScriptPath: '$(Build.SourcesDirectory)/adf/utils/Invoke-ADFPipeline.ps1'
+ ScriptArguments: '-ResourceGroupName $(RESOURCE_GROUP) -DataFactoryName $(DATA_FACTORY_NAME) -PipelineName $(PIPELINE_NAME)'
+ azurePowerShellVersion: LatestVersion
+ - task: UsePythonVersion@0
+ inputs:
+ versionSpec: '3.x'
+ addToPath: true
+ architecture: 'x64'
+ displayName: 'Use Python3'
+
+ - task: configuredatabricks@0
+ inputs:
+ url: '$(DATABRICKS_URL)'
+ token: '$(DATABRICKS_TOKEN)'
+ displayName: 'Configure Databricks CLI'
+
+ - task: executenotebook@0
+ inputs:
+ notebookPath: '/Shared/devops-ds/test-data-ingestion'
+ existingClusterId: '$(DATABRICKS_CLUSTER_ID)'
+ executionParams: '{"bin_file_name":"$(bin_FILE_NAME)"}'
+ displayName: 'Test data ingestion'
+
+ - task: waitexecution@0
+ displayName: 'Wait until the testing is done'
+```
+
+The final task in the job checks the result of the notebook execution. If the notebook run returns an error, it sets the status of the pipeline execution to failed.
+
+## Putting pieces together
+
+The complete CI/CD Azure Pipeline consists of the following stages:
+* CI
+* Deploy To QA
+ * Deploy to Databricks + Deploy to ADF
+ * Integration Test
+
+It contains one ***Deploy*** stage per target environment. Each ***Deploy*** stage contains two [deployments](/azure/devops/pipelines/process/deployment-jobs) that run in parallel and a [job](/azure/devops/pipelines/process/phases?tabs=yaml) that runs after the deployments to test the solution on the environment.
+
+The following ***YAML*** snippet assembles a sample implementation of the pipeline:
+
+```yaml
+variables:
+- group: devops-ds-vg
+
+stages:
+- stage: 'CI'
+ displayName: 'CI'
+ jobs:
+ - job: "CI_Job"
+ displayName: "CI Job"
+ pool:
+ vmImage: 'ubuntu-latest'
+ timeoutInMinutes: 0
+ steps:
+ - task: UsePythonVersion@0
+ inputs:
+ versionSpec: '3.x'
+ addToPath: true
+ architecture: 'x64'
+ displayName: 'Use Python3'
+ - script: pip install --upgrade flake8 flake8_formatter_junit_xml
+ displayName: 'Install flake8'
+ - checkout: self
+ - script: |
+ flake8 --output-file=$(Build.BinariesDirectory)/lint-testresults.xml --format junit-xml
+ workingDirectory: '$(Build.SourcesDirectory)'
+ displayName: 'Run flake8 (code style analysis)'
+ - script: |
+ python -m pytest --junitxml=$(Build.BinariesDirectory)/unit-testresults.xml $(Build.SourcesDirectory)
+ displayName: 'Run unit tests'
+ - task: PublishTestResults@2
+ condition: succeededOrFailed()
+ inputs:
+ testResultsFiles: '$(Build.BinariesDirectory)/*-testresults.xml'
+ testRunTitle: 'Linting & Unit tests'
+ failTaskOnFailedTests: true
+ displayName: 'Publish linting and unit test results'
+
+ # The CI stage produces two artifacts (notebooks and ADF pipelines).
+ # The pipelines Azure Resource Manager templates are stored in a technical branch "adf_publish"
+ - publish: $(Build.SourcesDirectory)/$(Build.Repository.Name)/code/dataingestion
+ artifact: di-notebooks
+ - checkout: git://${{variables['System.TeamProject']}}@adf_publish
+ - publish: $(Build.SourcesDirectory)/$(Build.Repository.Name)/devops-ds-adf
+ artifact: adf-pipelines
+
+- stage: 'Deploy_to_QA'
+ displayName: 'Deploy to QA'
+ variables:
+ - group: devops-ds-qa-vg
+ jobs:
+ - deployment: "Deploy_to_Databricks"
+ displayName: 'Deploy to Databricks'
+ timeoutInMinutes: 0
+ environment: qa
+ strategy:
+ runOnce:
+ deploy:
+ steps:
+ - task: UsePythonVersion@0
+ inputs:
+ versionSpec: '3.x'
+ addToPath: true
+ architecture: 'x64'
+ displayName: 'Use Python3'
+
+ - task: configuredatabricks@0
+ inputs:
+ url: '$(DATABRICKS_URL)'
+ token: '$(DATABRICKS_TOKEN)'
+ displayName: 'Configure Databricks CLI'
+
+ - task: deploynotebooks@0
+ inputs:
+ notebooksFolderPath: '$(Pipeline.Workspace)/di-notebooks'
+ workspaceFolder: '/Shared/devops-ds'
+ displayName: 'Deploy (copy) data processing notebook to the Databricks cluster'
+ - deployment: "Deploy_to_ADF"
+ displayName: 'Deploy to ADF'
+ timeoutInMinutes: 0
+ environment: qa
+ strategy:
+ runOnce:
+ deploy:
+ steps:
+ - task: AzureResourceGroupDeployment@2
+ displayName: 'Deploy ADF resources'
+ inputs:
+ azureSubscription: $(AZURE_RM_CONNECTION)
+ resourceGroupName: $(RESOURCE_GROUP)
+ location: $(LOCATION)
+ csmFile: '$(Pipeline.Workspace)/adf-pipelines/ARMTemplateForFactory.json'
+ csmParametersFile: '$(Pipeline.Workspace)/adf-pipelines/ARMTemplateParametersForFactory.json'
+ overrideParameters: -data-ingestion-pipeline_properties_variables_data_file_name_defaultValue "$(DATA_FILE_NAME)"
+ - job: "Integration_test_job"
+ displayName: "Integration test job"
+ dependsOn: [Deploy_to_Databricks, Deploy_to_ADF]
+ pool:
+ vmImage: 'ubuntu-latest'
+ timeoutInMinutes: 0
+ steps:
+ - task: AzurePowerShell@4
+ displayName: 'Execute ADF Pipeline'
+ inputs:
+ azureSubscription: $(AZURE_RM_CONNECTION)
+ ScriptPath: '$(Build.SourcesDirectory)/adf/utils/Invoke-ADFPipeline.ps1'
+ ScriptArguments: '-ResourceGroupName $(RESOURCE_GROUP) -DataFactoryName $(DATA_FACTORY_NAME) -PipelineName $(PIPELINE_NAME)'
+ azurePowerShellVersion: LatestVersion
+ - task: UsePythonVersion@0
+ inputs:
+ versionSpec: '3.x'
+ addToPath: true
+ architecture: 'x64'
+ displayName: 'Use Python3'
+
+ - task: configuredatabricks@0
+ inputs:
+ url: '$(DATABRICKS_URL)'
+ token: '$(DATABRICKS_TOKEN)'
+ displayName: 'Configure Databricks CLI'
+
+ - task: executenotebook@0
+ inputs:
+ notebookPath: '/Shared/devops-ds/test-data-ingestion'
+ existingClusterId: '$(DATABRICKS_CLUSTER_ID)'
+ executionParams: '{"bin_file_name":"$(bin_FILE_NAME)"}'
+ displayName: 'Test data ingestion'
+
+ - task: waitexecution@0
+ displayName: 'Wait until the testing is done'
+
+```
+
+## Next steps
+
+* [Source Control in Azure Data Factory](/azure/data-factory/source-control)
+* [Continuous integration and delivery in Azure Data Factory](/azure/data-factory/continuous-integration-delivery)
+* [DevOps for Azure Databricks](https://marketplace.visualstudio.com/items?itemName=riserrad.azdo-databricks)
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
Requirements for training data in machine learning:
Azure Machine Learning datasets expose functionality to: * Easily transfer data from static files or URL sources into your workspace.
-* Make your data available to training scripts when running on cloud compute resources. See [How to train with datasets](../how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) for an example of using the `Dataset` class to mount data to your remote compute target.
+* Make your data available to training scripts when running on cloud compute resources. See [How to train with datasets](how-to-train-with-datasets.md#mount-files-to-remote-compute-targets) for an example of using the `Dataset` class to mount data to your remote compute target.
The following code creates a TabularDataset from a web url. See [Create a TabularDataset](how-to-create-register-datasets.md) for code examples on how to create datasets from other sources like local files and datastores.
model = run.register_model(model_name = model_name,
```
-For details on how to create a deployment configuration and deploy a registered model to a web service, see [how and where to deploy a model](../how-to-deploy-and-where.md?tabs=python#define-a-deployment-configuration).
+For details on how to create a deployment configuration and deploy a registered model to a web service, see [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
> [!TIP] > For registered models, one-click deployment is available via the [Azure Machine Learning studio](https://ml.azure.com). See [how to deploy registered models from the studio](../how-to-use-automated-ml-for-ml-models.md#deploy-your-model).
For general information on how model explanations and feature importance can be
## Next steps
-+ Learn more about [how and where to deploy a model](../how-to-deploy-and-where.md).
++ Learn more about [how and where to deploy a model](/azure/machine-learning/how-to-deploy-managed-online-endpoints). + Learn more about [how to train a regression model with Automated machine learning](../tutorial-auto-train-models.md).
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-connect-data-ui.md
+
+ Title: Connect to data storage with the studio UI
+
+description: Create datastores and datasets to securely connect to data in storage services in Azure with the Azure Machine Learning studio.
+Last updated : 01/18/2021
+#Customer intent: As a data scientist who prefers a low-code experience, I need to make my data in storage on Azure available to my remote compute to train my ML models.
++
+# Connect to data with the Azure Machine Learning studio
+
+In this article, learn how to access your data with the [Azure Machine Learning studio](../overview-what-is-machine-learning-studio.md). Connect to your data in storage services on Azure with [Azure Machine Learning datastores](how-to-access-data.md), and then package that data for tasks in your ML workflows with [Azure Machine Learning datasets](how-to-create-register-datasets.md).
+
+The following table defines and summarizes the benefits of datastores and datasets.
+
+|Object|Description| Benefits|
+||||
+|Datastores| Securely connect to your storage service on Azure, by storing your connection information, like your subscription ID and token authorization in your [Key Vault](https://azure.microsoft.com/services/key-vault/) associated with the workspace | Because your information is securely stored, you <br><br> <li> Don't&nbsp;put&nbsp;authentication&nbsp;credentials&nbsp;or&nbsp;original&nbsp;data sources at risk. <li> No longer need to hard code them in your scripts.
+|Datasets| By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. With datasets you can, <br><br><li> Access data during model training.<li> Share data and collaborate with other users.<li> Use open-source libraries, like pandas, for data exploration. | Because datasets are lazily evaluated, and the data remains in its existing location, you <br><br><li>Keep a single copy of data in your storage.<li> Incur no extra storage cost <li> Don't risk unintentionally changing your original data sources.<li>Improve ML workflow performance speeds.
+
+To understand where datastores and datasets fit in Azure Machine Learning's overall data access workflow, see the [Securely access data](concept-data.md#data-workflow) article.
+
+For a code first experience, see the following articles to use the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/) to:
+* [Connect to Azure storage services with datastores](how-to-access-data.md).
+* [Create Azure Machine Learning datasets](how-to-create-register-datasets.md).
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- Access to [Azure Machine Learning studio](https://ml.azure.com/).
+
+- An Azure Machine Learning workspace. [Create workspace resources](../quickstart-create-resources.md).
+
+ - When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. If blob storage is sufficient for your needs, the `workspaceblobstore` is set as the default datastore, and already configured for use. Otherwise, you need a storage account on Azure with a [supported storage type](how-to-access-data.md#supported-data-storage-service-types).
+
+
+## Create datastores
+
+You can create datastores from [these Azure storage solutions](how-to-access-data.md#supported-data-storage-service-types). **For unsupported storage solutions**, and to save data egress cost during ML experiments, you must [move your data](how-to-access-data.md#move-data-to-supported-azure-storage-solutions) to a supported Azure storage solution. [Learn more about datastores](how-to-access-data.md).
+
+You can create datastores with credential-based access or identity-based access.
+
+# [Credential-based](#tab/credential)
+
+Create a new datastore in a few steps with the Azure Machine Learning studio.
+
+> [!IMPORTANT]
+> If your data storage account is in a virtual network, additional configuration steps are required to ensure the studio has access to your data. See [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
+1. Select **Datastores** on the left pane under **Manage**.
+1. Select **+ New datastore**.
+1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type and authentication type. See the [storage access and permissions section](#access-validation) to understand where to find the authentication credentials you need to populate this form.
+
+The following example demonstrates what the form looks like when you create an **Azure blob datastore**:
+
+![Form for a new datastore](media/how-to-connect-data-ui/new-datastore-form.png)
+
+# [Identity-based](#tab/identity)
+
+Create a new datastore in a few steps with the Azure Machine Learning studio. Learn more about [identity-based data access](how-to-identity-based-data-access.md).
+
+> [!IMPORTANT]
+> If your data storage account is in a virtual network, additional configuration steps are required to ensure the studio has access to your data. See [Network isolation & privacy](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied.
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/).
+1. Select **Datastores** on the left pane under **Manage**.
+1. Select **+ New datastore**.
+1. Complete the form to create and register a new datastore. The form intelligently updates itself based on your selections for Azure storage type. See [which storage types support identity-based](how-to-identity-based-data-access.md#storage-access-permissions) data access.
+ 1. Choose the storage account and container name that you want to use. To see the contents of the storage, whoever creates the datastore needs the Blob reader role (for ADLS Gen 2 and Blob storage) and the Reader role on the subscription and resource group.
+1. For **Save credentials with the datastore for data access**, select **No**.
+
+The following example demonstrates what the form looks like when you create an **Azure blob datastore**:
+
+![Form for a new datastore](media/how-to-connect-data-ui/new-id-based-datastore-form.png)
+++
+## Create datasets
+
+After you create a datastore, create a dataset to interact with your data. Datasets package your data into a lazily evaluated consumable object for machine learning tasks, like training. [Learn more about datasets](how-to-create-register-datasets.md).
+
+There are two types of datasets: FileDataset and TabularDataset.
+[FileDatasets](how-to-create-register-datasets.md#filedataset) reference single or multiple files or public URLs, whereas [TabularDatasets](how-to-create-register-datasets.md#tabulardataset) represent your data in a tabular format. You can create TabularDatasets from .csv, .tsv, .parquet, and .jsonl files, and from SQL query results.
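+
+If you prefer the code-first path referenced earlier, the following minimal sketch shows equivalent dataset creation with the Python SDK. The datastore paths and dataset names are illustrative assumptions, not values from this article.
+
+```python
+from azureml.core import Workspace, Datastore, Dataset
+
+ws = Workspace.from_config()
+datastore = Datastore.get(ws, 'workspaceblobstore')
+
+# FileDataset: a reference to one or more files (the folder path is a placeholder)
+file_ds = Dataset.File.from_files(path=(datastore, 'images/**'))
+
+# TabularDataset: delimited files parsed into a tabular representation (placeholder path)
+tabular_ds = Dataset.Tabular.from_delimited_files(path=(datastore, 'training-data/iris.csv'))
+
+# Registering the datasets makes them appear under Datasets in the studio
+file_ds = file_ds.register(workspace=ws, name='image-files', create_new_version=True)
+tabular_ds = tabular_ds.register(workspace=ws, name='iris-tabular', create_new_version=True)
+```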
+
+The following steps and animation show how to create a dataset in [Azure Machine Learning studio](https://ml.azure.com).
+
+> [!Note]
+> Datasets created through Azure Machine Learning studio are automatically registered to the workspace.
+
+![Create a dataset with the UI](./media/how-to-connect-data-ui/create-dataset-ui.gif)
+
+To create a dataset in the studio:
+1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com/).
+1. Select **Datasets** in the **Assets** section of the left pane.
+1. Select **Create Dataset** to choose the source of your dataset. This source can be local files, a datastore, public URLs, or [Azure Open Datasets](/azure/open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset).
+1. Select **Tabular** or **File** for Dataset type.
+1. Select **Next** to open the **Datastore and file selection** form. On this form you select where to keep your dataset after creation, and select what data files to use for your dataset.
+ 1. Enable skip validation if your data is in a virtual network. Learn more about [virtual network isolation and privacy](../how-to-enable-studio-virtual-network.md).
+
+1. Select **Next** to populate the **Settings and preview** and **Schema** forms; they're intelligently populated based on file type and you can further configure your dataset prior to creation on these forms.
+ 1. On the Settings and preview form, you can indicate if your data contains multi-line data.
+ 1. On the Schema form, you can specify that your TabularDataset has a time component by selecting type: **Timestamp** for your date or time column.
+ 1. If your data is formatted into subsets, for example time windows, and you want to use those subsets for training, select type **Partition timestamp**. Doing so enables time series operations on your dataset. Learn more about how to [use partitions in your dataset for training](how-to-monitor-datasets.md?tabs=azure-studio#create-target-dataset).
+1. Select **Next** to review the **Confirm details** form. Check your selections and create an optional data profile for your dataset. Learn more about [data profiling](#profile).
+1. Select **Create** to complete your dataset creation.
+
+<a name="profile"></a>
+
+### Data profile and preview
+
+After you create your dataset, verify you can view the profile and preview in the studio with the following steps.
+
+1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com/)
+1. Select **Datasets** in the **Assets** section of the left pane.
+1. Select the name of the dataset you want to view.
+1. Select the **Explore** tab.
+1. Select the **Preview** or **Profile** tab.
+
+![View dataset profile and preview](./media/how-to-connect-data-ui/dataset-preview-profile.gif)
+
+You can get a wide variety of summary statistics across your data set to verify whether it's ML-ready. For non-numeric columns, they include only basic statistics like min, max, and error count. For numeric columns, you can also review their statistical moments and estimated quantiles.
+
+Specifically, Azure Machine Learning dataset's data profile includes:
+
+>[!NOTE]
+> Blank entries appear for features with irrelevant types.
+
+|Statistic|Description
+||
+|Feature| Name of the column that is being summarized.
+|Profile| In-line visualization based on the type inferred. For example, strings, booleans, and dates will have value counts, while decimals (numerics) have approximated histograms. This allows you to gain a quick understanding of the distribution of the data.
+|Type distribution| In-line value count of types within a column. Nulls are their own type, so this visualization is useful for detecting odd or missing values.
+|Type|Inferred type of the column. Possible values include: strings, booleans, dates, and decimals.
+|Min| Minimum value of the column. Blank entries appear for features whose type doesn't have an inherent ordering (like booleans).
+|Max| Maximum value of the column.
+|Count| Total number of missing and non-missing entries in the column.
+|Not missing count| Number of entries in the column that aren't missing. Empty strings and errors are treated as values, so they won't contribute to the "not missing count."
+|Quantiles| Approximated values at each quantile to provide a sense of the distribution of the data.
+|Mean| Arithmetic mean or average of the column.
+|Standard deviation| Measure of the amount of dispersion or variation of this column's data.
+|Variance| Measure of how far spread out this column's data is from its average value.
+|Skewness| Measure of how different this column's data is from a normal distribution.
+|Kurtosis| Measure of how heavily tailed this column's data is compared to a normal distribution.
+
+## Storage access and permissions
+
+To ensure you securely connect to your Azure storage service, Azure Machine Learning requires that you have permission to access the corresponding data storage. This access depends on the authentication credentials used to register the datastore.
+
+### Virtual network
+
+If your data storage account is in a **virtual network**, extra configuration steps are required to ensure Azure Machine Learning has access to your data. See [Use Azure Machine Learning studio in a virtual network](../how-to-enable-studio-virtual-network.md) to ensure the appropriate configuration steps are applied when you create and register your datastore.
+
+### Access validation
+
+> [!WARNING]
+> Cross tenant access to storage accounts is not supported. If cross tenant access is needed for your scenario, please reach out to the AzureML Data Support team alias at amldatasupport@microsoft.com for assistance with a custom code solution.
+
+**As part of the initial datastore creation and registration process**, Azure Machine Learning automatically validates that the underlying storage service exists and that the user-provided principal (username, service principal, or SAS token) has access to the specified storage.
+
+**After datastore creation**, this validation is only performed for methods that require access to the underlying storage container, **not** each time datastore objects are retrieved. For example, validation happens if you want to download files from your datastore; but if you just want to change your default datastore, then validation doesn't happen.
+
+To authenticate your access to the underlying storage service, you can provide either your account key, shared access signatures (SAS) tokens, or service principal according to the datastore type you want to create. The [storage type matrix](how-to-access-data.md#supported-data-storage-service-types) lists the supported authentication types that correspond to each datastore type.
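+
+If you prefer to register the datastore in code rather than in the studio form, a minimal sketch with an account key looks like the following. All names and the key value are placeholders that you'd replace with your own.
+
+```python
+from azureml.core import Workspace, Datastore
+
+ws = Workspace.from_config()
+
+# Register an Azure Blob container as a datastore by using an account key (placeholder values)
+blob_datastore = Datastore.register_azure_blob_container(
+    workspace=ws,
+    datastore_name='my_blob_datastore',
+    container_name='my-container',
+    account_name='mystorageaccount',
+    account_key='<storage-account-key>')
+```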
+
+You can find account key, SAS token, and service principal information on your [Azure portal](https://portal.azure.com).
+
+* If you plan to use an account key or SAS token for authentication, select **Storage Accounts** on the left pane, and choose the storage account that you want to register.
+ * The **Overview** page provides information such as the account name, container, and file share name.
+ 1. For account keys, go to **Access keys** on the **Settings** pane.
+ 1. For SAS tokens, go to **Shared access signatures** on the **Settings** pane.
+
+* If you plan to use a [service principal](/azure/active-directory/develop/howto-create-service-principal-portal) for authentication, go to your **App registrations** and select which app you want to use.
+ * Its corresponding **Overview** page will contain required information like tenant ID and client ID.
+
+> [!IMPORTANT]
+> * If you need to change your access keys for an Azure Storage account (account key or SAS token), be sure to sync the new credentials with your workspace and the datastores connected to it. Learn how to [sync your updated credentials](../how-to-change-storage-access-key.md). <br> <br>
+> * If you unregister and re-register a datastore with the same name, and it fails, the Azure Key Vault for your workspace may not have soft-delete enabled. By default, soft-delete is enabled for the key vault instance created by your workspace, but it may not be enabled if you used an existing key vault or have a workspace created prior to October 2020. For information on how to enable soft-delete, see [Turn on Soft Delete for an existing key vault](/azure/key-vault/general/soft-delete-change#turn-on-soft-delete-for-an-existing-key-vault).
+
+### Permissions
+
+For Azure blob container and Azure Data Lake Gen 2 storage, make sure your authentication credentials have **Storage Blob Data Reader** access. Learn more about [Storage Blob Data Reader](/azure/role-based-access-control/built-in-roles#storage-blob-data-reader). An account SAS token defaults to no permissions.
+* For data **read access**, your authentication credentials must have a minimum of list and read permissions for containers and objects.
+
+* For data **write access**, write and add permissions also are required.
+
+## Train with datasets
+
+Use your datasets in your machine learning experiments for training ML models. [Learn more about how to train with datasets](how-to-train-with-datasets.md).
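+
+As a rough illustration of that workflow, the following sketch submits a training script that consumes a registered FileDataset as a mounted input. The dataset, compute target, script, and experiment names are placeholders.
+
+```python
+from azureml.core import Workspace, Dataset, Environment, Experiment, ScriptRunConfig
+
+ws = Workspace.from_config()
+dataset = Dataset.get_by_name(ws, name='my-file-dataset')   # placeholder registered FileDataset
+env = Environment.get(ws, name='AzureML-sklearn-0.24-ubuntu18.04-py37-cpu')
+
+src = ScriptRunConfig(
+    source_directory='./src',                               # placeholder folder containing train.py
+    script='train.py',
+    arguments=['--data', dataset.as_named_input('training_data').as_mount()],
+    compute_target='cpu-cluster',                           # placeholder compute target name
+    environment=env)
+
+run = Experiment(ws, 'train-with-dataset').submit(src)
+run.wait_for_completion(show_output=True)
+```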
+
+## Next steps
+
+* [A step-by-step example of training with TabularDatasets and automated machine learning](../tutorial-first-experiment-automated-ml.md).
+
+* [Train a model](../how-to-set-up-training-targets.md).
+
+* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-machine-learning-pipelines.md
+
+ Title: Create and run ML pipelines
+
+description: Create and run machine learning pipelines to create and manage the workflows that stitch together machine learning (ML) phases.
+Last updated : 10/21/2021
+# Create and run machine learning pipelines with Azure Machine Learning SDK
++
+In this article, you learn how to create and run [machine learning pipelines](../concept-ml-pipelines.md) by using the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro). Use **ML pipelines** to create a workflow that stitches together various ML phases. Then, publish that pipeline for later access or sharing with others. Track ML pipelines to see how your model is performing in the real world and to detect data drift. ML pipelines are ideal for batch scoring scenarios, using various computes, reusing steps instead of rerunning them, and sharing ML workflows with others.
+
+This article isn't a tutorial. For guidance on creating your first pipeline, see [Tutorial: Build an Azure Machine Learning pipeline for batch scoring](../tutorial-pipeline-batch-scoring-classification.md) or [Use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md).
+
+While you can use a different kind of pipeline called an [Azure Pipeline](/azure/devops/pipelines/targets/azure-machine-learning?context=azure%2fmachine-learning%2fservice%2fcontext%2fml-context&tabs=yaml) for CI/CD automation of ML tasks, that type of pipeline isn't stored in your workspace. [Compare these different pipelines](../concept-ml-pipelines.md#which-azure-pipeline-technology-should-i-use).
+
+The ML pipelines you create are visible to the members of your Azure Machine Learning [workspace](../how-to-manage-workspace.md).
+
+ML pipelines execute on compute targets (see [What are compute targets in Azure Machine Learning](../concept-compute-target.md)). Pipelines can read and write data to and from supported [Azure Storage](../../storage/index.yml) locations.
+
+If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+## Prerequisites
+
+* An Azure Machine Learning workspace. [Create workspace resources](../quickstart-create-resources.md).
+
+* [Configure your development environment](../how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](../concept-compute-instance.md) with the SDK already installed.
+
+Start by attaching your workspace:
+
+```Python
+import azureml.core
+from azureml.core import Workspace, Datastore
+
+ws = Workspace.from_config()
+```
+
+## Set up machine learning resources
+
+Create the resources required to run an ML pipeline:
+
+* Set up a datastore used to access the data needed in the pipeline steps.
+
+* Configure a `Dataset` object to point to persistent data that lives in, or is accessible in, a datastore. Configure an `OutputFileDatasetConfig` object for temporary data passed between pipeline steps.
+
+* Set up the [compute targets](concept-azure-machine-learning-architecture.md#compute-targets) on which your pipeline steps will run.
+
+### Set up a datastore
+
+A datastore stores the data for the pipeline to access. Each workspace has a default datastore. You can register more datastores.
+
+When you create your workspace, [Azure Files](/azure/storage/files/storage-files-introduction) and [Azure Blob storage](/azure/storage/blobs/storage-blobs-introduction) are attached to the workspace. A default datastore is registered to connect to the Azure Blob storage. To learn more, see [Deciding when to use Azure Files, Azure Blobs, or Azure Disks](/azure/storage/common/storage-introduction).
+
+```python
+# Default datastore
+def_data_store = ws.get_default_datastore()
+
+# Get the blob storage associated with the workspace
+def_blob_store = Datastore(ws, "workspaceblobstore")
+
+# Get file storage associated with the workspace
+def_file_store = Datastore(ws, "workspacefilestore")
+
+```
+
+Steps generally consume data and produce output data. A step can create data such as a model, a directory with model and dependent files, or temporary data. This data is then available for other steps later in the pipeline. To learn more about connecting your pipeline to your data, see the articles [How to Access Data](how-to-access-data.md) and [How to Register Datasets](how-to-create-register-datasets.md).
+
+### Configure data with `Dataset` and `OutputFileDatasetConfig` objects
+
+The preferred way to provide data to a pipeline is a [Dataset](/python/api/azureml-core/azureml.core.dataset.Dataset) object. The `Dataset` object points to data that lives in or is accessible from a datastore or at a Web URL. The `Dataset` class is abstract, so you'll create an instance of either a `FileDataset` (referring to one or more files) or a `TabularDataset` that's created from one or more files with delimited columns of data.
+
+You create a `Dataset` using methods like [from_files](/python/api/azureml-core/azureml.data.dataset_factory.filedatasetfactory#from-files-path--validate-true-) or [from_delimited_files](/python/api/azureml-core/azureml.data.dataset_factory.tabulardatasetfactory#from-delimited-files-path--validate-true--include-path-false--infer-column-types-true--set-column-types-none--separatorheader-true--partition-format-none--support-multi-line-false-).
+
+```python
+from azureml.core import Dataset
+
+my_dataset = Dataset.File.from_files([(def_blob_store, 'train-images/')])
+```
+
+Intermediate data (or output of a step) is represented by an [OutputFileDatasetConfig](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig) object. `output_data1` is produced as the output of a step. Optionally, this data can be registered as a dataset by calling `register_on_complete`. If you create an `OutputFileDatasetConfig` in one step and use it as an input to another step, that data dependency between steps creates an implicit execution order in the pipeline.
+
+`OutputFileDatasetConfig` objects return a directory and, by default, write output to the default datastore of the workspace.
+
+```python
+from azureml.data import OutputFileDatasetConfig
+
+output_data1 = OutputFileDatasetConfig(destination = (datastore, 'outputdataset/{run-id}'))
+output_data_dataset = output_data1.register_on_complete(name = 'prepared_output_data')
+
+```
+
+> [!IMPORTANT]
+> Intermediate data stored using `OutputFileDatasetConfig` isn't automatically deleted by Azure.
+> You should either programmatically delete intermediate data at the end of a pipeline run, use a
+> datastore with a short data-retention policy, or regularly do manual clean up.
+
+> [!TIP]
+> Only upload files relevant to the job at hand. Any change in files within the data directory will be seen as reason to rerun the step the next time the pipeline is run even if reuse is specified.
+
+## Set up a compute target
++
+In Azure Machine Learning, the term __compute__ (or __compute target__) refers to the machines or clusters that do the computational steps in your machine learning pipeline. See [compute targets for model training](../concept-compute-target.md#train) for a full list of compute targets and [Create compute targets](../how-to-create-attach-compute-studio.md) for how to create and attach them to your workspace. The process for creating or attaching a compute target is the same whether you're training a model or running a pipeline step. After you create and attach your compute target, use the `ComputeTarget` object in your [pipeline step](#steps).
+
+> [!IMPORTANT]
+> Performing management operations on compute targets isn't supported from inside remote jobs. Since machine learning pipelines are submitted as a remote job, do not use management operations on compute targets from inside the pipeline.
+
+### Azure Machine Learning compute
+
+You can create an Azure Machine Learning compute for running your steps. The code for other compute targets is similar, with slightly different parameters, depending on the type.
+
+```python
+from azureml.core.compute import ComputeTarget, AmlCompute
+
+compute_name = "aml-compute"
+vm_size = "STANDARD_NC6"
+if compute_name in ws.compute_targets:
+ compute_target = ws.compute_targets[compute_name]
+ if compute_target and type(compute_target) is AmlCompute:
+ print('Found compute target: ' + compute_name)
+else:
+ print('Creating a new compute target...')
+ provisioning_config = AmlCompute.provisioning_configuration(vm_size=vm_size, # STANDARD_NC6 is GPU-enabled
+ min_nodes=0,
+ max_nodes=4)
+ # create the compute target
+ compute_target = ComputeTarget.create(
+ ws, compute_name, provisioning_config)
+
+ # Can poll for a minimum number of nodes and for a specific timeout.
+ # If no min node count is provided it will use the scale settings for the cluster
+ compute_target.wait_for_completion(
+ show_output=True, min_node_count=None, timeout_in_minutes=20)
+
+ # For a more detailed view of current cluster status, use the 'status' property
+ print(compute_target.status.serialize())
+```
+
+## Configure the training run's environment
+
+The next step is making sure that the remote training run has all the dependencies needed by the training steps. Dependencies and the runtime context are set by creating and configuring a `RunConfiguration` object.
+
+```python
+from azureml.core.runconfig import RunConfiguration
+from azureml.core.conda_dependencies import CondaDependencies
+from azureml.core import Environment
+
+aml_run_config = RunConfiguration()
+# `compute_target` as defined in "Azure Machine Learning compute" section above
+aml_run_config.target = compute_target
+
+USE_CURATED_ENV = True
+if USE_CURATED_ENV :
+ curated_environment = Environment.get(workspace=ws, name="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu")
+ aml_run_config.environment = curated_environment
+else:
+ aml_run_config.environment.python.user_managed_dependencies = False
+
+ # Add some packages relied on by data prep step
+ aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
+ conda_packages=['pandas','scikit-learn'],
+ pip_packages=['azureml-sdk', 'azureml-dataset-runtime[fuse,pandas]'],
+ pin_sdk_version=False)
+```
+
+The code above shows two options for handling dependencies. As presented, with `USE_CURATED_ENV = True`, the configuration is based on a curated environment. Curated environments are "prebaked" with common inter-dependent libraries and can be faster to bring online. Curated environments have prebuilt Docker images in the [Microsoft Container Registry](https://hub.docker.com/publishers/microsoftowner). For more information, see [Azure Machine Learning curated environments](../resource-curated-environments.md).
+
+The path taken if you change `USE_CURATED_ENV` to `False` shows the pattern for explicitly setting your dependencies. In that scenario, a new custom Docker image will be created and registered in an Azure Container Registry within your resource group (see [Introduction to private Docker container registries in Azure](/azure/container-registry/container-registry-intro)). Building and registering this image can take quite a few minutes.
+
+## <a id="steps"></a>Construct your pipeline steps
+
+Once you have the compute resource and environment created, you're ready to define your pipeline's steps. There are many built-in steps available via the Azure Machine Learning SDK, as you can see on the [reference documentation for the `azureml.pipeline.steps` package](/python/api/azureml-pipeline-steps/azureml.pipeline.steps). The most flexible class is [PythonScriptStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.python_script_step.pythonscriptstep), which runs a Python script.
+
+```python
+from azureml.pipeline.steps import PythonScriptStep
+dataprep_source_dir = "./dataprep_src"
+entry_point = "prepare.py"
+# `my_dataset` as defined above
+ds_input = my_dataset.as_named_input('input1')
+
+# `output_data1`, `compute_target`, `aml_run_config` as defined above
+data_prep_step = PythonScriptStep(
+ script_name=entry_point,
+ source_directory=dataprep_source_dir,
+ arguments=["--input", ds_input.as_download(), "--output", output_data1],
+ compute_target=compute_target,
+ runconfig=aml_run_config,
+ allow_reuse=True
+)
+```
+
+The above code shows a typical initial pipeline step. Your data preparation code is in a subdirectory (in this example, `"prepare.py"` in the directory `"./dataprep_src"`). As part of the pipeline creation process, this directory is zipped and uploaded to the `compute_target` and the step runs the script specified as the value for `script_name`.
+
+The `arguments` values specify the inputs and outputs of the step. In the example above, the baseline data is the `my_dataset` dataset. The corresponding data will be downloaded to the compute resource since the code specifies it as `as_download()`. The script `prepare.py` does whatever data-transformation tasks are appropriate to the task at hand and outputs the data to `output_data1`, of type `OutputFileDatasetConfig`. For more information, see [Moving data into and between ML pipeline steps (Python)](how-to-move-data-in-out-of-pipelines.md).
+The step will run on the machine defined by `compute_target`, using the configuration `aml_run_config`.
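+
+The contents of `prepare.py` aren't shown in this article. As a hypothetical sketch, such a script only needs to read the downloaded input path and write its results into the folder backed by `output_data1`:
+
+```python
+# prepare.py (hypothetical sketch)
+import argparse
+import os
+import shutil
+
+parser = argparse.ArgumentParser()
+parser.add_argument('--input', type=str, help='Folder the FileDataset was downloaded to')
+parser.add_argument('--output', type=str, help='Folder backed by the OutputFileDatasetConfig')
+args = parser.parse_args()
+
+os.makedirs(args.output, exist_ok=True)
+
+# Placeholder "preparation": copy each input file to the output folder.
+# A real script would clean, filter, or transform the data here.
+for name in os.listdir(args.input):
+    source = os.path.join(args.input, name)
+    if os.path.isfile(source):
+        shutil.copy(source, os.path.join(args.output, name))
+```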
+
+Reuse of previous results (`allow_reuse`) is key when using pipelines in a collaborative environment since eliminating unnecessary reruns offers agility. Reuse is the default behavior when the `script_name`, inputs, and the parameters of a step remain the same. When reuse is allowed, results from the previous run are immediately sent to the next step. If `allow_reuse` is set to `False`, a new run will always be generated for this step during pipeline execution.
+
+It's possible to create a pipeline with a single step, but you'll almost always choose to split your overall process into several steps. For example, you might have steps for data preparation, training, model comparison, and deployment. After the `data_prep_step` specified above, the next step might be training:
+
+```python
+train_source_dir = "./train_src"
+train_entry_point = "train.py"
+
+training_results = OutputFileDatasetConfig(name = "training_results",
+ destination = def_blob_store)
+
+
+train_step = PythonScriptStep(
+ script_name=train_entry_point,
+ source_directory=train_source_dir,
+ arguments=["--prepped_data", output_data1.as_input(), "--training_results", training_results],
+ compute_target=compute_target,
+ runconfig=aml_run_config,
+ allow_reuse=True
+)
+```
+
+The above code is similar to the code in the data preparation step. The training code is in a directory separate from that of the data preparation code. The `OutputFileDatasetConfig` output of the data preparation step, `output_data1`, is used as the _input_ to the training step. A new `OutputFileDatasetConfig` object, `training_results`, is created to hold the results for a later comparison or deployment step.
+
+For other code examples, see how to [build a two step ML pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/pipeline-with-datasets/pipeline-for-image-classification.ipynb) and [how to write data back to datastores upon run completion](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/scriptrun-with-data-input-output/how-to-use-scriptrun.ipynb).
+
+After you define your steps, you build the pipeline by using some or all of those steps.
+
+> [!NOTE]
+> No file or data is uploaded to Azure Machine Learning when you define the steps or build the pipeline. The files are uploaded when you call [Experiment.submit()](/python/api/azureml-core/azureml.core.experiment.experiment#submit-config--tags-none-kwargs-).
+
+```python
+# list of steps to run (`compare_step` definition not shown)
+compare_models = [data_prep_step, train_step, compare_step]
+
+from azureml.pipeline.core import Pipeline
+
+# Build the pipeline
+pipeline1 = Pipeline(workspace=ws, steps=[compare_models])
+```
+
+### Use a dataset
+
+Datasets created from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL can be used as input to any pipeline step. You can write output to a [DataTransferStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.datatransferstep) or [DatabricksStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.databricks_step.databricksstep), or, if you want to write data to a specific datastore, use [OutputFileDatasetConfig](/python/api/azureml-core/azureml.data.outputfiledatasetconfig).
+
+> [!IMPORTANT]
+> Writing output data back to a datastore using `OutputFileDatasetConfig` is only supported for Azure Blob, Azure File share, ADLS Gen 1 and Gen 2 datastores.
+
+```python
+dataset_consuming_step = PythonScriptStep(
+ script_name="iris_train.py",
+ inputs=[iris_tabular_dataset.as_named_input("iris_data")],
+ compute_target=compute_target,
+ source_directory=project_folder
+)
+```
+
+You then retrieve the dataset in your pipeline by using the [Run.input_datasets](/python/api/azureml-core/azureml.core.run.run#input-datasets) dictionary.
+
+```python
+# iris_train.py
+from azureml.core import Run, Dataset
+
+run_context = Run.get_context()
+iris_dataset = run_context.input_datasets['iris_data']
+dataframe = iris_dataset.to_pandas_dataframe()
+```
+
+The line `Run.get_context()` is worth highlighting. This function retrieves a `Run` representing the current experimental run. In the above sample, we use it to retrieve a registered dataset. Another common use of the `Run` object is to retrieve both the experiment itself and the workspace in which the experiment resides:
+
+```python
+# Within a PythonScriptStep
+
+ws = Run.get_context().experiment.workspace
+```
+
+For more detail, including alternate ways to pass and access data, see [Moving data into and between ML pipeline steps (Python)](how-to-move-data-in-out-of-pipelines.md).
+
+## Caching & reuse
+
+To optimize and customize the behavior of your pipelines, you can do a few things around caching and reuse. For example, you can choose to:
++ **Turn off the default reuse of the step run output** by setting `allow_reuse=False` during [step definition](/python/api/azureml-pipeline-steps/). Reuse is key when using pipelines in a collaborative environment since eliminating unnecessary runs offers agility. However, you can opt out of reuse.
++ **Force output regeneration for all steps in a run** with `pipeline_run = exp.submit(pipeline, regenerate_outputs=True)`.
+
+By default, `allow_reuse` for steps is enabled and the `source_directory` specified in the step definition is hashed. So, if the script for a given step remains the same (`script_name`, inputs, and the parameters), and nothing else in the `source_directory` has changed, the output of a previous step run is reused, the job isn't submitted to the compute, and the results from the previous run are immediately available to the next step instead.
+
+```python
+step = PythonScriptStep(name="Hello World",
+ script_name="hello_world.py",
+ compute_target=aml_compute,
+ source_directory=source_directory,
+ allow_reuse=False,
+ hash_paths=['hello_world.ipynb'])
+```
+
+> [!Note]
+> If the names of the data inputs change, the step will rerun, _even if_ the underlying data does not change. You must explicitly set the `name` field of input data (`data.as_input(name=...)`). If you do not explicitly set this value, the `name` field will be set to a random GUID and the step's results will not be reused.
+
+## Submit the pipeline
+
+When you submit the pipeline, Azure Machine Learning checks the dependencies for each step and uploads a snapshot of the source directory you specified. If no source directory is specified, the current local directory is uploaded. The snapshot is also stored as part of the experiment in your workspace.
+
+> [!IMPORTANT]
+> [!INCLUDE [amlinclude-info](../../../includes/machine-learning-amlignore-gitignore.md)]
+>
+> For more information, see [Snapshots](concept-azure-machine-learning-architecture.md#snapshots).
+
+```python
+from azureml.core import Experiment
+
+# Submit the pipeline to be run
+pipeline_run1 = Experiment(ws, 'Compare_Models_Exp').submit(pipeline1)
+pipeline_run1.wait_for_completion()
+```
+
+When you first run a pipeline, Azure Machine Learning:
+
+* Downloads the project snapshot to the compute target from the Blob storage associated with the workspace.
+* Builds a Docker image corresponding to each step in the pipeline.
+* Downloads the Docker image for each step to the compute target from the container registry.
+* Configures access to `Dataset` and `OutputFileDatasetConfig` objects. For `as_mount()` access mode, FUSE is used to provide virtual access. If mount isn't supported or if the user specified access as `as_upload()`, the data is instead copied to the compute target.
+
+* Runs the step in the compute target specified in the step definition.
+* Creates artifacts, such as logs, stdout and stderr, metrics, and output specified by the step. These artifacts are then uploaded and kept in the user's default datastore.
+
+![Diagram of running an experiment as a pipeline](../media/how-to-create-your-first-pipeline/run_an_experiment_as_a_pipeline.png)
+
+For more information, see the [Experiment class](/python/api/azureml-core/azureml.core.experiment.experiment) reference.
+
+## Use pipeline parameters for arguments that change at inference time
+
+Sometimes, the arguments to individual steps within a pipeline relate to the development and training period: things like training rates and momentum, or paths to data or configuration files. When a model is deployed, though, you'll want to dynamically pass the arguments upon which you're inferencing (that is, the query you built the model to answer!). You should make these types of arguments pipeline parameters. To do this in Python, use the `azureml.pipeline.core.PipelineParameter` class, as shown in the following code snippet:
+
+```python
+from azureml.pipeline.core import PipelineParameter
+
+pipeline_param = PipelineParameter(name="pipeline_arg", default_value="default_val")
+train_step = PythonScriptStep(script_name="train.py",
+ arguments=["--param1", pipeline_param],
+ target=compute_target,
+ source_directory=project_folder)
+```
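+
+When you submit the pipeline, you can then override the default value. For a pipeline submitted directly through `Experiment.submit`, passing a `pipeline_parameters` dictionary is a common pattern; the following is a sketch using the objects defined above (the experiment name is a placeholder):
+
+```python
+from azureml.core import Experiment
+from azureml.pipeline.core import Pipeline
+
+# `ws` and `train_step` as defined above
+pipeline = Pipeline(workspace=ws, steps=[train_step])
+
+# Override the default value of "pipeline_arg" for this particular run
+pipeline_run = Experiment(ws, 'pipeline-param-demo').submit(
+    pipeline,
+    pipeline_parameters={'pipeline_arg': 'custom_value'})
+pipeline_run.wait_for_completion()
+```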
+
+### How Python environments work with pipeline parameters
+
+As discussed previously in [Configure the training run's environment](#configure-the-training-runs-environment), environment state and Python library dependencies are specified using an `Environment` object. Generally, you can specify an existing `Environment` by referring to its name and, optionally, a version:
+
+```python
+aml_run_config = RunConfiguration()
+aml_run_config.environment.name = 'MyEnvironment'
+aml_run_config.environment.version = '1.0'
+```
+
+However, if you choose to use `PipelineParameter` objects to dynamically set variables at runtime for your pipeline steps, you can't use this technique of referring to an existing `Environment`. Instead, if you want to use `PipelineParameter` objects, you must set the `environment` field of the `RunConfiguration` to an `Environment` object. It is your responsibility to ensure that such an `Environment` has its dependencies on external Python packages properly set.
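+
+A minimal sketch of that pattern builds the `Environment` object in code, sets its dependencies explicitly, and assigns it to the run configuration (the environment name and package list are illustrative):
+
+```python
+from azureml.core import Environment
+from azureml.core.conda_dependencies import CondaDependencies
+from azureml.core.runconfig import RunConfiguration
+
+# Build an Environment object explicitly instead of referring to one by name
+pipeline_env = Environment(name='pipeline-parameter-env')
+pipeline_env.python.conda_dependencies = CondaDependencies.create(
+    conda_packages=['pandas', 'scikit-learn'],
+    pip_packages=['azureml-sdk', 'azureml-dataset-runtime[fuse,pandas]'])
+
+aml_run_config = RunConfiguration()
+aml_run_config.environment = pipeline_env
+```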
++
+## View results of a pipeline
+
+See the list of all your pipelines and their run details in the studio:
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+
+1. [View your workspace](../how-to-manage-workspace.md#view).
+
+1. On the left, select **Pipelines** to see all your pipeline runs.
+ ![list of machine learning pipelines](../media/how-to-create-your-first-pipeline/pipelines.png)
+
+1. Select a specific pipeline to see the run results.
+
+### Git tracking and integration
+
+When you start a training run where the source directory is a local Git repository, information about the repository is stored in the run history. For more information, see [Git integration for Azure Machine Learning](../concept-train-model-git-integration.md).
+
+## Next steps
+
+- To share your pipeline with colleagues or customers, see [Publish machine learning pipelines](how-to-deploy-pipelines.md)
+- Use [these Jupyter notebooks on GitHub](https://aka.ms/aml-pipeline-readme) to explore machine learning pipelines further
+- See the SDK reference help for the [azureml-pipelines-core](/python/api/azureml-pipeline-core/) package and the [azureml-pipelines-steps](/python/api/azureml-pipeline-steps/) package
+- See the [how-to](how-to-debug-pipelines.md) for tips on debugging and troubleshooting pipelines
+- Learn how to run notebooks by following the article [Use Jupyter notebooks to explore this service](../samples-notebooks.md).
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
In this article, you learn how to create Azure Machine Learning datasets to acce
By creating a dataset, you create a reference to the data source location, along with a copy of its metadata. Because the data remains in its existing location, you incur no extra storage cost, and don't risk the integrity of your data sources. Also datasets are lazily evaluated, which aids in workflow performance speeds. You can create datasets from datastores, public URLs, and [Azure Open Datasets](../../open-datasets/how-to-create-azure-machine-learning-dataset-from-open-dataset.md).
-For a low-code experience, [Create Azure Machine Learning datasets with the Azure Machine Learning studio.](../how-to-connect-data-ui.md#create-datasets)
+For a low-code experience, [Create Azure Machine Learning datasets with the Azure Machine Learning studio.](how-to-connect-data-ui.md#create-datasets)
With Azure Machine Learning datasets, you can: * Keep a single copy of data in your storage, referenced by datasets.
-* Seamlessly access data during model training without worrying about connection strings or data paths. [Learn more about how to train with datasets](../how-to-train-with-datasets.md).
+* Seamlessly access data during model training without worrying about connection strings or data paths. [Learn more about how to train with datasets](how-to-train-with-datasets.md).
* Share data and collaborate with other users.
There are two dataset types, based on how users consume them in training; FileDa
### FileDataset A [FileDataset](/python/api/azureml-core/azureml.data.file_dataset.filedataset) references single or multiple files in your datastores or public URLs.
-If your data is already cleansed, and ready to use in training experiments, you can [download or mount](../how-to-train-with-datasets.md#mount-vs-download) the files to your compute as a FileDataset object.
+If your data is already cleansed, and ready to use in training experiments, you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) the files to your compute as a FileDataset object.
We recommend FileDatasets for your machine learning workflows, since the source files can be in any format, which enables a wider range of machine learning scenarios, including deep learning.
-Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datasets)
+Create a FileDataset with the [Python SDK](#create-a-filedataset) or the [Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
### TabularDataset
A [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) represe
With TabularDatasets, you can specify a time stamp from a column in the data or from wherever the path pattern data is stored to enable a time series trait. This specification allows for easy and efficient filtering by time. For an example, see [Tabular time series-related API demo with NOAA weather data](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb).
-Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datasets).
+Create a TabularDataset with [the Python SDK](#create-a-tabulardataset) or [Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
>[!NOTE] > [Automated ML](../concept-automated-ml.md) workflows generated via the Azure Machine Learning studio currently only support TabularDatasets.
To reuse and share datasets across experiments in your workspace, [register your
## Wrangle data After you create and [register](#register-datasets) your dataset, you can load it into your notebook for data wrangling and [exploration](#explore-data) prior to model training.
-If you don't need to do any data wrangling or exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](../how-to-train-with-datasets.md).
+If you don't need to do any data wrangling or exploration, see how to consume datasets in your training scripts for submitting ML experiments in [Train with datasets](how-to-train-with-datasets.md).
### Filter datasets (preview)
partition_keys = new_dataset.partition_keys # ['country']
After you're done wrangling your data, you can [register](#register-datasets) your dataset, and then load it into your notebook for data exploration prior to model training.
-For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. [Learn more about mount vs download](../how-to-train-with-datasets.md#mount-vs-download).
+For FileDatasets, you can either **mount** or **download** your dataset, and apply the Python libraries you'd normally use for data exploration. [Learn more about mount vs download](how-to-train-with-datasets.md#mount-vs-download).
```python # download the dataset
For information on using these templates, see [Use an Azure Resource Manager tem
## Train with datasets
-Use your datasets in your machine learning experiments for training ML models. [Learn more about how to train with datasets](../how-to-train-with-datasets.md).
+Use your datasets in your machine learning experiments for training ML models. [Learn more about how to train with datasets](how-to-train-with-datasets.md).
## Version datasets
-You can register a new dataset under the same name by creating a new version. A dataset version is a way to bookmark the state of your data so that you can apply a specific version of the dataset for experimentation or future reproduction. Learn more about [dataset versions](../how-to-version-track-datasets.md).
+You can register a new dataset under the same name by creating a new version. A dataset version is a way to bookmark the state of your data so that you can apply a specific version of the dataset for experimentation or future reproduction. Learn more about [dataset versions](how-to-version-track-datasets.md).
```Python # create a TabularDataset from Titanic training data web_paths = ['https://dprepdata.blob.core.windows.net/demo/Titanic.csv',
titanic_ds = titanic_ds.register(workspace = workspace,
## Next steps
-* Learn [how to train with datasets](../how-to-train-with-datasets.md).
+* Learn [how to train with datasets](how-to-train-with-datasets.md).
* Use automated machine learning to [train with TabularDatasets](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-energy-demand/auto-ml-forecasting-energy-demand.ipynb).
* For more dataset training examples, see the [sample notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/).
machine-learning How To Data Ingest Adf https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-ingest-adf.md
+
+ Title: Data ingestion with Azure Data Factory
+
+description: Learn the available options for building a data ingestion pipeline with Azure Data Factory and the benefits of each.
+++++++ Last updated : 08/17/2022++
+#Customer intent: As an experienced data engineer, I need to create a production data ingestion pipeline for the data used to train my models.
+++
+# Data ingestion with Azure Data Factory
+
+In this article, you learn about the available options for building a data ingestion pipeline with [Azure Data Factory](/azure/data-factory/introduction). This Azure Data Factory pipeline is used to ingest data for use with [Azure Machine Learning](../overview-what-is-azure-machine-learning.md). Data Factory allows you to easily extract, transform, and load (ETL) data. Once the data has been transformed and loaded into storage, it can be used to train your machine learning models in Azure Machine Learning.
+
+Simple data transformation can be handled with native Data Factory activities and instruments such as [data flow](/azure/data-factory/control-flow-execute-data-flow-activity). For more complicated scenarios, the data can be processed with custom code, such as Python or R.
+
+## Compare Azure Data Factory data ingestion pipelines
+There are several common techniques for using Data Factory to transform data during ingestion. Each technique has advantages and disadvantages that help determine if it's a good fit for a specific use case:
+
+| Technique | Advantages | Disadvantages |
+| -- | -- | -- |
+| Data Factory + Azure Functions | <li> Low latency, serverless compute<li>Stateful functions<li>Reusable functions | Only good for short-running processing |
+| Data Factory + custom component | <li>Large-scale parallel computing<li>Suited for heavy algorithms | <li>Requires wrapping code into an executable<li>Complexity of handling dependencies and IO |
+| Data Factory + Azure Databricks notebook |<li> Apache Spark<li>Native Python environment |<li>Can be expensive<li>Creating clusters initially takes time and adds latency |
+
+## Azure Data Factory with Azure Functions
+
+Azure Functions allows you to run small pieces of code (functions) without worrying about application infrastructure. In this option, the data is processed with custom Python code wrapped into an Azure Function.
+
+The function is invoked with the [Azure Data Factory Azure Function activity](/azure/data-factory/control-flow-azure-function-activity). This approach is a good option for lightweight data transformations.
+
+![Diagram shows an Azure Data Factory pipeline, with Azure Function and Run ML Pipeline, and an Azure Machine Learning pipeline, with Train Model, and how they interact with raw data and prepared data.](media/how-to-data-ingest-adf/adf-function.png)
+++
+* Advantages:
+ * The data is processed on a serverless compute with a relatively low latency
+ * Data Factory pipeline can invoke a [Durable Azure Function](/azure/azure-functions/durable/durable-functions-overview) that may implement a sophisticated data transformation flow
+ * The details of the data transformation are abstracted away by the Azure Function that can be reused and invoked from other places
+* Disadvantages:
+ * The Azure Function must be created before use with ADF
+ * Azure Functions is good only for short-running data processing
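+
+As a minimal sketch, a function invoked by the Azure Function activity might perform a small, stateless transformation like the following. This assumes the Azure Functions Python programming model; the payload shape and the transformation logic are illustrative placeholders, not part of this article:
+
+```python
+import json
+
+import azure.functions as func
+
+
+def main(req: func.HttpRequest) -> func.HttpResponse:
+    """A lightweight, stateless transformation invoked by the ADF Azure Function activity."""
+    records = req.get_json()  # assumes a JSON array of records in the request body
+
+    # Example transformation: drop null-valued fields from each record.
+    cleaned = [{k: v for k, v in r.items() if v is not None} for r in records]
+
+    return func.HttpResponse(json.dumps(cleaned), mimetype="application/json")
+```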
+
+## Azure Data Factory with Custom Component activity
+
+In this option, the data is processed with custom Python code wrapped into an executable. It's invoked with an [Azure Data Factory Custom Component activity](/azure/data-factory/transform-data-using-dotnet-custom-activity). This approach is a better fit for large data than the previous technique.
+
+![Diagram shows an Azure Data Factory pipeline, with a custom component and Run M L Pipeline, and an Azure Machine Learning pipeline, with Train Model, and how they interact with raw data and prepared data.](media/how-to-data-ingest-adf/adf-customcomponent.png)
+
+* Advantages:
+ * The data is processed on [Azure Batch](/azure/batch/batch-technical-overview) pool, which provides large-scale parallel and high-performance computing
+ * Can be used to run heavy algorithms and process significant amounts of data
+* Disadvantages:
+ * Azure Batch pool must be created before use with Data Factory
+ * Requires wrapping the Python code into an executable, which adds complexity for handling dependencies and input/output parameters
+
+## Azure Data Factory with Azure Databricks Python notebook
+
+[Azure Databricks](https://azure.microsoft.com/services/databricks/) is an Apache Spark-based analytics platform in the Microsoft cloud.
+
+In this technique, the data transformation is performed by a [Python notebook](/azure/data-factory/transform-data-using-databricks-notebook) running on an Azure Databricks cluster. This is probably the most common approach, and it uses the full power of the Azure Databricks service. It's designed for distributed data processing at scale.
+
+![Diagram shows an Azure Data Factory pipeline, with Azure Databricks Python and Run M L Pipeline, and an Azure Machine Learning pipeline, with Train Model, and how they interact with raw data and prepared data.](media/how-to-data-ingest-adf/adf-databricks.png)
+
+* Advantages:
+ * The data is transformed on the most powerful data processing Azure service, which is backed by the Apache Spark environment
+ * Native support of Python along with data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn
+ * There's no need to wrap the Python code into functions or executable modules. The code works as is.
+* Disadvantages:
+ * Azure Databricks infrastructure must be created before use with Data Factory
+ * Can be expensive depending on the Azure Databricks configuration
+ * Spinning up compute clusters from "cold" mode takes some time, which adds latency to the solution
+
+
+## Consume data in Azure Machine Learning
+
+The Data Factory pipeline saves the prepared data to your cloud storage (such as Azure Blob or Azure Data Lake).
+
+Consume your prepared data in Azure Machine Learning by either:
+
+* Invoking an Azure Machine Learning pipeline from your Data Factory pipeline, **or**
+* Creating an [Azure Machine Learning datastore](how-to-access-data.md#create-and-register-datastores).
+
+### Invoke Azure Machine Learning pipeline from Data Factory
+
+This method is recommended for [Machine Learning Operations (MLOps) workflows](concept-model-management-and-deployment.md#what-is-mlops). If you don't want to set up an Azure Machine Learning pipeline, see [Read data directly from storage](#read-data-directly-from-storage).
+
+Each time the Data Factory pipeline runs,
+
+1. The data is saved to a different location in storage.
+1. To pass the location to Azure Machine Learning, the Data Factory pipeline calls an [Azure Machine Learning pipeline](../concept-ml-pipelines.md). When calling the ML pipeline, the data location and job ID are sent as parameters.
+1. The ML pipeline can then create an Azure Machine Learning datastore and dataset with the data location. Learn more in [Execute Azure Machine Learning pipelines in Data Factory](/azure/data-factory/transform-data-machine-learning-service).
+
+![Diagram shows an Azure Data Factory pipeline and an Azure Machine Learning pipeline and how they interact with raw data and prepared data. The Data Factory pipeline feeds data to the Prepared Data database, which feeds a data store, which feeds datasets in the Machine Learning workspace.](media/how-to-data-ingest-adf/aml-dataset.png)
+
+> [!TIP]
+> Datasets [support versioning](how-to-version-track-datasets.md), so the ML pipeline can register a new version of the dataset that points to the most recent data from the ADF pipeline.
+
+Once the data is accessible through a datastore or dataset, you can use it to train an ML model. The training process might be part of the same ML pipeline that is called from ADF. Or it might be a separate process such as experimentation in a Jupyter notebook.
+
+Since datasets support versioning, and each job from the pipeline creates a new version, it's easy to understand which version of the data was used to train a model.
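+
+As a rough sketch, the ML pipeline (or a separate script) could register a new dataset version that points to the folder produced by the latest ADF run. The datastore name, dataset name, and output path below are hypothetical placeholders:
+
+```python
+from azureml.core import Workspace, Datastore, Dataset
+
+ws = Workspace.from_config()
+datastore = Datastore.get(ws, "prepared_data_store")  # hypothetical datastore name
+
+# The ADF pipeline passes a run-specific output folder as a parameter (illustrative value).
+adf_output_path = "prepared-data/2022-08-17/"
+
+dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, adf_output_path + "*.csv")])
+
+# create_new_version=True bookmarks this run's data as a new version of the same dataset.
+dataset = dataset.register(workspace=ws,
+                           name="adf-prepared-data",
+                           create_new_version=True)
+```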
+
+### Read data directly from storage
+
+If you don't want to create an ML pipeline, you can access the data directly from the storage account where your prepared data is saved with an Azure Machine Learning datastore and dataset.
+
+The following Python code demonstrates how to create a datastore that connects to Azure Data Lake Storage Gen2. [Learn more about datastores and where to find service principal permissions](how-to-access-data.md#create-and-register-datastores).
++
+```python
+import os
+
+from azureml.core import Workspace, Datastore
+
+ws = Workspace.from_config()
+adlsgen2_datastore_name = '<ADLS gen2 storage account alias>' #set ADLS Gen2 storage account alias in AzureML
+
+subscription_id=os.getenv("ADL_SUBSCRIPTION", "<ADLS account subscription ID>") # subscription id of ADLS account
+resource_group=os.getenv("ADL_RESOURCE_GROUP", "<ADLS account resource group>") # resource group of ADLS account
+
+account_name=os.getenv("ADLSGEN2_ACCOUNTNAME", "<ADLS account name>") # ADLS Gen2 account name
+tenant_id=os.getenv("ADLSGEN2_TENANT", "<tenant id of service principal>") # tenant id of service principal
+client_id=os.getenv("ADLSGEN2_CLIENTID", "<client id of service principal>") # client id of service principal
+client_secret=os.getenv("ADLSGEN2_CLIENT_SECRET", "<secret of service principal>") # the secret of service principal
+
+adlsgen2_datastore = Datastore.register_azure_data_lake_gen2(
+ workspace=ws,
+ datastore_name=adlsgen2_datastore_name,
+ account_name=account_name, # ADLS Gen2 account name
+ filesystem='<filesystem name>', # ADLS Gen2 filesystem
+ tenant_id=tenant_id, # tenant id of service principal
+ client_id=client_id, # client id of service principal
+ client_secret=client_secret) # the secret of service principal
+```
+
+Next, create a dataset to reference the file(s) you want to use in your machine learning task.
+
+The following code creates a TabularDataset from a csv file, `prepared-data.csv`. Learn more about [dataset types and accepted file formats](how-to-create-register-datasets.md#dataset-types).
++
+```python
+from azureml.core import Workspace, Datastore, Dataset
+from azureml.core.experiment import Experiment
+from azureml.train.automl import AutoMLConfig
+
+# retrieve data via AzureML datastore
+datastore = Datastore.get(ws, adlsgen2_datastore_name)
+datastore_path = [(datastore, '/data/prepared-data.csv')]
+
+prepared_dataset = Dataset.Tabular.from_delimited_files(path=datastore_path)
+```
+
+From here, use `prepared_dataset` to reference your prepared data, like in your training scripts. Learn how to [Train models with datasets in Azure Machine Learning](how-to-train-with-datasets.md).
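+
+For example, continuing from the previous snippets (which defined `ws` and `prepared_dataset` and imported `Experiment` and `AutoMLConfig`), a hedged sketch of submitting an automated ML experiment on the prepared data might look like the following. The compute target name, label column, and experiment name are hypothetical placeholders:
+
+```python
+compute_target = ws.compute_targets["cpu-cluster"]  # placeholder compute target name
+
+automl_config = AutoMLConfig(task="classification",
+                             training_data=prepared_dataset,
+                             label_column_name="target_column",  # placeholder label column
+                             compute_target=compute_target,
+                             experiment_timeout_minutes=30)
+
+experiment = Experiment(ws, "adf-prepared-data-automl")
+run = experiment.submit(automl_config, show_output=True)
+```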
+
+## Next steps
+
+* [Run a Databricks notebook in Azure Data Factory](/azure/data-factory/transform-data-using-databricks-notebook)
+* [Access data in Azure storage services](./how-to-access-data.md#create-and-register-datastores)
+* [Train models with datasets in Azure Machine Learning](./how-to-train-with-datasets.md).
+* [DevOps for a data ingestion pipeline](./how-to-cicd-data-ingestion.md)
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-data-prep-synapse-spark-pool.md
+
+ Title: Data wrangling with Apache Spark pools (preview)
+
+description: Learn how to attach and launch Apache Spark pools for data wrangling with Azure Synapse Analytics and Azure Machine Learning.
+++++++ Last updated : 08/17/2022+
+#Customer intent: As a data scientist, I want to prepare my data at scale, and to train my machine learning models from a single notebook using Azure Machine Learning.
++
+# Data wrangling with Apache Spark pools (preview)
+++
+In this article, you learn how to perform data wrangling tasks interactively within a dedicated Synapse session, powered by [Azure Synapse Analytics](/azure/synapse-analytics/overview-what-is), in a Jupyter notebook using the [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/).
+
+If you prefer to use Azure Machine Learning pipelines, see [How to use Apache Spark (powered by Azure Synapse Analytics) in your machine learning pipeline (preview)](how-to-use-synapsesparkstep.md).
+
+For guidance on how to use Azure Synapse Analytics with a Synapse workspace, see the [Azure Synapse Analytics get started series](/azure/synapse-analytics/get-started).
+
+>[!IMPORTANT]
+> The Azure Machine Learning and Azure Synapse Analytics integration is in preview. The capabilities presented in this article employ the `azureml-synapse` package which contains [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features that may change at any time.
+
+## Azure Machine Learning and Azure Synapse Analytics integration
+
+The Azure Synapse Analytics integration with Azure Machine Learning (preview) allows you to attach an Apache Spark pool backed by Azure Synapse for interactive data exploration and preparation. With this integration, you can have a dedicated compute for data wrangling at scale, all within the same Python notebook you use for training your machine learning models.
+
+## Prerequisites
+
+* The [Azure Machine Learning Python SDK installed](/python/api/overview/azure/ml/install).
+
+* [Create an Azure Machine Learning workspace](../quickstart-create-resources.md).
+
+* [Create an Azure Synapse Analytics workspace in Azure portal](/azure/synapse-analytics/quickstart-create-workspace).
+
+* [Create Apache Spark pool using Azure portal, web tools, or Synapse Studio](/azure/synapse-analytics/quickstart-create-apache-spark-pool-portal).
+
+* [Configure your development environment](../how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](../concept-compute-instance.md#create) with the SDK already installed.
+
+* Install the `azureml-synapse` package (preview) with the following code:
+
+ ```python
+ pip install azureml-synapse
+ ```
+
+* Link your Azure Machine Learning workspace and Azure Synapse Analytics workspace with the [Azure Machine Learning Python SDK](../how-to-link-synapse-ml-workspaces.md#link-sdk) or via the [Azure Machine Learning studio](../how-to-link-synapse-ml-workspaces.md#link-studio)
+
+* [Attach a Synapse Spark pool](../how-to-link-synapse-ml-workspaces.md#attach-synapse-spark-pool-as-a-compute) as a compute target.
+
+## Launch Synapse Spark pool for data wrangling tasks
+
+To begin data preparation with the Apache Spark pool, specify the attached Spark Synapse compute name. This name can be found via the Azure Machine Learning studio under the **Attached computes** tab.
+
+![get attached compute name](media/how-to-data-prep-synapse-spark-pool/attached-compute.png)
+
+> [!IMPORTANT]
+> To continue use of the Apache Spark pool, you must indicate which compute resource to use throughout your data wrangling tasks: use `%synapse` for single lines of code and `%%synapse` for multiple lines. [Learn more about the %synapse magic command](/python/api/azureml-synapse/azureml.synapse.magics.remotesynapsemagics(class)).
+
+```python
+%synapse start -c SynapseSparkPoolAlias
+```
+
+After the session starts, you can check the session's metadata.
+
+```python
+%synapse meta
+```
+
+You can specify an [Azure Machine Learning environment](../concept-environments.md) to use during your Apache Spark session. Only Conda dependencies specified in the environment take effect. Docker images aren't supported.
+
+>[!WARNING]
+> Python dependencies specified in environment Conda dependencies are not supported in Apache Spark pools. Currently, only fixed Python versions are supported.
+> Check your Python version by including `sys.version_info` in your script.
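+
+For example, assuming a Synapse session has already been started, a minimal sketch of that check might look like this:
+
+```python
+%%synapse
+
+import sys
+
+# Print the Python version available in the Apache Spark session.
+print(sys.version_info)
+```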
+
+The following code creates the environment `myenv`, which installs `azureml-core` version 1.20.0 and `numpy` version 1.17.0 before the session begins. You can then include this environment in your Apache Spark session `start` statement.
+
+```python
+
+from azureml.core import Workspace, Environment
+
+# creates environment with numpy and azureml-core dependencies
+ws = Workspace.from_config()
+env = Environment(name="myenv")
+env.python.conda_dependencies.add_pip_package("azureml-core==1.20.0")
+env.python.conda_dependencies.add_conda_package("numpy==1.17.0")
+env.register(workspace=ws)
+```
+
+To begin data preparation with the Apache Spark pool and your custom environment, specify the Apache Spark pool name and which environment to use during the Apache Spark session. Furthermore, you can provide your subscription ID, the machine learning workspace resource group, and the name of the machine learning workspace.
+
+>[!IMPORTANT]
+> Make sure that [Allow session level packages](/azure/synapse-analytics/spark/apache-spark-manage-session-packages#session-scoped-python-packages) is enabled in the linked Synapse workspace.
+>
+>![enable session level packages](media/how-to-data-prep-synapse-spark-pool/enable-session-level-package.png)
+
+```python
+%synapse start -c SynapseSparkPoolAlias -e myenv -s AzureMLworkspaceSubscriptionID -r AzureMLworkspaceResourceGroupName -w AzureMLworkspaceName
+```
+
+## Load data from storage
+
+Once your Apache Spark session starts, read in the data that you wish to prepare. Data loading is supported for Azure Blob storage and Azure Data Lake Storage Generations 1 and 2.
+
+There are two ways to load data from these storage services:
+
+* Directly load data from storage using its Hadoop Distributed File System (HDFS) path.
+
+* Read in data from an existing [Azure Machine Learning dataset](how-to-create-register-datasets.md).
+
+To access these storage services, you need **Storage Blob Data Reader** permissions. If you plan to write data back to these storage services, you need **Storage Blob Data Contributor** permissions. [Learn more about storage permissions and roles](/azure/storage/blobs/assign-azure-role-data-access).
+
+### Load data with a Hadoop Distributed File System (HDFS) path
+
+To load and read data in from storage with the corresponding HDFS path, you need to have your data access authentication credentials readily available. These credentials differ depending on your storage type.
+
+The following code demonstrates how to read data from an **Azure Blob storage** into a Spark dataframe with either your shared access signature (SAS) token or access key.
+
+```python
+%%synapse
+
+# setup access key or SAS token
+sc._jsc.hadoopConfiguration().set("fs.azure.account.key.<storage account name>.blob.core.windows.net", "<access key>")
+sc._jsc.hadoopConfiguration().set("fs.azure.sas.<container name>.<storage account name>.blob.core.windows.net", "<sas token>")
+
+# read from blob
+df = spark.read.option("header", "true").csv("wasbs://demo@dprepdata.blob.core.windows.net/Titanic.csv")
+```
+
+The following code demonstrates how to read data in from **Azure Data Lake Storage Generation 1 (ADLS Gen 1)** with your service principal credentials.
+
+```python
+%%synapse
+
+# setup service principal which has access of the data
+sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.access.token.provider.type","ClientCredential")
+
+sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.client.id", "<client id>")
+
+sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.credential", "<client secret>")
+
+sc._jsc.hadoopConfiguration().set("fs.adl.account.<storage account name>.oauth2.refresh.url",
+"https://login.microsoftonline.com/<tenant id>/oauth2/token")
+
+df = spark.read.csv("adl://<storage account name>.azuredatalakestore.net/<path>")
+
+```
+
+The following code demonstrates how to read data in from **Azure Data Lake Storage Generation 2 (ADLS Gen 2)** with your service principal credentials.
+
+```python
+%%synapse
+
+# setup service principal which has access of the data
+sc._jsc.hadoopConfiguration().set("fs.azure.account.auth.type.<storage account name>.dfs.core.windows.net","OAuth")
+sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth.provider.type.<storage account name>.dfs.core.windows.net", "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider")
+sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.id.<storage account name>.dfs.core.windows.net", "<client id>")
+sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.secret.<storage account name>.dfs.core.windows.net", "<client secret>")
+sc._jsc.hadoopConfiguration().set("fs.azure.account.oauth2.client.endpoint.<storage account name>.dfs.core.windows.net",
+"https://login.microsoftonline.com/<tenant id>/oauth2/token")
+
+df = spark.read.csv("abfss://<container name>@<storage account>.dfs.core.windows.net/<path>")
+
+```
+
+### Read in data from registered datasets
+
+You can also get an existing registered dataset in your workspace and perform data preparation on it by converting it into a spark dataframe.
+
+The following example authenticates to the workspace, gets a registered TabularDataset, `blob_dset`, that references files in blob storage, and converts it into a spark dataframe. When you convert your datasets into a spark dataframe, you can use `pyspark` data exploration and preparation libraries.
+
+``` python
+%%synapse
+
+from azureml.core import Workspace, Dataset
+
+subscription_id = "<enter your subscription ID>"
+resource_group = "<enter your resource group>"
+workspace_name = "<enter your workspace name>"
+
+ws = Workspace(workspace_name = workspace_name,
+ subscription_id = subscription_id,
+ resource_group = resource_group)
+
+dset = Dataset.get_by_name(ws, "blob_dset")
+spark_df = dset.to_spark_dataframe()
+```
+
+## Perform data wrangling tasks
+
+After you've retrieved and explored your data, you can perform data wrangling tasks.
+
+The following code expands upon the HDFS example in the previous section. It filters the data in the Spark dataframe, `df`, based on the **Survived** column, and groups that list by **Age**.
+
+```python
+%%synapse
+
+from pyspark.sql.functions import col, desc
+
+df.filter(col('Survived') == 1).groupBy('Age').count().orderBy(desc('count')).show(10)
+
+df.show()
+
+```
+
+## Save data to storage and stop spark session
+
+Once your data exploration and preparation is complete, store your prepared data for later use in your storage account on Azure.
+
+In the following example, the prepared data is written back to Azure Blob storage and overwrites the original `Titanic.csv` file in the `training_data` directory. To write back to storage, you need **Storage Blob Data Contributor** permissions. [Learn more about storage permissions and roles](/azure/storage/blobs/assign-azure-role-data-access).
+
+```python
+%%synapse
+
+df.write.format("csv").mode("overwrite").save("wasbs://demo@dprepdata.blob.core.windows.net/training_data/Titanic.csv")
+```
+
+When you've completed data preparation and saved your prepared data to storage, stop using your Apache Spark pool with the following command.
+
+```python
+%synapse stop
+```
+
+## Create dataset to represent prepared data
+
+When you're ready to consume your prepared data for model training, connect to your storage with an [Azure Machine Learning datastore](how-to-access-data.md), and specify which file(s) you want to use with an [Azure Machine Learning dataset](how-to-create-register-datasets.md).
+
+The following code example,
+
+* Assumes you already created a datastore that connects to the storage service where you saved your prepared data.
+* Gets that existing datastore, `mydatastore`, from the workspace, `ws`, with the `get()` method.
+* Creates a [FileDataset](how-to-create-register-datasets.md#filedataset), `train_ds`, that references the prepared data files located in the `training_data` directory in `mydatastore`.
+* Creates the variable `input1`, which can be used at a later time to make the data files of the `train_ds` dataset available to a compute target for your training tasks.
+
+```python
+from azureml.core import Datastore, Dataset
+
+datastore = Datastore.get(ws, datastore_name='mydatastore')
+
+datastore_paths = [(datastore, '/training_data/')]
+train_ds = Dataset.File.from_files(path=datastore_paths, validate=True)
+input1 = train_ds.as_mount()
+
+```
+
+## Use a `ScriptRunConfig` to submit an experiment run to a Synapse Spark pool
+
+If you're ready to automate and productionize your data wrangling tasks, you can submit an experiment run to [an attached Synapse Spark pool](../how-to-link-synapse-ml-workspaces.md#attach-a-pool-with-the-python-sdk) with the [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object.
+
+Similarly, if you have an Azure Machine Learning pipeline, you can use the [SynapseSparkStep to specify your Synapse Spark pool as the compute target](how-to-use-synapsesparkstep.md) for the data preparation step in your pipeline.
+
+Making your data available to the Synapse Spark pool depends on your dataset type.
+
+* For a FileDataset, you can use the [`as_hdfs()`](/python/api/azureml-core/azureml.data.filedataset#as-hdfs--) method. When the run is submitted, the dataset is made available to the Synapse Spark pool as a Hadoop Distributed File System (HDFS).
+* For a [TabularDataset](how-to-create-register-datasets.md#tabulardataset), you can use the [`as_named_input()`](/python/api/azureml-core/azureml.data.abstract_dataset.abstractdataset#as-named-input-name-) method.
+
+The following code,
+
+* Creates the variable `input2` from the FileDataset `train_ds` that was created in the previous code example.
+* Creates the variable `output` with the `HDFSOutputDatasetConfig` class. After the run is complete, this class allows us to save the output of the run as the dataset `test` in the datastore `mydatastore`. In the Azure Machine Learning workspace, the `test` dataset is registered under the name `registered_dataset`.
+* Configures settings the run should use in order to perform on the Synapse Spark pool.
+* Defines the ScriptRunConfig parameters to,
+ * Use the `dataprep.py`, for the run.
+ * Specify which data to use as input and how to make it available to the Synapse Spark pool.
+ * Specify where to store output data, `output`.
+
+```Python
+from azureml.core import Dataset, HDFSOutputDatasetConfig
+from azureml.core.environment import CondaDependencies
+from azureml.core import RunConfiguration
+from azureml.core import ScriptRunConfig
+from azureml.core import Experiment
+
+input2 = train_ds.as_hdfs()
+output = HDFSOutputDatasetConfig(destination=(datastore, "test")).register_on_complete(name="registered_dataset")
+
+run_config = RunConfiguration(framework="pyspark")
+run_config.target = synapse_compute_name
+
+run_config.spark.configuration["spark.driver.memory"] = "1g"
+run_config.spark.configuration["spark.driver.cores"] = 2
+run_config.spark.configuration["spark.executor.memory"] = "1g"
+run_config.spark.configuration["spark.executor.cores"] = 1
+run_config.spark.configuration["spark.executor.instances"] = 1
+
+conda_dep = CondaDependencies()
+conda_dep.add_pip_package("azureml-core==1.20.0")
+
+run_config.environment.python.conda_dependencies = conda_dep
+
+script_run_config = ScriptRunConfig(source_directory = './code',
+ script= 'dataprep.py',
+ arguments = ["--file_input", input2,
+ "--output_dir", output],
+ run_config = run_config)
+```
+
+For more information about `run_config.spark.configuration` and general Spark configuration, see [SparkConfiguration Class](/python/api/azureml-core/azureml.core.runconfig.sparkconfiguration) and [Apache Spark's configuration documentation](https://spark.apache.org/docs/latest/configuration.html).
+
+Once your `ScriptRunConfig` object is set up, you can submit the run.
+
+```python
+from azureml.core import Experiment
+
+exp = Experiment(workspace=ws, name="synapse-spark")
+run = exp.submit(config=script_run_config)
+run
+```
+
+For more details, like the `dataprep.py` script used in this example, see the [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_job_on_synapse_spark_pool.ipynb).
+
+After your data is prepared, you can then use it as input for your training jobs. In the aforementioned code example, the `registered_dataset` is what you would specify as your input data for training jobs.
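+
+As a hedged sketch, a training run could consume the registered output dataset as a mounted input. The compute target name, script name, and experiment name below are hypothetical placeholders:
+
+```python
+from azureml.core import Dataset, Experiment, ScriptRunConfig
+
+# Assumes `ws` (Workspace) is defined as in the earlier snippets.
+registered = Dataset.get_by_name(ws, name="registered_dataset")
+
+train_config = ScriptRunConfig(source_directory="./code",
+                               script="train.py",  # placeholder training script
+                               arguments=["--input_data",
+                                          registered.as_named_input("prepared").as_mount()],
+                               compute_target="cpu-cluster")  # placeholder compute target name
+
+train_run = Experiment(ws, "train-on-prepared-data").submit(train_config)
+```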
+
+## Example notebooks
+
+See the example notebooks for more concepts and demonstrations of the Azure Synapse Analytics and Azure Machine Learning integration capabilities.
+* [Run an interactive Spark session from a notebook in your Azure Machine Learning workspace](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_session_on_synapse_spark_pool.ipynb).
+* [Submit an Azure Machine Learning experiment run with a Synapse Spark pool as your compute target](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_job_on_synapse_spark_pool.ipynb).
+
+## Next steps
+
+* [Train a model](../how-to-set-up-training-targets.md).
+* [Train with Azure Machine Learning dataset](how-to-train-with-datasets.md).
machine-learning How To Debug Parallel Run Step https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-debug-parallel-run-step.md
+
+ Title: Troubleshooting the ParallelRunStep
+
+description: Tips for how to troubleshoot when you get errors using the ParallelRunStep in machine learning pipelines.
++++++++ Last updated : 10/21/2021
+#Customer intent: As a data scientist, I want to figure out why my ParallelRunStep doesn't run so that I can fix it.
+++
+# Troubleshooting the ParallelRunStep
++
+In this article, you learn how to troubleshoot when you get errors using the [ParallelRunStep](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallel_run_step.parallelrunstep) class from the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro).
+
+For general tips on troubleshooting a pipeline, see [Troubleshooting machine learning pipelines](how-to-debug-pipelines.md).
+
+## Testing scripts locally
+
+ Your ParallelRunStep runs as a step in ML pipelines. You may want to [test your scripts locally](../how-to-debug-visual-studio-code.md#debug-and-troubleshoot-machine-learning-pipelines) as a first step.
+
+## Entry script requirements
+
+The entry script for a `ParallelRunStep` *must contain* a `run()` function and optionally contains an `init()` function:
+- `init()`: Use this function for any costly or common preparation for later processing. For example, use it to load the model into a global object. This function is called only once at the beginning of the process.
+ > [!NOTE]
+ > If your `init` method creates an output directory, specify that `parents=True` and `exist_ok=True`. The `init` method is called from each worker process on every node on which the job is running.
+- `run(mini_batch)`: The function will run for each `mini_batch` instance.
+ - `mini_batch`: `ParallelRunStep` invokes the run method and passes either a list or a pandas `DataFrame` as an argument to the method. Each entry in mini_batch is a file path if the input is a `FileDataset`, or a pandas `DataFrame` if the input is a `TabularDataset`.
+ - `response`: The run() method should return a pandas `DataFrame` or an array. For the append_row output_action, these returned elements are appended into the common output file. For summary_only, the contents of the elements are ignored. For all output actions, each returned output element indicates one successful run of an input element in the input mini-batch. Make sure that enough data is included in the run result to map the input to the run output. Run output is written to the output file and isn't guaranteed to be in order, so use a key in the output to map it to the input.
+ > [!NOTE]
+ > One output element is expected for one input element.
+
+```python
+%%writefile digit_identification.py
+# Snippets from a sample script.
+# Refer to the accompanying digit_identification.py
+# (https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines/parallel-run)
+# for the implementation script.
+
+import os
+import numpy as np
+import tensorflow as tf
+from PIL import Image
+from azureml.core import Model
++
+def init():
+ global g_tf_sess
+
+ # Pull down the model from the workspace
+ model_path = Model.get_model_path("mnist")
+
+ # Construct a graph to execute
+ tf.reset_default_graph()
+ saver = tf.train.import_meta_graph(os.path.join(model_path, 'mnist-tf.model.meta'))
+ g_tf_sess = tf.Session()
+ saver.restore(g_tf_sess, os.path.join(model_path, 'mnist-tf.model'))
++
+def run(mini_batch):
+ print(f'run method start: {__file__}, run({mini_batch})')
+ resultList = []
+ in_tensor = g_tf_sess.graph.get_tensor_by_name("network/X:0")
+ output = g_tf_sess.graph.get_tensor_by_name("network/output/MatMul:0")
+
+ for image in mini_batch:
+ # Prepare each image
+ data = Image.open(image)
+ np_im = np.array(data).reshape((1, 784))
+ # Perform inference
+ inference_result = output.eval(feed_dict={in_tensor: np_im}, session=g_tf_sess)
+ # Find the best probability, and add it to the result list
+ best_result = np.argmax(inference_result)
+ resultList.append("{}: {}".format(os.path.basename(image), best_result))
+
+ return resultList
+```
+
+If you have another file or folder in the same directory as your inference script, you can reference it by finding the current working directory. If you want to import your packages, you can also append your package folder to `sys.path`.
+
+```python
+import os
+import sys
+
+script_dir = os.path.realpath(os.path.join(__file__, '..',))
+file_path = os.path.join(script_dir, "<file_name>")
+
+packages_dir = os.path.join(script_dir, '<your_package_folder>')
+if packages_dir not in sys.path:
+ sys.path.append(packages_dir)
+from <your_package> import <your_class>
+```
+
+### Parameters for ParallelRunConfig
+
+`ParallelRunConfig` is the major configuration for `ParallelRunStep` instance within the Azure Machine Learning pipeline. You use it to wrap your script and configure necessary parameters, including all of the following entries:
+- `entry_script`: A user script as a local file path that will be run in parallel on multiple nodes. If `source_directory` is present, use a relative path. Otherwise, use any path that's accessible on the machine.
+- `mini_batch_size`: The size of the mini-batch passed to a single `run()` call. (optional; the default value is `10` files for `FileDataset` and `1MB` for `TabularDataset`.)
+ - For `FileDataset`, it's the number of files with a minimum value of `1`. You can combine multiple files into one mini-batch.
+ - For `TabularDataset`, it's the size of data. Example values are `1024`, `1024KB`, `10MB`, and `1GB`. The recommended value is `1MB`. The mini-batch from `TabularDataset` will never cross file boundaries. For example, if you have .csv files with various sizes, the smallest file is 100 KB and the largest is 10 MB. If you set `mini_batch_size = 1MB`, then files with a size smaller than 1 MB will be treated as one mini-batch. Files with a size larger than 1 MB will be split into multiple mini-batches.
+ > [!NOTE]
+ > TabularDatasets backed by SQL cannot be partitioned.
+ > TabularDatasets from a single parquet file and single row group cannot be partitioned.
+
+- `error_threshold`: The number of record failures for `TabularDataset` and file failures for `FileDataset` that should be ignored during processing. If the error count for the entire input goes above this value, the job is aborted. The error threshold is for the entire input, not for individual mini-batches sent to the `run()` method. The range is `[-1, int.max]`. A value of `-1` indicates that all failures during processing are ignored.
+- `output_action`: One of the following values indicates how the output will be organized:
+ - `summary_only`: The user script will store the output. `ParallelRunStep` will use the output only for the error threshold calculation.
+ - `append_row`: For all inputs, only one file will be created in the output folder to append all outputs separated by line.
+- `append_row_file_name`: To customize the output file name for append_row output_action (optional; default value is `parallel_run_step.txt`).
+- `source_directory`: Paths to folders that contain all files to execute on the compute target (optional).
+- `compute_target`: Only `AmlCompute` is supported.
+- `node_count`: The number of compute nodes to be used for running the user script.
+- `process_count_per_node`: The number of worker processes per node to run the entry script in parallel. For a GPU machine, the default value is 1. For a CPU machine, the default value is the number of cores per node. A worker process will call `run()` repeatedly by passing the mini batch it gets. The total number of worker processes in your job is `process_count_per_node * node_count`, which decides the max number of `run()` to execute in parallel.
+- `environment`: The Python environment definition. You can configure it to use an existing Python environment or to set up a temporary environment. The definition is also responsible for setting the required application dependencies (optional).
+- `logging_level`: Log verbosity. Values in increasing verbosity are: `WARNING`, `INFO`, and `DEBUG`. (optional; the default value is `INFO`)
+- `run_invocation_timeout`: The `run()` method invocation timeout in seconds. (optional; default value is `60`)
+- `run_max_try`: Maximum try count of `run()` for a mini-batch. A `run()` is failed if an exception is thrown, or nothing is returned when `run_invocation_timeout` is reached (optional; default value is `3`).
+
+You can specify `mini_batch_size`, `node_count`, `process_count_per_node`, `logging_level`, `run_invocation_timeout`, and `run_max_try` as `PipelineParameter`, so that when you resubmit a pipeline run, you can fine-tune the parameter values. In this example, you use `PipelineParameter` for `mini_batch_size` and `process_count_per_node`, and you can change these values when you resubmit a run, as shown in the sketch below.
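+
+A hedged sketch of such a configuration follows; the environment name, compute target, and output file name are placeholders rather than values from this article:
+
+```python
+from azureml.core import Environment
+from azureml.pipeline.core import PipelineParameter
+from azureml.pipeline.steps import ParallelRunConfig
+
+# Assumes `ws` (Workspace) and `compute_target` (an AmlCompute target) are defined elsewhere.
+batch_env = Environment.get(workspace=ws, name="my-batch-env")  # placeholder environment name
+
+parallel_run_config = ParallelRunConfig(
+    source_directory="scripts",
+    entry_script="digit_identification.py",
+    mini_batch_size=PipelineParameter(name="batch_size_param", default_value="5"),
+    error_threshold=10,
+    output_action="append_row",
+    append_row_file_name="mnist_outputs.txt",  # placeholder output file name
+    environment=batch_env,
+    compute_target=compute_target,
+    process_count_per_node=PipelineParameter(name="process_count_param", default_value=2),
+    node_count=2,
+    run_invocation_timeout=600,
+)
+```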
+
+#### CUDA devices visibility
+For compute targets equipped with GPUs, the environment variable `CUDA_VISIBLE_DEVICES` will be set in worker processes. In AmlCompute, you can find the total number of GPU devices in the environment variable `AZ_BATCHAI_GPU_COUNT_FOUND`, which is set automatically. If you want each worker process to have a dedicated GPU, set `process_count_per_node` equal to the number of GPU devices on a machine. Each worker process will assign a unique index to `CUDA_VISIBLE_DEVICES`. If a worker process stops for any reason, the next started worker process will use the released GPU index.
+
+If the total number of GPU devices is less than `process_count_per_node`, worker processes are assigned GPU indexes until all indexes have been used.
+
+For example, if the total number of GPU devices is 2 and `process_count_per_node = 4`, process 0 and process 1 have indexes 0 and 1. Processes 2 and 3 don't have the environment variable. For a library that uses this environment variable for GPU assignment, processes 2 and 3 won't have GPUs and won't try to acquire GPU devices. If process 0 stops, it releases GPU index 0. The next process, which is process 4, will have GPU index 0 assigned.
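+
+The following is a minimal sketch of how an entry script might inspect the assigned index in its `init()` function; the print statements are illustrative:
+
+```python
+import os
+
+def init():
+    # Each worker process either gets a dedicated GPU index or no variable at all.
+    gpu_index = os.environ.get("CUDA_VISIBLE_DEVICES")
+    if gpu_index is None:
+        print("No GPU assigned to this worker process; falling back to CPU.")
+    else:
+        print(f"This worker process is pinned to GPU index {gpu_index}.")
+```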
+
+For more information, see [CUDA Pro Tip: Control GPU Visibility with CUDA_VISIBLE_DEVICES](https://developer.nvidia.com/blog/cuda-pro-tip-control-gpu-visibility-cuda_visible_devices/).
+
+### Parameters for creating the ParallelRunStep
+
+Create the ParallelRunStep by using the script, environment configuration, and parameters. Specify the compute target that you already attached to your workspace as the target of execution for your inference script. Use `ParallelRunStep` to create the batch inference pipeline step, which takes all the following parameters:
+- `name`: The name of the step, with the following naming restrictions: unique, 3-32 characters, and regex ^\[a-z\]([-a-z0-9]*[a-z0-9])?$.
+- `parallel_run_config`: A `ParallelRunConfig` object, as defined earlier.
+- `inputs`: One or more single-typed Azure Machine Learning datasets to be partitioned for parallel processing.
+- `side_inputs`: One or more reference data or datasets used as side inputs without the need to be partitioned.
+- `output`: An `OutputFileDatasetConfig` object that represents the directory path at which the output data will be stored.
+- `arguments`: A list of arguments passed to the user script. Use unknown_args to retrieve them in your entry script (optional).
+- `allow_reuse`: Whether the step should reuse previous results when run with the same settings/inputs. If this parameter is `False`, a new run will always be generated for this step during pipeline execution. (optional; the default value is `True`.)
+
+```python
+from azureml.pipeline.steps import ParallelRunStep
+
+parallelrun_step = ParallelRunStep(
+ name="predict-digits-mnist",
+ parallel_run_config=parallel_run_config,
+ inputs=[input_mnist_ds_consumption],
+ output=output_dir,
+ allow_reuse=True
+)
+```
+
+## Debugging scripts from remote context
+
+The transition from debugging a scoring script locally to debugging a scoring script in an actual pipeline can be a difficult leap. For information on finding your logs in the portal, see [machine learning pipelines section on debugging scripts from a remote context](how-to-debug-pipelines.md). The information in that section also applies to a ParallelRunStep.
+
+For example, the log file `70_driver_log.txt` contains information from the controller that launches the ParallelRunStep code.
+
+Because of the distributed nature of ParallelRunStep jobs, there are logs from several different sources. However, two consolidated files are created that provide high-level information:
+
+- `~/logs/job_progress_overview.txt`: This file provides high-level information about the number of mini-batches (also known as tasks) created so far and the number of mini-batches processed so far. At the end, it shows the result of the job. If the job failed, it shows the error message and where to start troubleshooting.
+
+- `~/logs/sys/master_role.txt`: This file provides the principal node (also known as the orchestrator) view of the running job, including task creation, progress monitoring, and the run result.
+
+Logs generated from the entry script using the EntryScript helper and print statements are found in the following files:
+
+- `~/logs/user/entry_script_log/<node_id>/<process_name>.log.txt`: These files are the logs written from entry_script using EntryScript helper.
+
+- `~/logs/user/stdout/<node_id>/<process_name>.stdout.txt`: These files are the logs from stdout (for example, print statement) of entry_script.
+
+- `~/logs/user/stderr/<node_id>/<process_name>.stderr.txt`: These files are the logs from stderr of entry_script.
+
+For a concise understanding of errors in your script there is:
+
+- `~/logs/user/error.txt`: This file will try to summarize the errors in your script.
+
+For more information on errors in your script, there is:
+
+- `~/logs/user/error/`: Contains full stack traces of exceptions thrown while loading and running entry script.
+
+When you need a full understanding of how each node executed the score script, look at the individual process logs for each node. The process logs can be found in the `sys/node` folder, grouped by worker nodes:
+
+- `~/logs/sys/node/<node_id>/<process_name>.txt`: This file provides detailed info about each mini-batch as it's picked up or completed by a worker. For each mini-batch, this file includes:
+
+ - The IP address and the PID of the worker process.
+ - The total number of items, successfully processed items count, and failed item count.
+ - The start time, duration, process time and run method time.
+
+You can also view the results of periodical checks of the resource usage for each node. The log files and setup files are in this folder:
+
+- `~/logs/perf`: Set `--resource_monitor_interval` to change the checking interval in seconds. The default interval is `600`, which is 10 minutes. To stop the monitoring, set the value to `0`. Each `<node_id>` folder includes:
+
+ - `os/`: Information about all running processes in the node. One check runs an operating system command and saves the result to a file. On Linux, the command is `ps`. On Windows, use `tasklist`.
+ - `%Y%m%d%H`: The sub folder name is the time to hour.
+ - `processes_%M`: The file ends with the minute of the checking time.
+ - `node_disk_usage.csv`: Detailed disk usage of the node.
+ - `node_resource_usage.csv`: Resource usage overview of the node.
+ - `processes_resource_usage.csv`: Resource usage overview of each process.
+
+## How do I log from my user script from a remote context?
+
+ParallelRunStep may run multiple processes on one node based on process_count_per_node. To organize logs from each process on a node and combine print and log statements, we recommend using the ParallelRunStep logger as shown below. You get a logger from EntryScript, and the logs show up in the **logs/user** folder in the portal.
+
+**A sample entry script using the logger:**
+```python
+from azureml_user.parallel_run import EntryScript
+
+def init():
+ """Init once in a worker process."""
+ entry_script = EntryScript()
+ logger = entry_script.logger
+ logger.info("This will show up in files under logs/user on the Azure portal.")
++
+def run(mini_batch):
+ """Call once for a mini batch. Accept and return the list back."""
+ # This class is in singleton pattern and will return same instance as the one in init()
+ entry_script = EntryScript()
+ logger = entry_script.logger
+ logger.info(f"{__file__}: {mini_batch}.")
+ ...
+
+ return mini_batch
+```
+
+## Where does the message from Python `logging` sink to?
+ParallelRunStep sets a handler on the root logger, which sinks the message to `logs/user/stdout/<node_id>/processNNN.stdout.txt`.
+
+`logging` defaults to the `INFO` level. By default, levels below `INFO`, such as `DEBUG`, won't show up.
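+
+If you want `DEBUG` messages from `logging` to appear in that file, a minimal sketch is to lower the root logger level in your entry script. This assumes the handler set by ParallelRunStep doesn't apply additional filtering:
+
+```python
+import logging
+
+def init():
+    # Lower the root logger level so DEBUG messages reach the handler that
+    # ParallelRunStep attached, and therefore land in the stdout log file.
+    logging.getLogger().setLevel(logging.DEBUG)
+    logging.debug("This DEBUG message will now appear in processNNN.stdout.txt.")
+```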
+
+## How could I write to a file to show up in the portal?
+Files in the `logs` folder are uploaded and show up in the portal.
+You can get the `logs/user/entry_script_log/<node_id>` folder as shown below, and compose your file path to write to:
+
+```python
+from pathlib import Path
+from azureml_user.parallel_run import EntryScript
+
+def init():
+ """Init once in a worker process."""
+ entry_script = EntryScript()
+ log_dir = Path(entry_script.log_dir) # logs/user/entry_script_log/<node_id>/.
+ log_dir.mkdir(parents=True, exist_ok=True) # Create the folder if not existing.
+
+ proc_name = entry_script.agent_name # The process name in pattern "processNNN".
+ file_path = log_dir / f"{proc_name}_<file_name>" # Avoid conflicts among worker processes by including proc_name.
+```
+
+## How to handle logs in new processes?
+You can spawn new processes in your entry script with the [`subprocess`](https://docs.python.org/3/library/subprocess.html) module, connect to their input/output/error pipes, and obtain their return codes.
+
+The recommended approach is to use the [`run()`](https://docs.python.org/3/library/subprocess.html#subprocess.run) function with `capture_output=True`. Errors will show up in `logs/user/error/<node_id>/<process_name>.txt`.
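+
+A minimal sketch of that approach follows; the command is a placeholder, and one output element is returned per input element:
+
+```python
+import subprocess
+
+def run(mini_batch):
+    # Run an external command once per mini-batch and capture its output.
+    result = subprocess.run(["python", "--version"], capture_output=True, text=True)
+    if result.returncode != 0:
+        # Raising surfaces the failure; the stack trace lands under logs/user/error/<node_id>/.
+        raise RuntimeError(f"Subprocess failed: {result.stderr}")
+
+    # Return one output element per input element.
+    return [f"{item}: {result.stdout.strip()}" for item in mini_batch]
+```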
+
+If you want to use `Popen()`, you should redirect stdout/stderr to files, like:
+```python
+from pathlib import Path
+from subprocess import Popen
+
+from azureml_user.parallel_run import EntryScript
++
+def init():
+ """Show how to redirect stdout/stderr to files in logs/user/entry_script_log/<node_id>/."""
+ entry_script = EntryScript()
+ proc_name = entry_script.agent_name # The process name in pattern "processNNN".
+ log_dir = Path(entry_script.log_dir) # logs/user/entry_script_log/<node_id>/.
+ log_dir.mkdir(parents=True, exist_ok=True) # Create the folder if not existing.
+ stdout_file = str(log_dir / f"{proc_name}_demo_stdout.txt")
+ stderr_file = str(log_dir / f"{proc_name}_demo_stderr.txt")
+ proc = Popen(
+ ["...")],
+ stdout=open(stdout_file, "w"),
+ stderr=open(stderr_file, "w"),
+ # ...
+ )
+
+```
+
+> [!NOTE]
+> A worker process runs "system" code and the entry script code in the same process.
+>
+> If no `stdout` or `stderr` is specified, a subprocess created with `Popen()` in your entry script will inherit the setting of the worker process.
+>
+> `stdout` will write to `logs/sys/node/<node_id>/processNNN.stdout.txt` and `stderr` to `logs/sys/node/<node_id>/processNNN.stderr.txt`.
++
+## How do I write a file to the output directory, and then view it in the portal?
+
+You can get the output directory from the `EntryScript` class and write to it. To view the written files, in the step Run view in the Azure Machine Learning portal, select the **Outputs + logs** tab. Select the **Data outputs** link, and then complete the steps that are described in the dialog.
+
+Use `EntryScript` in your entry script like in this example:
+
+```python
+from pathlib import Path
+from azureml_user.parallel_run import EntryScript
+
+def run(mini_batch):
+    entry_script = EntryScript()
+    output_dir = Path(entry_script.output_dir)
+    # File names and contents here are placeholders; write your results to the output directory.
+    (output_dir / "result1.txt").write_text("...")
+    (output_dir / "result2.txt").write_text("...")
+    return mini_batch
+```
+
+## How can I pass a side input such as, a file or file(s) containing a lookup table, to all my workers?
+
+You can pass reference data to the script using the side_inputs parameter of ParallelRunStep. All datasets provided as side_inputs are mounted on each worker node. You can get the mount location by passing an argument.
+
+Construct a [Dataset](/python/api/azureml-core/azureml.core.dataset.dataset) containing the reference data, specify a local mount path and register it with your workspace. Pass it to the `side_inputs` parameter of your `ParallelRunStep`. Additionally, you can add its path in the `arguments` section to easily access its mounted path.
+
+> [!NOTE]
+> Use FileDatasets only for side_inputs.
+
+```python
+local_path = "/tmp/{}".format(str(uuid.uuid4()))
+label_config = label_ds.as_named_input("labels_input").as_mount(local_path)
+batch_score_step = ParallelRunStep(
+ name=parallel_step_name,
+ inputs=[input_images.as_named_input("input_images")],
+ output=output_dir,
+ arguments=["--labels_dir", label_config],
+ side_inputs=[label_config],
+ parallel_run_config=parallel_run_config,
+)
+```
+
+After that you can access it in your inference script (for example, in your init() method) as follows:
+
+```python
+parser = argparse.ArgumentParser()
+parser.add_argument('--labels_dir', dest="labels_dir", required=True)
+args, _ = parser.parse_known_args()
+
+labels_path = args.labels_dir
+```
+
+## How to use input datasets with service principal authentication?
+You can pass input datasets with the service principal authentication used in the workspace. Using such a dataset in ParallelRunStep requires that the dataset be registered in order to construct the ParallelRunStep configuration.
+
+```python
+service_principal = ServicePrincipalAuthentication(
+ tenant_id="***",
+ service_principal_id="***",
+ service_principal_password="***")
+
+ws = Workspace(
+ subscription_id="***",
+ resource_group="***",
+ workspace_name="***",
+ auth=service_principal
+ )
+
+default_blob_store = ws.get_default_datastore() # or Datastore(ws, '***datastore-name***')
+ds = Dataset.File.from_files(default_blob_store, '**path***')
+registered_ds = ds.register(ws, '***dataset-name***', create_new_version=True)
+```
+
+## How to check progress and analyze it
+This section is about how to check the progress of a ParallelRunStep job and check the cause of unexpected behavior.
+
+### How to check job progress?
+Besides looking at the overall status of the StepRun, the count of scheduled/processed mini-batches and the progress of generating output can be viewed in `~/logs/job_progress_overview.<timestamp>.txt`. The file rotates on a daily basis; check the one with the largest timestamp for the latest information.
+
+### What should I check if there is no progress for a while?
+You can check `~/logs/sys/error` to see if there's any exception. If there is none, it's likely that your entry script is taking a long time. You can print progress information in your code to locate the time-consuming part, or add `"--profiling_module", "cProfile"` to the `arguments` of `ParallelRunStep` to generate a profile file named `<process_name>.profile` under the `~/logs/sys/node/<node_id>` folder.
+
+### When will a job stop?
+If not canceled, the job will stop with one of the following statuses:
+- Completed. All mini-batches have been processed and output has been generated for `append_row` mode.
+- Failed. `error_threshold` in [`Parameters for ParallelRunConfig`](#parameters-for-parallelrunconfig) was exceeded, or a system error occurred during the job.
+
+### Where to find the root cause of failure?
+You can follow the lead in `~/logs/job_result.txt` to find the cause and detailed error log.
+
+### Will node failure impact the job result?
+Not if there are other available nodes in the designated compute cluster. The orchestrator starts a new node as a replacement, and ParallelRunStep is resilient to such operations.
+
+### What happens if `init` function in entry script fails?
+ParallelRunStep has a mechanism that retries a certain number of times to allow recovery from transient issues, without delaying the job failure for too long. The mechanism is as follows:
+1. If after a node starts, `init` on all agents keeps failing, we will stop trying after `3 * process_count_per_node` failures.
+2. If after the job starts, `init` on all agents of all nodes keeps failing, we stop trying if the job runs more than 2 minutes and there are `2 * node_count * process_count_per_node` failures.
+3. If all agents are stuck on `init` for more than `3 * run_invocation_timeout + 30` seconds, the job would fail because of no progress for too long.
+
+### What will happen on OutOfMemory? How can I check the cause?
+ParallelRunStep will set the current attempt to process the mini-batch to failure status and try to restart the failed process. You can check `~/logs/perf/<node_id>` to find the memory-consuming process.
+
+### Why do I have a lot of processNNN files?
+ParallelRunStep starts new worker processes to replace the ones that exited abnormally, and each process generates a `processNNN` file as a log. However, if the process failed because of an exception during the `init` function of the user script, and the error repeated continuously `3 * process_count_per_node` times, no new worker process will be started.
+
+## Next steps
+
+* See these [Jupyter notebooks demonstrating Azure Machine Learning pipelines](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines)
+
+* See the SDK reference for help with the [azureml-pipeline-steps](/python/api/azureml-pipeline-steps/azureml.pipeline.steps) package.
+
+* View reference [documentation](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunconfig) for ParallelRunConfig class and [documentation](/python/api/azureml-pipeline-steps/azureml.pipeline.steps.parallelrunstep) for ParallelRunStep class.
+
+* Follow the [advanced tutorial](../tutorial-pipeline-batch-scoring-classification.md) on using pipelines with ParallelRunStep. The tutorial shows how to pass another file as a side input.
machine-learning How To Debug Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-debug-pipelines.md
+
+ Title: Troubleshooting ML pipelines
+
+description: How to troubleshoot when you get errors running a machine learning pipeline. Common pitfalls and tips to help debug your scripts before and during remote execution.
+++++ Last updated : 10/21/2021++
+#Customer intent: As a data scientist, I want to figure out why my pipeline doesn't run so that I can fix it.
++
+# Troubleshooting machine learning pipelines
++
+In this article, you learn how to troubleshoot when you get errors running a [machine learning pipeline](../concept-ml-pipelines.md) in the [Azure Machine Learning SDK](/python/api/overview/azure/ml/intro) and [Azure Machine Learning designer](../concept-designer.md).
+
+## Troubleshooting tips
+
+The following table contains common problems during pipeline development, with potential solutions.
+
+| Problem | Possible solution |
+|--|--|
+| Unable to pass data to `PipelineData` directory | Ensure you have created a directory in the script that corresponds to where your pipeline expects the step output data. In most cases, an input argument will define the output directory, and then you create the directory explicitly. Use `os.makedirs(args.output_dir, exist_ok=True)` to create the output directory. See the [tutorial](../tutorial-pipeline-batch-scoring-classification.md#write-a-scoring-script) for a scoring script example that shows this design pattern. |
+| Dependency bugs | If you see dependency errors in your remote pipeline that did not occur when locally testing, confirm your remote environment dependencies and versions match those in your test environment. (See [Environment building, caching, and reuse](../concept-environments.md#environment-building-caching-and-reuse)|
+| Ambiguous errors with compute targets | Try deleting and re-creating compute targets. Re-creating compute targets is quick and can solve some transient issues. |
+| Pipeline not reusing steps | Step reuse is enabled by default, but ensure you haven't disabled it in a pipeline step. If reuse is disabled, the `allow_reuse` parameter in the step will be set to `False`. |
+| Pipeline is rerunning unnecessarily | To ensure that steps only rerun when their underlying data or scripts change, decouple your source-code directories for each step. If you use the same source directory for multiple steps, you may experience unnecessary reruns. Use the `source_directory` parameter on a pipeline step object to point to your isolated directory for that step, and ensure you aren't using the same `source_directory` path for multiple steps. |
+| Step slowing down over training epochs or other looping behavior | Try switching any file writes, including logging, from `as_mount()` to `as_upload()`. The **mount** mode uses a remote virtualized filesystem and uploads the entire file each time it is appended to. |
+| Compute target takes a long time to start | Docker images for compute targets are loaded from Azure Container Registry (ACR). By default, Azure Machine Learning creates an ACR that uses the *basic* service tier. Changing the ACR for your workspace to standard or premium tier may reduce the time it takes to build and load images. For more information, see [Azure Container Registry service tiers](/azure/container-registry/container-registry-skus). |
+
+### Authentication errors
+
+If you perform a management operation on a compute target from a remote job, you will receive one of the following errors:
+
+```json
+{"code":"Unauthorized","statusCode":401,"message":"Unauthorized","details":[{"code":"InvalidOrExpiredToken","message":"The request token was either invalid or expired. Please try again with a valid token."}]}
+```
+
+```json
+{"error":{"code":"AuthenticationFailed","message":"Authentication failed."}}
+```
+
+For example, you will receive an error if you try to create or attach a compute target from an ML Pipeline that is submitted for remote execution.
+
+## Troubleshooting `ParallelRunStep`
+
+The script for a `ParallelRunStep` *must contain* two functions:
+- `init()`: Use this function for any costly or common preparation for later inference. For example, use it to load the model into a global object. This function will be called only once at the beginning of the process.
+- `run(mini_batch)`: The function will run for each `mini_batch` instance.
+    - `mini_batch`: `ParallelRunStep` will invoke the run method and pass either a list or a pandas `DataFrame` as an argument. Each entry in `mini_batch` is a file path if the input is a `FileDataset`, or a pandas `DataFrame` if the input is a `TabularDataset`.
+    - `response`: The `run()` method should return a pandas `DataFrame` or an array. For the `append_row` output_action, these returned elements are appended into the common output file. For `summary_only`, the contents of the elements are ignored. For all output actions, each returned output element indicates one successful run of an input element in the input mini-batch. Make sure that enough data is included in the run result to map the input to the output. Run output is written to the output file and isn't guaranteed to be in order; use a key in the output to map it to the input.
+
+```python
+%%writefile digit_identification.py
+# Snippets from a sample script.
+# Refer to the accompanying digit_identification.py
+# (https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/machine-learning-pipelines/parallel-run)
+# for the implementation script.
+
+import os
+import numpy as np
+import tensorflow as tf
+from PIL import Image
+from azureml.core import Model
++
+def init():
+ global g_tf_sess
+
+ # Pull down the model from the workspace
+ model_path = Model.get_model_path("mnist")
+
+ # Construct a graph to execute
+ tf.reset_default_graph()
+ saver = tf.train.import_meta_graph(os.path.join(model_path, 'mnist-tf.model.meta'))
+ g_tf_sess = tf.Session()
+ saver.restore(g_tf_sess, os.path.join(model_path, 'mnist-tf.model'))
++
+def run(mini_batch):
+ print(f'run method start: {__file__}, run({mini_batch})')
+ resultList = []
+ in_tensor = g_tf_sess.graph.get_tensor_by_name("network/X:0")
+ output = g_tf_sess.graph.get_tensor_by_name("network/output/MatMul:0")
+
+ for image in mini_batch:
+ # Prepare each image
+ data = Image.open(image)
+ np_im = np.array(data).reshape((1, 784))
+ # Perform inference
+ inference_result = output.eval(feed_dict={in_tensor: np_im}, session=g_tf_sess)
+ # Find the best probability, and add it to the result list
+ best_result = np.argmax(inference_result)
+ resultList.append("{}: {}".format(os.path.basename(image), best_result))
+
+ return resultList
+```
+
+If you have another file or folder in the same directory as your inference script, you can reference it by finding the current working directory.
+
+```python
+script_dir = os.path.realpath(os.path.join(__file__, '..',))
+file_path = os.path.join(script_dir, "<file_name>")
+```
+
+### Parameters for ParallelRunConfig
+
+`ParallelRunConfig` is the major configuration for a `ParallelRunStep` instance within the Azure Machine Learning pipeline. You use it to wrap your script and configure necessary parameters, including all of the following entries:
+- `entry_script`: A user script as a local file path that will be run in parallel on multiple nodes. If `source_directory` is present, use a relative path. Otherwise, use any path that's accessible on the machine.
+- `mini_batch_size`: The size of the mini-batch passed to a single `run()` call. (optional; the default value is `10` files for `FileDataset` and `1MB` for `TabularDataset`.)
+ - For `FileDataset`, it's the number of files with a minimum value of `1`. You can combine multiple files into one mini-batch.
+ - For `TabularDataset`, it's the size of data. Example values are `1024`, `1024KB`, `10MB`, and `1GB`. The recommended value is `1MB`. The mini-batch from `TabularDataset` will never cross file boundaries. For example, if you have .csv files with various sizes, the smallest file is 100 KB and the largest is 10 MB. If you set `mini_batch_size = 1MB`, then files with a size smaller than 1 MB will be treated as one mini-batch. Files with a size larger than 1 MB will be split into multiple mini-batches.
+- `error_threshold`: The number of record failures for `TabularDataset` and file failures for `FileDataset` that should be ignored during processing. If the error count for the entire input goes above this value, the job will be aborted. The error threshold is for the entire input and not for an individual mini-batch sent to the `run()` method. The range is `[-1, int.max]`. A value of `-1` indicates that all failures are ignored during processing.
+- `output_action`: One of the following values indicates how the output will be organized:
+ - `summary_only`: The user script will store the output. `ParallelRunStep` will use the output only for the error threshold calculation.
+ - `append_row`: For all inputs, only one file is created in the output folder, with all outputs appended and separated by line.
+- `append_row_file_name`: To customize the output file name for append_row output_action (optional; default value is `parallel_run_step.txt`).
+- `source_directory`: Paths to folders that contain all files to execute on the compute target (optional).
+- `compute_target`: Only `AmlCompute` is supported.
+- `node_count`: The number of compute nodes to be used for running the user script.
+- `process_count_per_node`: The number of processes per node. Best practice is to set it to the number of GPUs or CPUs one node has (optional; default value is `1`).
+- `environment`: The Python environment definition. You can configure it to use an existing Python environment or to set up a temporary environment. The definition is also responsible for setting the required application dependencies (optional).
+- `logging_level`: Log verbosity. Values in increasing verbosity are: `WARNING`, `INFO`, and `DEBUG`. (optional; the default value is `INFO`)
+- `run_invocation_timeout`: The `run()` method invocation timeout in seconds. (optional; default value is `60`)
+- `run_max_try`: Maximum try count of `run()` for a mini-batch. A `run()` is failed if an exception is thrown, or nothing is returned when `run_invocation_timeout` is reached (optional; default value is `3`).
+
+You can specify `mini_batch_size`, `node_count`, `process_count_per_node`, `logging_level`, `run_invocation_timeout`, and `run_max_try` as `PipelineParameter`, so that when you resubmit a pipeline run, you can fine-tune the parameter values. In this example, you use `PipelineParameter` for `mini_batch_size` and `process_count_per_node`, and you'll change these values when you resubmit a run later.
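+
+The following is a minimal sketch of that setup; `scripts_folder`, `script_file`, `batch_env`, and `compute_target` are assumed to be defined elsewhere in your own code:
+
+```python
+from azureml.pipeline.core import PipelineParameter
+from azureml.pipeline.steps import ParallelRunConfig
+
+# scripts_folder, script_file, batch_env, and compute_target are placeholders
+# for objects defined elsewhere in your own code.
+parallel_run_config = ParallelRunConfig(
+    source_directory=scripts_folder,
+    entry_script=script_file,
+    mini_batch_size=PipelineParameter(name="batch_size_param", default_value="5"),
+    error_threshold=10,
+    output_action="append_row",
+    append_row_file_name="parallel_run_step.txt",
+    environment=batch_env,
+    compute_target=compute_target,
+    process_count_per_node=PipelineParameter(name="process_count_param", default_value=2),
+    node_count=2
+)
+```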
+
+### Parameters for creating the ParallelRunStep
+
+Create the ParallelRunStep by using the script, environment configuration, and parameters. Specify the compute target that you already attached to your workspace as the target of execution for your inference script. Use `ParallelRunStep` to create the batch inference pipeline step, which takes all the following parameters:
+- `name`: The name of the step, with the following naming restrictions: unique, 3-32 characters, and regex `^[a-z]([-a-z0-9]*[a-z0-9])?$`.
+- `parallel_run_config`: A `ParallelRunConfig` object, as defined earlier.
+- `inputs`: One or more single-typed Azure Machine Learning datasets to be partitioned for parallel processing.
+- `side_inputs`: One or more reference data or datasets used as side inputs, without needing to be partitioned.
+- `output`: An `OutputFileDatasetConfig` object that corresponds to the output directory.
+- `arguments`: A list of arguments passed to the user script. Use `unknown_args` to retrieve them in your entry script (optional).
+- `allow_reuse`: Whether the step should reuse previous results when run with the same settings/inputs. If this parameter is `False`, a new run will always be generated for this step during pipeline execution. (optional; the default value is `True`.)
+
+```python
+from azureml.pipeline.steps import ParallelRunStep
+
+parallelrun_step = ParallelRunStep(
+ name="predict-digits-mnist",
+ parallel_run_config=parallel_run_config,
+ inputs=[input_mnist_ds_consumption],
+ output=output_dir,
+ allow_reuse=True
+)
+```
+
+## Debugging techniques
+
+There are three major techniques for debugging pipelines:
+
+* Debug individual pipeline steps on your local computer
+* Use logging and Application Insights to isolate and diagnose the source of the problem
+* Attach a remote debugger to a pipeline running in Azure
+
+### Debug scripts locally
+
+One of the most common failures in a pipeline is that the domain script does not run as intended, or contains runtime errors in the remote compute context that are difficult to debug.
+
+Pipelines themselves cannot be run locally, but running the scripts in isolation on your local machine allows you to debug faster because you don't have to wait for the compute and environment build process. Some development work is required to do this:
+
+* If your data is in a cloud datastore, you will need to download data and make it available to your script. Using a small sample of your data is a good way to cut down on runtime and quickly get feedback on script behavior
+* If you are attempting to simulate an intermediate pipeline step, you may need to manually build the object types that the particular script is expecting from the prior step
+* You will also need to define your own environment, and replicate the dependencies defined in your remote compute environment
+
+Once you have a script set up to run in your local environment, it's much easier to do debugging tasks like:
+
+* Attaching a custom debug configuration
+* Pausing execution and inspecting object-state
+* Catching type or logical errors that won't be exposed until runtime
+
+> [!TIP]
+> Once you can verify that your script is running as expected, a good next step is running the script in a single-step pipeline before
+> attempting to run it in a pipeline with multiple steps.
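+
+As a sketch, one way to wrap a validated script in a single-step pipeline looks like the following; the script name, source folder, and compute target name are placeholders for your own values:
+
+```python
+from azureml.core import Workspace, Experiment
+from azureml.pipeline.core import Pipeline
+from azureml.pipeline.steps import PythonScriptStep
+
+ws = Workspace.from_config()
+
+# "train.py", "./scripts", and "cpu-cluster" are placeholders for your own
+# script, source folder, and compute target.
+single_step = PythonScriptStep(
+    name="debug-single-step",
+    script_name="train.py",
+    source_directory="./scripts",
+    compute_target="cpu-cluster",
+    allow_reuse=False)
+
+pipeline = Pipeline(workspace=ws, steps=[single_step])
+run = Experiment(ws, "debug-single-step").submit(pipeline)
+run.wait_for_completion(show_output=True)
+```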
+
+## Configure, write to, and review pipeline logs
+
+Testing scripts locally is a great way to debug major code fragments and complex logic before you start building a pipeline, but at some point you will likely need to debug scripts during the actual pipeline run itself, especially when diagnosing behavior that occurs during the interaction between pipeline steps. We recommend liberal use of `print()` statements in your step scripts so that you can see object state and expected values during remote execution, similar to how you would debug JavaScript code.
+
+### Logging options and behavior
+
+The table below provides information for different debug options for pipelines. It isn't an exhaustive list, as other options exist besides just the Azure Machine Learning, Python, and OpenCensus ones shown here.
+
+| Library | Type | Example | Destination | Resources |
+|--|--|--|--|--|
+| Azure Machine Learning SDK | Metric | `run.log(name, val)` | Azure Machine Learning Portal UI | [How to track experiments](how-to-log-view-metrics.md)<br>[azureml.core.Run class](/python/api/azureml-core/azureml.core.run%28class%29) |
+| Python printing/logging | Log | `print(val)`<br>`logging.info(message)` | Driver logs, Azure Machine Learning designer | [How to track experiments](how-to-log-view-metrics.md)<br><br>[Python logging](https://docs.python.org/2/library/logging.html) |
+| OpenCensus Python | Log | `logger.addHandler(AzureLogHandler())`<br>`logging.log(message)` | Application Insights - traces | [Debug pipelines in Application Insights](./how-to-log-pipelines-application-insights.md)<br><br>[OpenCensus Azure Monitor Exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)<br>[Python logging cookbook](https://docs.python.org/3/howto/logging-cookbook.html) |
+
+#### Logging options example
+
+```python
+import logging
+
+from azureml.core.run import Run
+from opencensus.ext.azure.log_exporter import AzureLogHandler
+
+run = Run.get_context()
+
+# Azure ML Scalar value logging
+run.log("scalar_value", 0.95)
+
+# Python print statement
+print("I am a python print statement, I will be sent to the driver logs.")
+
+# Initialize Python logger
+logger = logging.getLogger(__name__)
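+# args.log_level is assumed to have been parsed earlier, for example with argparse.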
+logger.setLevel(args.log_level)
+
+# Plain Python logging statements
+logger.debug("I am a plain debug statement, I will be sent to the driver logs.")
+logger.info("I am a plain info statement, I will be sent to the driver logs.")
+
+handler = AzureLogHandler(connection_string='<connection string>')
+logger.addHandler(handler)
+
+# Python logging with OpenCensus AzureLogHandler
+logger.warning("I am an OpenCensus warning statement, find me in Application Insights!")
+logger.error("I am an OpenCensus error statement with custom dimensions", {'step_id': run.id})
+```
+
+## Azure Machine Learning designer
+
+For pipelines created in the designer, you can find the **70_driver_log** file in either the authoring page, or in the pipeline run detail page.
+
+### Enable logging for real-time endpoints
+
+In order to troubleshoot and debug real-time endpoints in the designer, you must enable Application Insights logging using the SDK. Logging lets you troubleshoot and debug model deployment and usage issues. For more information, see [Logging for deployed models](how-to-enable-app-insights.md).
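+
+If you prefer to enable this setting from the SDK, a minimal sketch looks like the following; the endpoint name `my-endpoint` is a placeholder for your own deployed real-time endpoint:
+
+```python
+from azureml.core import Workspace
+from azureml.core.webservice import Webservice
+
+ws = Workspace.from_config()
+
+# "my-endpoint" is a placeholder for the name of your deployed real-time endpoint.
+service = Webservice(ws, name="my-endpoint")
+service.update(enable_app_insights=True)
+```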
+
+### Get logs from the authoring page
+
+When you submit a pipeline run and stay in the authoring page, you can find the log files generated for each component as each component finishes running.
+
+1. Select a component that has finished running in the authoring canvas.
+1. In the right pane of the component, go to the **Outputs + logs** tab.
+1. Expand the right pane, and select **70_driver_log.txt** to view the file in the browser. You can also download the logs locally.
+
+ ![Expanded output pane in the designer](./media/how-to-debug-pipelines/designer-logs.png)
+
+### Get logs from pipeline runs
+
+You can also find the log files for specific runs in the pipeline run detail page, which can be found in either the **Pipelines** or **Experiments** section of the studio.
+
+1. Select a pipeline run created in the designer.
+
+ ![Pipeline run page](./media/how-to-debug-pipelines/designer-pipelines.png)
+
+1. Select a component in the preview pane.
+1. In the right pane of the component, go to the **Outputs + logs** tab.
+1. Expand the right pane to view the **std_log.txt** file in the browser, or select the file to download the logs locally.
+
+> [!IMPORTANT]
+> To update a pipeline from the pipeline run details page, you must **clone** the pipeline run to a new pipeline draft. A pipeline run is a snapshot of the pipeline. It's similar to a log file, and cannot be altered.
+
+## Application Insights
+For more information on using the OpenCensus Python library in this manner, see this guide: [Debug and troubleshoot machine learning pipelines in Application Insights](./how-to-log-pipelines-application-insights.md).
+
+## Interactive debugging with Visual Studio Code
+
+In some cases, you may need to interactively debug the Python code used in your ML pipeline. By using Visual Studio Code (VS Code) and debugpy, you can attach to the code as it runs in the training environment. For more information, visit the [interactive debugging in VS Code guide](../how-to-debug-visual-studio-code.md#debug-and-troubleshoot-machine-learning-pipelines).
+
+## Next steps
+
+* For a complete tutorial using `ParallelRunStep`, see [Tutorial: Build an Azure Machine Learning pipeline for batch scoring](../tutorial-pipeline-batch-scoring-classification.md).
+
+* For a complete example showing automated machine learning in ML pipelines, see [Use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md).
+
+* See the SDK reference for help with the [azureml-pipelines-core](/python/api/azureml-pipeline-core/) package and the [azureml-pipelines-steps](/python/api/azureml-pipeline-steps/) package.
+
+* See the list of [designer exceptions and error codes](../algorithm-module-reference/designer-error-codes.md).
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-pipelines.md
+
+ Title: Publish ML pipelines
+
+description: Run machine learning workflows with machine learning pipelines and the Azure Machine Learning SDK for Python.
+++++ Last updated : 10/21/2021++++
+# Publish and track machine learning pipelines
++
+This article will show you how to share a machine learning pipeline with your colleagues or customers.
+
+Machine learning pipelines are reusable workflows for machine learning tasks. One benefit of pipelines is increased collaboration. You can also version pipelines, allowing customers to use the current model while you're working on a new version.
+
+## Prerequisites
+
+* Create an [Azure Machine Learning workspace](../quickstart-create-resources.md) to hold all your pipeline resources
+
+* [Configure your development environment](../how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](../concept-compute-instance.md) with the SDK already installed
+
+* Create and run a machine learning pipeline, such as by following [Tutorial: Build an Azure Machine Learning pipeline for batch scoring](../tutorial-pipeline-batch-scoring-classification.md). For other options, see [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md)
+
+## Publish a pipeline
+
+Once you have a pipeline up and running, you can publish it so that it runs with different inputs. For the REST endpoint of an already published pipeline to accept parameters, you must configure your pipeline to use `PipelineParameter` objects for the arguments that will vary.
+
+1. To create a pipeline parameter, use a [PipelineParameter](/python/api/azureml-pipeline-core/azureml.pipeline.core.graph.pipelineparameter) object with a default value.
+
+ ```python
+ from azureml.pipeline.core.graph import PipelineParameter
+
+ pipeline_param = PipelineParameter(
+ name="pipeline_arg",
+ default_value=10)
+ ```
+
+2. Add this `PipelineParameter` object as a parameter to any of the steps in the pipeline as follows:
+
+ ```python
+ compareStep = PythonScriptStep(
+ script_name="compare.py",
+ arguments=["--comp_data1", comp_data1, "--comp_data2", comp_data2, "--output_data", out_data3, "--param1", pipeline_param],
+ inputs=[ comp_data1, comp_data2],
+ outputs=[out_data3],
+ compute_target=compute_target,
+ source_directory=project_folder)
+ ```
+
+3. Publish this pipeline that will accept a parameter when invoked.
+
+ ```python
+ published_pipeline1 = pipeline_run1.publish_pipeline(
+ name="My_Published_Pipeline",
+ description="My Published Pipeline Description",
+ version="1.0")
+ ```
+
+4. After you publish your pipeline, you can check it in the UI. The pipeline ID is the unique identifier of the published pipeline.
+
+ :::image type="content" source="./media/how-to-deploy-pipelines/published-pipeline-detail.png" alt-text="Screenshot showing published pipeline detail." lightbox= "./media/how-to-deploy-pipelines/published-pipeline-detail.png":::
+
+## Run a published pipeline
+
+All published pipelines have a REST endpoint. With the pipeline endpoint, you can trigger a run of the pipeline from any external system, including non-Python clients. This endpoint enables "managed repeatability" in batch scoring and retraining scenarios.
+
+> [!IMPORTANT]
+> If you are using Azure role-based access control (Azure RBAC) to manage access to your pipeline, [set the permissions for your pipeline scenario (training or scoring)](../how-to-assign-roles.md#common-scenarios).
+
+To invoke the run of the preceding pipeline, you need an Azure Active Directory authentication header token. Getting such a token is described in the [AzureCliAuthentication class](/python/api/azureml-core/azureml.core.authentication.azurecliauthentication) reference and in the [Authentication in Azure Machine Learning](https://aka.ms/pl-restep-auth) notebook.
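+
+As a sketch, one way to obtain such a header is with interactive login authentication:
+
+```python
+from azureml.core.authentication import InteractiveLoginAuthentication
+
+interactive_auth = InteractiveLoginAuthentication()
+aad_token = interactive_auth.get_authentication_header()
+```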
+
+```python
+from azureml.pipeline.core import PublishedPipeline
+import requests
+
+response = requests.post(published_pipeline1.endpoint,
+ headers=aad_token,
+ json={"ExperimentName": "My_Pipeline",
+ "ParameterAssignments": {"pipeline_arg": 20}})
+```
+
+The `json` argument to the POST request must contain, for the `ParameterAssignments` key, a dictionary containing the pipeline parameters and their values. In addition, the `json` argument may contain the following keys:
+
+| Key | Description |
+|--|--|
+| `ExperimentName` | The name of the experiment associated with this endpoint |
+| `Description` | Freeform text describing the endpoint |
+| `Tags` | Freeform key-value pairs that can be used to label and annotate requests |
+| `DataSetDefinitionValueAssignments` | Dictionary used for changing datasets without retraining (see discussion below) |
+| `DataPathAssignments` | Dictionary used for changing datapaths without retraining (see discussion below) |
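+
+For example, here's a sketch of a fuller request body; the description and tag values are illustrative only:
+
+```python
+response = requests.post(published_pipeline1.endpoint,
+                         headers=aad_token,
+                         json={"ExperimentName": "My_Pipeline",
+                               "Description": "Scoring run triggered over REST",
+                               "Tags": {"trigger": "nightly-batch"},
+                               "ParameterAssignments": {"pipeline_arg": 20}})
+```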
+
+### Run a published pipeline using C#
+
+The following code shows how to call a pipeline asynchronously from C#. The partial code snippet just shows the call structure and isn't part of a Microsoft sample. It doesn't show complete classes or error handling.
+
+```csharp
+[DataContract]
+public class SubmitPipelineRunRequest
+{
+ [DataMember]
+ public string ExperimentName { get; set; }
+
+ [DataMember]
+ public string Description { get; set; }
+
+ [DataMember(IsRequired = false)]
+ public IDictionary<string, string> ParameterAssignments { get; set; }
+}
+
+// ... in its own class and method ...
+const string RestEndpoint = "your-pipeline-endpoint";
+
+using (HttpClient client = new HttpClient())
+{
+ var submitPipelineRunRequest = new SubmitPipelineRunRequest()
+ {
+ ExperimentName = "YourExperimentName",
+ Description = "Asynchronous C# REST api call",
+ ParameterAssignments = new Dictionary<string, string>
+ {
+ {
+ // Replace with your pipeline parameter keys and values
+ "your-pipeline-parameter", "default-value"
+ }
+ }
+ };
+
+ string auth_key = "your-auth-key";
+ client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", auth_key);
+
+ // submit the job
+ var requestPayload = JsonConvert.SerializeObject(submitPipelineRunRequest);
+ var httpContent = new StringContent(requestPayload, Encoding.UTF8, "application/json");
+ var submitResponse = await client.PostAsync(RestEndpoint, httpContent).ConfigureAwait(false);
+ if (!submitResponse.IsSuccessStatusCode)
+ {
+ await WriteFailedResponse(submitResponse); // ... method not shown ...
+ return;
+ }
+
+ var result = await submitResponse.Content.ReadAsStringAsync().ConfigureAwait(false);
+ var obj = JObject.Parse(result);
+ // ... use `obj` dictionary to access results
+}
+```
+
+### Run a published pipeline using Java
+
+The following code shows a call to a pipeline that requires authentication (see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md)). If your pipeline is deployed publicly, you don't need the calls that produce `authKey`. The partial code snippet doesn't show Java class and exception-handling boilerplate. The code uses `Optional.flatMap` for chaining together functions that may return an empty `Optional`. The use of `flatMap` shortens and clarifies the code, but note that `getRequestBody()` swallows exceptions.
+
+```java
+import java.net.URI;
+import java.net.http.HttpClient;
+import java.net.http.HttpRequest;
+import java.net.http.HttpResponse;
+import java.util.Optional;
+// JSON library
+import com.google.gson.Gson;
+
+String scoringUri = "scoring-endpoint";
+String tenantId = "your-tenant-id";
+String clientId = "your-client-id";
+String clientSecret = "your-client-secret";
+String resourceManagerUrl = "https://management.azure.com";
+String dataToBeScored = "{ \"ExperimentName\" : \"My_Pipeline\", \"ParameterAssignments\" : { \"pipeline_arg\" : \"20\" }}";
+
+HttpClient client = HttpClient.newBuilder().build();
+Gson gson = new Gson();
+
+HttpRequest tokenAuthenticationRequest = tokenAuthenticationRequest(tenantId, clientId, clientSecret, resourceManagerUrl);
+Optional<String> authBody = getRequestBody(client, tokenAuthenticationRequest);
+Optional<String> authKey = authBody.flatMap(body -> Optional.of(gson.fromJson(body, AuthenticationBody.class).access_token));
+Optional<HttpRequest> scoringRequest = authKey.flatMap(key -> Optional.of(scoringRequest(key, scoringUri, dataToBeScored)));
+Optional<String> scoringResult = scoringRequest.flatMap(req -> getRequestBody(client, req));
+// ... etc (`scoringResult.orElse()`) ...
+
+static HttpRequest tokenAuthenticationRequest(String tenantId, String clientId, String clientSecret, String resourceManagerUrl)
+{
+ String authUrl = String.format("https://login.microsoftonline.com/%s/oauth2/token", tenantId);
+ String clientIdParam = String.format("client_id=%s", clientId);
+ String resourceParam = String.format("resource=%s", resourceManagerUrl);
+ String clientSecretParam = String.format("client_secret=%s", clientSecret);
+
+ String bodyString = String.format("grant_type=client_credentials&%s&%s&%s", clientIdParam, resourceParam, clientSecretParam);
+
+ HttpRequest request = HttpRequest.newBuilder()
+ .uri(URI.create(authUrl))
+ .POST(HttpRequest.BodyPublishers.ofString(bodyString))
+ .build();
+ return request;
+}
+
+static HttpRequest scoringRequest(String authKey, String scoringUri, String dataToBeScored)
+{
+ HttpRequest request = HttpRequest.newBuilder()
+ .uri(URI.create(scoringUri))
+ .header("Authorization", String.format("Token %s", authKey))
+ .POST(HttpRequest.BodyPublishers.ofString(dataToBeScored))
+ .build();
+ return request;
+
+}
+
+static Optional<String> getRequestBody(HttpClient client, HttpRequest request) {
+ try {
+ HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
+ if (response.statusCode() != 200) {
+ System.out.println(String.format("Unexpected server response %d", response.statusCode()));
+ return Optional.empty();
+ }
+ return Optional.of(response.body());
+ }catch(Exception x)
+ {
+ System.out.println(x.toString());
+ return Optional.empty();
+ }
+}
+
+class AuthenticationBody {
+ String access_token;
+ String token_type;
+ int expires_in;
+ String scope;
+ String refresh_token;
+ String id_token;
+
+ AuthenticationBody() {}
+}
+```
+
+### Changing datasets and datapaths without retraining
+
+You may want to train and run inference on different datasets and datapaths. For instance, you may wish to train on a smaller dataset but run inference on the complete dataset. You switch datasets with the `DataSetDefinitionValueAssignments` key in the request's `json` argument. You switch datapaths with `DataPathAssignments`. The technique for both is similar:
+
+1. In your pipeline definition script, create a `PipelineParameter` for the dataset. Create a `DatasetConsumptionConfig` or `DataPath` from the `PipelineParameter`:
+
+ ```python
+ tabular_dataset = Dataset.Tabular.from_delimited_files('https://dprepdata.blob.core.windows.net/demo/Titanic.csv')
+ tabular_pipeline_param = PipelineParameter(name="tabular_ds_param", default_value=tabular_dataset)
+ tabular_ds_consumption = DatasetConsumptionConfig("tabular_dataset", tabular_pipeline_param)
+ ```
+
+1. In your ML script, access the dynamically specified dataset using `Run.get_context().input_datasets`:
+
+ ```python
+ from azureml.core import Run
+
+ input_tabular_ds = Run.get_context().input_datasets['tabular_dataset']
+ dataframe = input_tabular_ds.to_pandas_dataframe()
+ # ... etc ...
+ ```
+
+ Notice that the ML script accesses the value specified for the `DatasetConsumptionConfig` (`tabular_dataset`) and not the value of the `PipelineParameter` (`tabular_ds_param`).
+
+1. In your pipeline definition script, set the `DatasetConsumptionConfig` as a parameter to the `PipelineScriptStep`:
+
+ ```python
+ train_step = PythonScriptStep(
+ name="train_step",
+ script_name="train_with_dataset.py",
+ arguments=["--param1", tabular_ds_consumption],
+ inputs=[tabular_ds_consumption],
+ compute_target=compute_target,
+ source_directory=source_directory)
+
+ pipeline = Pipeline(workspace=ws, steps=[train_step])
+ ```
+
+1. To switch datasets dynamically in your inferencing REST call, use `DataSetDefinitionValueAssignments`:
+
+ ```python
+ tabular_ds1 = Dataset.Tabular.from_delimited_files('path_to_training_dataset')
+ tabular_ds2 = Dataset.Tabular.from_delimited_files('path_to_inference_dataset')
+ ds1_id = tabular_ds1.id
+    ds2_id = tabular_ds2.id
+
+ response = requests.post(rest_endpoint,
+ headers=aad_token,
+ json={
+ "ExperimentName": "MyRestPipeline",
+ "DataSetDefinitionValueAssignments": {
+ "tabular_ds_param": {
+ "SavedDataSetReference": {"Id": ds1_id #or ds2_id
+ }}}})
+ ```
+
+The notebooks [Showcasing Dataset and PipelineParameter](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-dataset-and-pipelineparameter.ipynb) and [Showcasing DataPath and PipelineParameter](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.ipynb) have complete examples of this technique.
+
+## Create a versioned pipeline endpoint
+
+You can create a Pipeline Endpoint with multiple published pipelines behind it. This technique gives you a fixed REST endpoint as you iterate on and update your ML pipelines.
+
+```python
+from azureml.pipeline.core import PipelineEndpoint
+
+published_pipeline = PublishedPipeline.get(workspace=ws, id="My_Published_Pipeline_id")
+pipeline_endpoint = PipelineEndpoint.publish(workspace=ws, name="PipelineEndpointTest",
+ pipeline=published_pipeline, description="Test description Notebook")
+```
+
+## Submit a job to a pipeline endpoint
+
+You can submit a job to the default version of a pipeline endpoint:
+
+```python
+pipeline_endpoint_by_name = PipelineEndpoint.get(workspace=ws, name="PipelineEndpointTest")
+run_id = pipeline_endpoint_by_name.submit("PipelineEndpointExperiment")
+print(run_id)
+```
+
+You can also submit a job to a specific version:
+
+```python
+run_id = pipeline_endpoint_by_name.submit("PipelineEndpointExperiment", pipeline_version="0")
+print(run_id)
+```
+
+The same can be accomplished using the REST API:
+
+```python
+rest_endpoint = pipeline_endpoint_by_name.endpoint
+response = requests.post(rest_endpoint,
+ headers=aad_token,
+ json={"ExperimentName": "PipelineEndpointExperiment",
+ "RunSource": "API",
+ "ParameterAssignments": {"1": "united", "2":"city"}})
+```
+
+## Use published pipelines in the studio
+
+You can also run a published pipeline from the studio:
+
+1. Sign in to [Azure Machine Learning studio](https://ml.azure.com).
+
+1. [View your workspace](../how-to-manage-workspace.md#view).
+
+1. On the left, select **Endpoints**.
+
+1. On the top, select **Pipeline endpoints**.
+ ![list of machine learning published pipelines](../media/how-to-create-your-first-pipeline/pipeline-endpoints.png)
+
+1. Select a specific pipeline to run, consume, or review results of previous runs of the pipeline endpoint.
+
+## Disable a published pipeline
+
+To hide a pipeline from your list of published pipelines, you disable it, either in the studio or from the SDK:
+
+```python
+# Get the pipeline by using its ID from Azure Machine Learning studio
+p = PublishedPipeline.get(ws, id="068f4885-7088-424b-8ce2-eeb9ba5381a6")
+p.disable()
+```
+
+You can enable it again with `p.enable()`. For more information, see [PublishedPipeline class](/python/api/azureml-pipeline-core/azureml.pipeline.core.publishedpipeline) reference.
+
+## Next steps
+
+- Use [these Jupyter notebooks on GitHub](https://aka.ms/aml-pipeline-readme) to explore machine learning pipelines further.
+- See the SDK reference help for the [azureml-pipelines-core](/python/api/azureml-pipeline-core/) package and the [azureml-pipelines-steps](/python/api/azureml-pipeline-steps/) package.
+- See the [how-to](how-to-debug-pipelines.md) for tips on debugging and troubleshooting pipelines.
machine-learning How To Designer Import Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-designer-import-data.md
+
+ Title: Import data into the designer
+
+description: Learn how to import data into Azure Machine Learning designer using Azure Machine Learning datasets and the Import Data component.
+++++ Last updated : 10/21/2021++++
+# Import data into Azure Machine Learning designer
+
+In this article, you learn how to import your own data into the designer to create custom solutions. There are two ways you can import data into the designer:
+
+* **Azure Machine Learning datasets** - Register [datasets](concept-data.md) in Azure Machine Learning to enable advanced features that help you manage your data.
+* **Import Data component** - Use the [Import Data](../algorithm-module-reference/import-data.md) component to directly access data from online data sources.
++
+## Use Azure Machine Learning datasets
+
+We recommend that you use [datasets](concept-data.md) to import data into the designer. When you register a dataset, you can take full advantage of advanced data features like [versioning and tracking](how-to-version-track-datasets.md) and [data monitoring](how-to-monitor-datasets.md).
+
+### Register a dataset
+
+You can register existing datasets [programmatically with the SDK](how-to-create-register-datasets.md#create-datasets-from-datastores) or [visually in Azure Machine Learning studio](how-to-connect-data-ui.md#create-datasets).
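+
+As a quick sketch of the SDK path, assuming a workspace config file is present and using placeholder names for the file path and dataset:
+
+```python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# 'data/my-data.csv' and 'my-designer-dataset' are placeholder names.
+ds = Dataset.Tabular.from_delimited_files(path=(datastore, 'data/my-data.csv'))
+ds = ds.register(workspace=ws, name='my-designer-dataset', create_new_version=True)
+```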
+
+You can also register the output for any designer component as a dataset.
+
+1. Select the component that outputs the data you want to register.
+
+1. In the properties pane, select **Outputs + logs** > **Register dataset**.
+
+ ![Screenshot showing how to navigate to the Register Dataset option](media/how-to-designer-import-data/register-dataset-designer.png)
+
+If the component output data is in a tabular format, you must choose to register the output as a **file dataset** or **tabular dataset**.
+
+ - **File dataset** registers the component's output folder as a file dataset. The output folder contains a data file and meta files that the designer uses internally. Select this option if you want to continue to use the registered dataset in the designer.
+
+ - **Tabular dataset** registers only the component's output data file as a tabular dataset. This format is easily consumed by other tools, for example in Automated Machine Learning or the Python SDK. Select this option if you plan to use the registered dataset outside of the designer.
+
+
+### Use a dataset
+
+Your registered datasets can be found in the component palette, under **Datasets**. To use a dataset, drag and drop it onto the pipeline canvas. Then, connect the output port of the dataset to other components in the canvas.
+
+If you register a file dataset, the output port type of the dataset is **AnyDirectory**. If you register a tabular dataset, the output port type is **DataFrameDirectory**. Note that if you connect the output port of the dataset to other components in the designer, the port types of the dataset and the components need to match.
+
+![Screenshot showing location of saved datasets in the designer palette](media/how-to-designer-import-data/use-datasets-designer.png)
++
+> [!NOTE]
+> The designer supports [dataset versioning](how-to-version-track-datasets.md). Specify the dataset version in the property panel of the dataset component.
+
+### Limitations
+
+- Currently, you can only visualize tabular datasets in the designer. If you register a file dataset outside the designer, you can't visualize it in the designer canvas.
+- Currently, the designer only supports previewing outputs that are stored in **Azure Blob storage**. You can check and change your output datastore in **Output settings**, under the **Parameters** tab in the right panel of the component.
+- If your data is stored in a virtual network (VNet) and you want to preview it, you need to enable the workspace managed identity for the datastore.
+ 1. Go to the related datastore and select **Update authentication**.
+ :::image type="content" source="../media/resource-known-issues/datastore-update-credential.png" alt-text="Update Credentials":::
+ 1. Select **Yes** to enable workspace managed identity.
+ :::image type="content" source="../media/resource-known-issues/enable-workspace-managed-identity.png" alt-text="Enable Workspace Managed Identity":::
+
+## Import data using the Import Data component
+
+While we recommend that you use datasets to import data, you can also use the [Import Data](../algorithm-module-reference/import-data.md) component. The Import Data component skips registering your dataset in Azure Machine Learning and imports data directly from a [datastore](concept-data.md) or HTTP URL.
+
+For detailed information on how to use the Import Data component, see the [Import Data reference page](../algorithm-module-reference/import-data.md).
+
+> [!NOTE]
+> If your dataset has too many columns, you may encounter the following error: "Validation failed due to size limitation". To avoid this, [register the dataset in the Datasets interface](how-to-connect-data-ui.md#create-datasets).
+
+## Supported sources
+
+This section lists the data sources supported by the designer. Data comes into the designer from either a datastore or from [tabular dataset](how-to-create-register-datasets.md#dataset-types).
+
+### Datastore sources
+For a list of supported datastore sources, see [Access data in Azure storage services](how-to-access-data.md#supported-data-storage-service-types).
+
+### Tabular dataset sources
+
+The designer supports tabular datasets created from the following sources:
+ * Delimited files
+ * JSON files
+ * Parquet files
+ * SQL queries
+
+## Data types
+
+The designer internally recognizes the following data types:
+
+* String
+* Integer
+* Decimal
+* Boolean
+* Date
+
+The designer uses an internal data type to pass data between components. You can explicitly convert your data into data table format using the [Convert to Dataset](../algorithm-module-reference/convert-to-dataset.md) component. Any component that accepts formats other than the internal format will convert the data silently before passing it to the next component.
+
+## Data constraints
+
+Modules in the designer are limited by the size of the compute target. For larger datasets, you should use a larger Azure Machine Learning compute resource. For more information on Azure Machine Learning compute, see [What are compute targets in Azure Machine Learning?](../concept-compute-target.md#azure-machine-learning-compute-managed)
+
+## Access data in a virtual network
+
+If your workspace is in a virtual network, you must perform additional configuration steps to visualize data in the designer. For more information on how to use datastores and datasets in a virtual network, see [Use Azure Machine Learning studio in an Azure virtual network](../how-to-enable-studio-virtual-network.md).
+
+## Next steps
+
+Learn the designer fundamentals with this [Tutorial: Predict automobile price with the designer](../tutorial-designer-automobile-price-train-score.md).
machine-learning How To Differential Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-differential-privacy.md
+
+ Title: Differential privacy how-to - SmartNoise (preview)
+
+description: Learn how to apply differential privacy best practices to Azure Machine Learning models by using the SmartNoise open-source libraries.
++++++++ Last updated : 10/21/2021
+# Customer intent: As an experienced data scientist, I want to use differential privacy in Azure Machine Learning.
++
+# Use differential privacy in Azure Machine Learning (preview)
+
+Learn how to apply differential privacy best practices to Azure Machine Learning models by using the SmartNoise Python open-source libraries.
+
+Differential privacy is the gold-standard definition of privacy. Systems that adhere to this definition of privacy provide strong assurances against a wide range of data reconstruction and reidentification attacks, including attacks by adversaries who possess auxiliary information. Learn more about [how differential privacy works](../concept-differential-privacy.md).
++
+## Prerequisites
+
+- If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+- [Python 3](https://www.python.org/downloads/)
+
+## Install SmartNoise Python libraries
+
+### Standalone installation
+
+The libraries are designed to work from distributed Spark clusters, and can be installed just like any other package.
+
+The instructions below assume that your `python` and `pip` commands are mapped to `python3` and `pip3`.
+
+Use pip to install the [SmartNoise Python packages](https://pypi.org/project/opendp-smartnoise/).
+
+`pip install opendp-smartnoise`
+
+To verify that the packages are installed, launch a Python prompt and type:
+
+```python
+import opendp.smartnoise.core
+import opendp.smartnoise.sql
+```
+
+If the imports succeed, the libraries are installed, and ready to use.
+
+### Docker image installation
+
+You can also use SmartNoise packages with Docker.
+
+Pull the `opendp/smartnoise` image to use the libraries inside a Docker container that includes Spark, Jupyter, and sample code.
++
+```sh
+docker pull opendp/smartnoise:privacy
+```
+
+Once you've pulled the image, launch the Jupyter server:
+
+```sh
+docker run --rm -p 8989:8989 --name smartnoise-run opendp/smartnoise:privacy
+```
+
+This starts a Jupyter server at port `8989` on your `localhost`, with password `pass@word99`. Assuming you used the command line above to start the container with the name `smartnoise-run`, you can open a bash terminal in the Jupyter server by running:
+
+```sh
+docker exec -it smartnoise-run bash
+```
+
+The Docker instance clears all state on shutdown, so you'll lose any notebooks you create in the running instance. To remedy this, you can mount a local folder to the container when you launch it:
+
+```sh
+docker run --rm -p 8989:8989 --name smartnoise-run --mount type=bind,source=/Users/your_name/my-notebooks,target=/home/privacy/my-notebooks opendp/smartnoise:privacy
+```
+
+Any notebooks you create under the *my-notebooks* folder will be stored in your local filesystem.
+
+## Perform data analysis
+
+To prepare a differentially private release, you need to choose a data source, a statistic, and some privacy parameters, indicating the level of privacy protection.
+
+This sample references the California Public Use Microdata (PUMS), representing anonymized records of citizen demographics:
+
+```python
+import os
+import sys
+import numpy as np
+import opendp.smartnoise.core as sn
+
+data_path = os.path.join('.', 'data', 'PUMS_california_demographics_1000', 'data.csv')
+var_names = ["age", "sex", "educ", "race", "income", "married", "pid"]
+```
+
+In this example, we compute the mean and the variance of the age. We use a total `epsilon` of 1.0 (epsilon is our privacy parameter), spreading our privacy budget across the two quantities we want to compute. Learn more about [privacy metrics](../concept-differential-privacy.md#differential-privacy-metrics).
+
+```python
+with sn.Analysis() as analysis:
+ # load data
+ data = sn.Dataset(path = data_path, column_names = var_names)
+
+ # get mean of age
+ age_mean = sn.dp_mean(data = sn.cast(data['age'], type="FLOAT"),
+ privacy_usage = {'epsilon': .65},
+ data_lower = 0.,
+ data_upper = 100.,
+ data_n = 1000
+ )
+ # get variance of age
+ age_var = sn.dp_variance(data = sn.cast(data['age'], type="FLOAT"),
+ privacy_usage = {'epsilon': .35},
+ data_lower = 0.,
+ data_upper = 100.,
+ data_n = 1000
+ )
+analysis.release()
+
+print("DP mean of age: {0}".format(age_mean.value))
+print("DP variance of age: {0}".format(age_var.value))
+print("Privacy usage: {0}".format(analysis.privacy_usage))
+```
+
+The results look something like those below:
+
+```text
+DP mean of age: 44.55598845931517
+DP variance of age: 231.79044646429134
+Privacy usage: approximate {
+ epsilon: 1.0
+}
+```
+
+There are some important things to note about this example. First, the `Analysis` object represents a data processing graph. In this example, the mean and variance are computed from the same source node. However, you can include more complex expressions that combine inputs with outputs in arbitrary ways.
+
+The analysis graph includes `data_upper` and `data_lower` metadata, specifying the lower and upper bounds for ages. These values are used to precisely calibrate the noise to ensure differential privacy. These values are also used in some handling of outliers or missing values.
+
+Finally, the analysis graph keeps track of the total privacy budget spent.
+
+You can use the library to compose more complex analysis graphs, with several mechanisms, statistics, and utility functions:
+
+| Statistics | Mechanisms | Utilities |
+|--|--|--|
+| Count | Gaussian | Cast |
+| Histogram | Geometric | Clamping |
+| Mean | Laplace | Digitize |
+| Quantiles | | Filter |
+| Sum | | Imputation |
+| Variance/Covariance | | Transform |
+
+See the [data analysis notebook](https://github.com/opendifferentialprivacy/smartnoise-samples/blob/master/analysis/basic_data_analysis.ipynb) for more details.
+
+## Approximate utility of differentially private releases
+
+Because differential privacy operates by calibrating noise, the utility of releases may vary depending on the privacy risk. Generally, the noise needed to protect each individual becomes negligible as sample sizes grow large, but it overwhelms the result for releases that target a single individual. Analysts can review the accuracy information for a release to determine how useful the release is:
+
+```python
+with sn.Analysis() as analysis:
+ # load data
+ data = sn.Dataset(path = data_path, column_names = var_names)
+
+ # get mean of age
+ age_mean = sn.dp_mean(data = sn.cast(data['age'], type="FLOAT"),
+ privacy_usage = {'epsilon': .65},
+ data_lower = 0.,
+ data_upper = 100.,
+ data_n = 1000
+ )
+analysis.release()
+
+print("Age accuracy is: {0}".format(age_mean.get_accuracy(0.05)))
+```
+
+The result of that operation should look similar to that below:
+
+```text
+Age accuracy is: 0.2995732273553991
+```
+
+This example computes the mean as above, and uses the `get_accuracy` function to request accuracy at an `alpha` of 0.05. An `alpha` of 0.05 represents a 95% interval, in that the released value will fall within the reported accuracy bounds about 95% of the time. In this example, the reported accuracy is 0.3, which means the released value will be within an interval of width 0.6, about 95% of the time. It isn't correct to think of this value as an error bar, since the released value will fall outside the reported accuracy range at the rate specified by `alpha`, and values outside the range may be outside in either direction.
+
+Analysts may query `get_accuracy` for different values of `alpha` to get narrower or wider confidence intervals, without incurring other privacy cost.
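+
+For instance, a sketch that reuses the `age_mean` release from the example above:
+
+```python
+# A smaller alpha gives a wider (more conservative) interval; a larger alpha gives a narrower one.
+print("99% interval half-width: {0}".format(age_mean.get_accuracy(0.01)))
+print("95% interval half-width: {0}".format(age_mean.get_accuracy(0.05)))
+print("90% interval half-width: {0}".format(age_mean.get_accuracy(0.10)))
+```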
+
+## Generate a histogram
+
+The built-in `dp_histogram` function creates differentially private histograms over any of the following data types:
+
+- A continuous variable, where the set of numbers has to be divided into bins
+- A boolean or dichotomous variable that can only take on two values
+- A categorical variable, where there are distinct categories enumerated as strings
+
+Here's an example of an `Analysis` specifying bins for a continuous variable histogram:
+
+```python
+income_edges = list(range(0, 100000, 10000))
+
+with sn.Analysis() as analysis:
+ data = sn.Dataset(path = data_path, column_names = var_names)
+
+ income_histogram = sn.dp_histogram(
+ sn.cast(data['income'], type='int', lower=0, upper=100),
+ edges = income_edges,
+ upper = 1000,
+ null_value = 150,
+ privacy_usage = {'epsilon': 0.5}
+ )
+```
+
+Because the individuals are disjointly partitioned among histogram bins, the privacy cost is incurred only once per histogram, even if the histogram includes many bins.
+
+For more on histograms, see the [histograms notebook](https://github.com/opendifferentialprivacy/smartnoise-samples/blob/master/analysis/histograms.ipynb).
+
+## Generate a covariance matrix
+
+SmartNoise offers three different functionalities with its `dp_covariance` function:
+
+- Covariance between two vectors
+- Covariance matrix of a matrix
+- Cross-covariance matrix of a pair of matrices
+
+Here's an example of computing a scalar covariance:
+
+```python
+with sn.Analysis() as analysis:
+ wn_data = sn.Dataset(path = data_path, column_names = var_names)
+
+ age_income_cov_scalar = sn.dp_covariance(
+ left = sn.cast(wn_data['age'],
+ type = "FLOAT"),
+ right = sn.cast(wn_data['income'],
+ type = "FLOAT"),
+ privacy_usage = {'epsilon': 1.0},
+ left_lower = 0.,
+ left_upper = 100.,
+ left_n = 1000,
+ right_lower = 0.,
+ right_upper = 500_000.,
+ right_n = 1000)
+```
+
+For more information, see the [covariance notebook](https://github.com/opendifferentialprivacy/smartnoise-samples/blob/master/analysis/covariance.ipynb).
+
+## Next Steps
+
+- Explore [SmartNoise sample notebooks](https://github.com/opendifferentialprivacy/smartnoise-samples/tree/master/analysis).
machine-learning How To Enable Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-enable-data-collection.md
This article shows how to collect data from an Azure Machine Learning model depl
Once collection is enabled, the data you collect helps you:
-* [Monitor data drifts](../how-to-monitor-datasets.md) on the production data you collect.
+* [Monitor data drifts](how-to-monitor-datasets.md) on the production data you collect.
* Analyze collected data using [Power BI](#powerbi) or [Azure Databricks](#databricks)
The path to the output data in the blob follows this syntax:
- You need a trained machine-learning model to be deployed to AKS. If you don't have a model, see the [Train image classification model](../tutorial-train-deploy-notebook.md) tutorial. -- You need an AKS cluster. For information on how to create one and deploy to it, see [How to deploy and where](how-to-deploy-and-where.md).
+- You need an AKS cluster. For information on how to create one and deploy to it, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).
- [Set up your environment](../how-to-configure-environment.md) and install the [Azure Machine Learning Monitoring SDK](/python/api/overview/azure/ml/install).
To enable data collection, you need to:
aks_config = AksWebservice.deploy_configuration(collect_model_data=True, enable_app_insights=True) ```
-1. To create a new image and deploy the machine learning model, see [How to deploy and where](how-to-deploy-and-where.md).
+1. To create a new image and deploy the machine learning model, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).
1. Add the 'Azure-Monitoring' pip package to the conda-dependencies of the web service environment: ```Python
You can choose a tool of your preference to analyze the data collected in your B
## Next steps
-[Detect data drift](../how-to-monitor-datasets.md) on the data you have collected.
+[Detect data drift](how-to-monitor-datasets.md) on the data you have collected.
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-export-delete-data.md
+
+ Title: Export or delete workspace data
+
+description: Learn how to export or delete your workspace with the Azure Machine Learning studio, CLI, SDK, and authenticated REST APIs.
+++++ Last updated : 10/21/2021+++++
+# Export or delete your Machine Learning service workspace data
+
+In Azure Machine Learning, you can export or delete your workspace data using either the portal's graphical interface or the Python SDK. This article describes both options.
+++
+## Control your workspace data
+
+In-product data stored by Azure Machine Learning is available for export and deletion. You can export and delete using Azure Machine Learning studio, CLI, and SDK. Telemetry data can be accessed through the Azure Privacy portal.
+
+In Azure Machine Learning, personal data consists of user information in job history documents.
+
+## Delete high-level resources using the portal
+
+When you create a workspace, Azure creates several resources within the resource group:
+
+- The workspace itself
+- A storage account
+- A container registry
+- An Application Insights instance
+- A key vault
+
+These resources can be deleted by selecting them from the list and choosing **Delete**.
++
+Job history documents, which may contain personal user information, are stored in the storage account in blob storage, in subfolders of `/azureml`. You can download and delete the data from the portal.
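+If you want to review these blobs programmatically before deleting them in the portal, a minimal sketch using the Azure Storage SDK might look like the following. The connection string and container name are assumptions for illustration:
+
+```python
+from azure.storage.blob import BlobServiceClient
+
+# Assumed values: substitute your storage account connection string and the
+# blob container used by your workspace.
+service = BlobServiceClient.from_connection_string("<storage-connection-string>")
+container = service.get_container_client("<workspace-blob-container>")
+
+# Job history documents are stored under the /azureml prefix
+for blob in container.list_blobs(name_starts_with="azureml/"):
+    print(blob.name)
+```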
++
+## Export and delete machine learning resources using Azure Machine Learning studio
+
+Azure Machine Learning studio provides a unified view of your machine learning resources, such as notebooks, datasets, models, and experiments. Azure Machine Learning studio emphasizes preserving a record of your data and experiments. Computational resources such as pipelines and compute resources can be deleted using the browser. For these resources, navigate to the resource in question and choose **Delete**.
+
+Datasets can be unregistered and Experiments can be archived, but these operations don't delete the data. To entirely remove the data, datasets and experiment data must be deleted at the storage level. Deleting at the storage level is done using the portal, as described previously. An individual Job can be deleted directly in studio. Deleting a Job deletes the Job's data.
+
+> [!NOTE]
+> Prior to unregistering a Dataset, use its **Data source** link to find the specific Data URL to delete.
+
+You can download training artifacts from experimental jobs using the Studio. Choose the **Experiment** and **Job** in which you're interested. Choose **Output + logs** and navigate to the specific artifacts you wish to download. Choose **...** and **Download**.
+
+You can download a registered model by navigating to the **Model** and choosing **Download**.
++
+## Export and delete resources using the Python SDK
+
+You can download the outputs of a particular job using:
+
+```python
+from azureml.core import Workspace
+
+ws = Workspace.from_config()
+
+# Retrieved from Azure Machine Learning web UI
+run_id = 'aaaaaaaa-bbbb-cccc-dddd-0123456789AB'
+experiment = ws.experiments['my-experiment']
+run = next(run for run in experiment.get_runs() if run.id == run_id)
+metrics_output_port = run.get_pipeline_output('metrics_output')
+model_output_port = run.get_pipeline_output('model_output')
+
+metrics_output_port.download('.', show_progress=True)
+model_output_port.download('.', show_progress=True)
+```
+
+The following machine learning resources can be deleted using the Python SDK:
+
+| Type | Function Call | Notes |
+| | | |
+| `Workspace` | [`delete`](/python/api/azureml-core/azureml.core.workspace.workspace#delete-delete-dependent-resources-false--no-wait-false-) | Use `delete_dependent_resources` to cascade the delete |
+| `Model` | [`delete`](/python/api/azureml-core/azureml.core.model%28class%29#delete--) | |
+| `ComputeTarget` | [`delete`](/python/api/azureml-core/azureml.core.computetarget#delete--) | |
+| `WebService` | [`delete`](/python/api/azureml-core/azureml.core.webservice%28class%29) | |
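+As a hedged illustration of these calls (the resource names below are placeholders, not from this article), deleting individual resources and then the workspace might look like:
+
+```python
+from azureml.core import Workspace, Model
+from azureml.core.webservice import Webservice
+
+ws = Workspace.from_config()
+
+# Delete a registered model by name (placeholder name)
+Model(ws, name='my-model').delete()
+
+# Delete a deployed web service by name (placeholder name)
+Webservice(ws, name='my-service').delete()
+
+# Delete the workspace and cascade the delete to dependent resources
+ws.delete(delete_dependent_resources=True)
+```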
machine-learning How To Identity Based Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-identity-based-data-access.md
In this article, you learn how to connect to storage services on Azure by using
Typically, datastores use **credential-based authentication** to confirm you have permission to access the storage service. They keep connection information, like your subscription ID and token authorization, in the [key vault](https://azure.microsoft.com/services/key-vault/) that's associated with the workspace. When you create a datastore that uses **identity-based data access**, your Azure account ([Azure Active Directory token](../../active-directory/fundamentals/active-directory-whatis.md)) is used to confirm you have permission to access the storage service. In the **identity-based data access** scenario, no authentication credentials are saved. Only the storage account information is stored in the datastore.
-To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](../how-to-connect-data-ui.md#create-datastores).
+To create datastores with **identity-based** data access via the Azure Machine Learning studio UI, see [Connect to data with the Azure Machine Learning studio](how-to-connect-data-ui.md#create-datastores).
To create datastores that use **credential-based** authentication, like access keys or service principals, see [Connect to storage services on Azure](how-to-access-data.md).
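As a sketch of the SDK route (the datastore, container, and account names below are assumptions), registering a blob datastore without an account key or SAS token creates a datastore that uses identity-based data access:

```python
from azureml.core import Workspace, Datastore, Dataset

ws = Workspace.from_config()

# Omitting credentials (account key or SAS token) results in identity-based access.
# The names below are placeholders.
blob_datastore = Datastore.register_azure_blob_container(
    workspace=ws,
    datastore_name='credentialless_blob',
    container_name='my-container',
    account_name='mystorageaccount'
)

# Reference a path on the datastore to create a dataset
titanic_ds = Dataset.Tabular.from_delimited_files((blob_datastore, 'data/titanic.csv'))
```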
We recommend that you use [Azure Machine Learning datasets](how-to-create-regist
> [!IMPORTANT]
> Datasets using identity-based data access are not supported for [automated ML experiments](../how-to-configure-auto-train.md).
-Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](../how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
+Datasets package your data into a lazily evaluated consumable object for machine learning tasks like training. Also, with datasets you can [download or mount](how-to-train-with-datasets.md#mount-vs-download) files of any format from Azure storage services like Azure Blob Storage and Azure Data Lake Storage to a compute target.
To create a dataset, you can reference paths from datastores that also use identity-based data access.
identity:
## Next steps
* [Create an Azure Machine Learning dataset](how-to-create-register-datasets.md)
-* [Train with datasets](../how-to-train-with-datasets.md)
+* [Train with datasets](how-to-train-with-datasets.md)
* [Create a datastore with key-based data access](how-to-access-data.md)
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-log-pipelines-application-insights.md
+
+ Title: 'Monitor & collect pipeline log files'
+
+description: Add logging to your training and batch scoring pipelines and view the logged results in Application Insights.
+ Last updated : 10/21/2021
+# Collect machine learning pipeline log files in Application Insights for alerts and debugging
++
+The [OpenCensus](https://opencensus.io/quickstart/python/) Python library can be used to route logs to Application Insights from your scripts. Aggregating logs from pipeline runs in one place allows you to build queries and diagnose issues. Using Application Insights will allow you to track logs over time and compare pipeline logs across runs.
+
+Having your logs in one place will provide a history of exceptions and error messages. Since Application Insights integrates with Azure Alerts, you can also create alerts based on Application Insights queries.
+
+## Prerequisites
+
+* Follow the steps to create an [Azure Machine Learning workspace](../quickstart-create-resources.md) and [create your first pipeline](./how-to-create-machine-learning-pipelines.md)
+* [Configure your development environment](../how-to-configure-environment.md) to install the Azure Machine Learning SDK.
+* Install the [OpenCensus Azure Monitor Exporter](https://pypi.org/project/opencensus-ext-azure/) package locally:
+  ```bash
+ pip install opencensus-ext-azure
+ ```
+* Create an [Application Insights instance](/azure/azure-monitor/app/opencensus-python) (this doc also contains information on getting the connection string for the resource)
+
+## Getting Started
+
+This section is an introduction specific to using OpenCensus from an Azure Machine Learning pipeline. For a detailed tutorial, see [OpenCensus Azure Monitor Exporters](https://github.com/census-instrumentation/opencensus-python/tree/master/contrib/opencensus-ext-azure)
+
+Add a PythonScriptStep to your Azure ML Pipeline. Configure your [RunConfiguration](/python/api/azureml-core/azureml.core.runconfiguration) with the dependency on opencensus-ext-azure. Configure the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable.
+
+```python
+from azureml.core.conda_dependencies import CondaDependencies
+from azureml.core.runconfig import RunConfiguration
+from azureml.pipeline.core import Pipeline
+from azureml.pipeline.steps import PythonScriptStep
+
+# Connecting to the workspace and compute target not shown
+
+# Add pip dependency on OpenCensus
+dependencies = CondaDependencies()
+dependencies.add_pip_package("opencensus-ext-azure>=1.0.1")
+run_config = RunConfiguration(conda_dependencies=dependencies)
+
+# Add environment variable with Application Insights Connection String
+# Replace the value with your own connection string
+run_config.environment.environment_variables = {
+ "APPLICATIONINSIGHTS_CONNECTION_STRING": 'InstrumentationKey=00000000-0000-0000-0000-000000000000'
+}
+
+# Configure step with runconfig
+sample_step = PythonScriptStep(
+ script_name="sample_step.py",
+ compute_target=compute_target,
+ runconfig=run_config
+)
+
+# Submit new pipeline run
+pipeline = Pipeline(workspace=ws, steps=[sample_step])
+pipeline.submit(experiment_name="Logging_Experiment")
+```
+
+Create a file called `sample_step.py`. Import the AzureLogHandler class to route logs to Application Insights. You'll also need to import the Python Logging library.
+
+```python
+from opencensus.ext.azure.log_exporter import AzureLogHandler
+import logging
+```
+
+Next, add the AzureLogHandler to the Python logger.
+
+```python
+logger = logging.getLogger(__name__)
+logger.setLevel(logging.DEBUG)
+logger.addHandler(logging.StreamHandler())
+
+# Assumes the environment variable APPLICATIONINSIGHTS_CONNECTION_STRING is already set
+logger.addHandler(AzureLogHandler())
+logger.warning("I will be sent to Application Insights")
+```
+
+## Logging with Custom Dimensions
+
+By default, logs forwarded to Application Insights won't have enough context to trace back to the run or experiment. To make the logs actionable for diagnosing issues, more fields are needed.
+
+To add these fields, Custom Dimensions can be added to provide context to a log message. One example is when someone wants to view logs across multiple steps in the same pipeline run.
+
+Custom Dimensions make up a dictionary of key-value (stored as string, string) pairs. The dictionary is then sent to Application Insights and displayed as a column in the query results. Its individual dimensions can be used as [query parameters](#other-helpful-queries).
+
+### Helpful Context to include
+
+| Field | Reasoning/Example |
+|--|--|
+| parent_run_id | Can query logs for ones with the same parent_run_id to see logs over time for all steps, instead of having to dive into each individual step |
+| step_id | Can query logs for ones with the same step_id to see where an issue occurred with a narrow scope to just the individual step |
+| step_name | Can query logs to see step performance over time. Also helps to find a step_id for recent runs without diving into the portal UI |
+| experiment_name | Can query across logs to see experiment performance over time. Also helps find a parent_run_id or step_id for recent runs without diving into the portal UI |
+| run_url | Can provide a link directly back to the run for investigation. |
+
+**Other helpful fields**
+
+These fields may require extra code instrumentation, and aren't provided by the run context.
+
+| Field | Reasoning/Example |
+|-|--|
+| build_url/build_version | If using CI/CD to deploy, this field can correlate logs to the code version that provided the step and pipeline logic. This link can further help to diagnose issues, or identify models with specific traits (log/metric values) |
+| run_type | Can differentiate between different model types, or training vs. scoring runs |
+
+### Creating a Custom Dimensions dictionary
+
+```python
+from azureml.core import Run
+
+run = Run.get_context(allow_offline=False)
+
+custom_dimensions = {
+ "parent_run_id": run.parent.id,
+ "step_id": run.id,
+ "step_name": run.name,
+ "experiment_name": run.experiment.name,
+ "run_url": run.parent.get_portal_url(),
+ "run_type": "training"
+}
+
+# Assumes AzureLogHandler was already registered above
+logger.info("I will be sent to Application Insights with Custom Dimensions", extra= {"custom_dimensions":custom_dimensions})
+```
+
+## OpenCensus Python logging considerations
+
+The OpenCensus AzureLogHandler is used to route Python logs to Application Insights. As a result, Python logging nuances should be considered. When a logger is created, it has a default log level and will show logs greater than or equal to that level. A good reference for using Python logging features is the [Logging Cookbook](https://docs.python.org/3/howto/logging-cookbook.html).
+
+The `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable is needed for the OpenCensus library. We recommend setting this environment variable instead of passing it in as a pipeline parameter to avoid passing around plaintext connection strings.
+
+## Querying logs in Application Insights
+
+The logs routed to Application Insights will show up under 'traces' or 'exceptions'. Be sure to adjust your time window to include your pipeline run.
+
+![Application Insights Query result](../media/how-to-debug-pipelines-application-insights/traces-application-insights-query.png)
+
+The result in Application Insights will show the log message and level, file path, and code line number. It will also show any custom dimensions included. In this image, the customDimensions dictionary shows the key/value pairs from the previous [code sample](#creating-a-custom-dimensions-dictionary).
+
+### Other helpful queries
+
+Some of the queries below use 'customDimensions.Level'. These severity levels correspond to the level the Python log was originally sent with. For more query information, see [Azure Monitor Log Queries](/azure/data-explorer/kusto/query/).
+
+| Use case | Query |
+||-|
+| Log results for a specific custom dimension, for example 'parent_run_id' | <pre>traces \| <br>where customDimensions.parent_run_id == '931024c2-3720-11ea-b247-c49deda841c1'</pre> |
+| Log results for all training runs over the last seven days | <pre>traces \| <br>where timestamp > ago(7d) <br>and customDimensions.run_type == 'training'</pre> |
+| Log results with severityLevel Error from the last seven days | <pre>traces \| <br>where timestamp > ago(7d) <br>and customDimensions.Level == 'ERROR'</pre> |
+| Count of log results with severityLevel Error over the last seven days | <pre>traces \| <br>where timestamp > ago(7d) <br>and customDimensions.Level == 'ERROR' \| <br>summarize count()</pre> |
+
+## Next Steps
+
+Once you have logs in your Application Insights instance, they can be used to set [Azure Monitor alerts](/azure/azure-monitor/alerts/alerts-overview) based on query results.
+
+You can also add results from queries to an [Azure Dashboard](/azure/azure-monitor/app/tutorial-app-dashboards#add-logs-query) for more insights.
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-monitor-datasets.md
+
+ Title: Detect data drift on datasets (preview)
+
+description: Learn how to set up data drift detection in Azure Machine Learning. Create dataset monitors (preview), monitor for data drift, and set up alerts.
+ Last updated : 08/17/2022
+#Customer intent: As a data scientist, I want to detect data drift in my datasets and set alerts for when drift is large.
++
+# Detect data drift (preview) on datasets
++
+Learn how to monitor data drift and set alerts when drift is high.
+
+With Azure Machine Learning dataset monitors (preview), you can:
+* **Analyze drift in your data** to understand how it changes over time.
+* **Monitor model data** for differences between training and serving datasets. Start by [collecting model data from deployed models](how-to-enable-data-collection.md).
+* **Monitor new data** for differences between any baseline and target dataset.
+* **Profile features in data** to track how statistical properties change over time.
+* **Set up alerts on data drift** for early warnings to potential issues.
+* **[Create a new dataset version](how-to-version-track-datasets.md)** when you determine the data has drifted too much.
+
+An [Azure Machine Learning dataset](how-to-create-register-datasets.md) is used to create the monitor. The dataset must include a timestamp column.
+
+You can view data drift metrics with the Python SDK or in Azure Machine Learning studio. Other metrics and insights are available through the [Azure Application Insights](/azure/azure-monitor/app/app-insights-overview) resource associated with the Azure Machine Learning workspace.
+
+> [!IMPORTANT]
+> Data drift detection for datasets is currently in public preview.
+> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+## Prerequisites
+
+To create and work with dataset monitors, you need:
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+* An [Azure Machine Learning workspace](../quickstart-create-resources.md).
+* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package.
+* Structured (tabular) data with a timestamp specified in the file path, file name, or column in the data.
+
+## What is data drift?
+
+Data drift is one of the top reasons model accuracy degrades over time. For machine learning models, data drift is the change in model input data that leads to model performance degradation. Monitoring data drift helps detect these model performance issues.
+
+Causes of data drift include:
+
+- Upstream process changes, such as a sensor being replaced that changes the units of measurement from inches to centimeters.
+- Data quality issues, such as a broken sensor always reading 0.
+- Natural drift in the data, such as mean temperature changing with the seasons.
+- Change in relation between features, or covariate shift.
+
+Azure Machine Learning simplifies drift detection by computing a single metric abstracting the complexity of datasets being compared. These datasets may have hundreds of features and tens of thousands of rows. Once drift is detected, you drill down into which features are causing the drift. You then inspect feature level metrics to debug and isolate the root cause for the drift.
+
+This top-down approach makes it easier to monitor data than traditional rules-based techniques. Rules-based techniques, such as allowed data ranges or allowed unique values, can be time consuming and error prone.
+
+In Azure Machine Learning, you use dataset monitors to detect and alert for data drift.
+
+### Dataset monitors
+
+With a dataset monitor you can:
+
+* Detect and alert to data drift on new data in a dataset.
+* Analyze historical data for drift.
+* Profile new data over time.
+
+The data drift algorithm provides an overall measure of change in data and an indication of which features to investigate further. Dataset monitors produce a number of other metrics by profiling new data in the `timeseries` dataset.
+
+Custom alerting can be set up on all metrics generated by the monitor through [Azure Application Insights](/azure/azure-monitor/app/app-insights-overview). Dataset monitors can be used to quickly catch data issues and reduce the time to debug the issue by identifying likely causes.
+
+Conceptually, there are three primary scenarios for setting up dataset monitors in Azure Machine Learning.
+
+Scenario | Description
+--- | ---
+Monitor a model's serving data for drift from the training data | Results from this scenario can be interpreted as monitoring a proxy for the model's accuracy, since model accuracy degrades when the serving data drifts from the training data.
+Monitor a time series dataset for drift from a previous time period. | This scenario is more general, and can be used to monitor datasets involved upstream or downstream of model building. The target dataset must have a timestamp column. The baseline dataset can be any tabular dataset that has features in common with the target dataset.
+Perform analysis on past data. | This scenario can be used to understand historical data and inform decisions in settings for dataset monitors.
+
+Dataset monitors depend on the following Azure services.
+
+|Azure service |Description |
+|--|--|
+| *Dataset* | Drift uses Machine Learning datasets to retrieve training data and compare data for model training. Profiling the data generates some of the reported metrics, such as min, max, distinct values, and distinct value count. |
+| *Azureml pipeline and compute* | The drift calculation job is hosted in an azureml pipeline. The job is triggered on demand or by schedule to run on a compute configured at drift monitor creation time. |
+| *Application insights* | Drift emits metrics to Application Insights belonging to the machine learning workspace. |
+| *Azure blob storage* | Drift emits metrics in JSON format to Azure blob storage. |
+
+### Baseline and target datasets
+
+You monitor [Azure machine learning datasets](how-to-create-register-datasets.md) for data drift. When you create a dataset monitor, you will reference your:
+* Baseline dataset - usually the training dataset for a model.
+* Target dataset - usually model input data - is compared over time to your baseline dataset. This comparison means that your target dataset must have a timestamp column specified.
+
+The monitor will compare the baseline and target datasets.
+
+## Create target dataset
+
+The target dataset needs the `timeseries` trait set on it by specifying the timestamp column, either from a column in the data or a virtual column derived from the path pattern of the files. Create the dataset with a timestamp through the [Python SDK](#sdk-dataset) or [Azure Machine Learning studio](#studio-dataset). A column representing a "timestamp" must be specified to add the `timeseries` trait to the dataset. If your data is partitioned into a folder structure with time information, such as '{yyyy/MM/dd}', create a virtual column through the path pattern setting and set it as the "partition timestamp" to enable time series API functionality.
+
+# [Python](#tab/python)
+<a name="sdk-dataset"></a>
++
+The [`Dataset`](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false-kwargs-) class [`with_timestamp_columns()`](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false-kwargs-) method defines the time stamp column for the dataset.
+
+```python
+from azureml.core import Workspace, Dataset, Datastore
+
+# get workspace object
+ws = Workspace.from_config()
+
+# get datastore object
+dstore = Datastore.get(ws, 'your datastore name')
+
+# specify datastore paths
+dstore_paths = [(dstore, 'weather/*/*/*/*/data.parquet')]
+
+# specify partition format
+partition_format = 'weather/{state}/{date:yyyy/MM/dd}/data.parquet'
+
+# create the Tabular dataset with 'state' and 'date' as virtual columns
+dset = Dataset.Tabular.from_parquet_files(path=dstore_paths, partition_format=partition_format)
+
+# assign the timestamp attribute to a real or virtual column in the dataset
+dset = dset.with_timestamp_columns('date')
+
+# register the dataset as the target dataset
+dset = dset.register(ws, 'target')
+```
+
+> [!TIP]
+> For a full example of using the `timeseries` trait of datasets, see the [example notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/timeseries-datasets/tabular-timeseries-dataset-filtering.ipynb) or the [datasets SDK documentation](/python/api/azureml-core/azureml.data.tabulardataset#with-timestamp-columns-timestamp-none--partition-timestamp-none--validate-false-kwargs-).
+
+# [Studio](#tab/azure-studio)
+
+<a name="studio-dataset"></a>
+
+If you create your dataset using Azure Machine Learning studio, ensure the path to your data contains timestamp information, include all subfolders with data, and set the partition format.
+
+In the following example, all data under the subfolder *NoaaIsdFlorida/2019* is taken, and the partition format specifies the timestamp's year, month, and day.
+
+[![Partition format](./media/how-to-monitor-datasets/partition-format.png)](media/how-to-monitor-datasets/partition-format-expand.png)
+
+In the **Schema** settings, specify the **timestamp** column from a virtual or real column in the specified dataset. This type indicates that your data has a time component.
++
+If your data is already partitioned by date or time, as is the case here, you can also specify the **Partition timestamp**. This allows more efficient processing of dates and enables timeseries APIs that you can leverage during training.
++++
+## Create dataset monitor
+
+Create a dataset monitor to detect and alert to data drift on a new dataset. Use either the [Python SDK](#sdk-monitor) or [Azure Machine Learning studio](#studio-monitor).
+
+# [Python](#tab/python)
+<a name="sdk-monitor"></a>
++
+See the [Python SDK reference documentation on data drift](/python/api/azureml-datadrift/azureml.datadrift) for full details.
+
+The following example shows how to create a dataset monitor using the Python SDK:
+
+```python
+from azureml.core import Workspace, Dataset
+from azureml.datadrift import DataDriftDetector
+from datetime import datetime
+
+# get the workspace object
+ws = Workspace.from_config()
+
+# get the target dataset
+target = Dataset.get_by_name(ws, 'target')
+
+# set the baseline dataset
+baseline = target.time_before(datetime(2019, 2, 1))
+
+# set up feature list
+features = ['latitude', 'longitude', 'elevation', 'windAngle', 'windSpeed', 'temperature', 'snowDepth', 'stationName', 'countryOrRegion']
+
+# set up data drift detector
+monitor = DataDriftDetector.create_from_datasets(ws, 'drift-monitor', baseline, target,
+ compute_target='cpu-cluster',
+ frequency='Week',
+ feature_list=None,
+ drift_threshold=.6,
+ latency=24)
+
+# get data drift detector by name
+monitor = DataDriftDetector.get_by_name(ws, 'drift-monitor')
+
+# update data drift detector
+monitor = monitor.update(feature_list=features)
+
+# run a backfill for January through May
+backfill1 = monitor.backfill(datetime(2019, 1, 1), datetime(2019, 5, 1))
+
+# run a backfill for May through today
+backfill1 = monitor.backfill(datetime(2019, 5, 1), datetime.today())
+
+# disable the pipeline schedule for the data drift detector
+monitor = monitor.disable_schedule()
+
+# enable the pipeline schedule for the data drift detector
+monitor = monitor.enable_schedule()
+```
+
+> [!TIP]
+> For a full example of setting up a `timeseries` dataset and data drift detector, see our [example notebook](https://aka.ms/datadrift-notebook).
++
+# [Studio](#tab/azure-studio)
+<a name="studio-monitor"></a>
+
+1. Navigate to the [studio's homepage](https://ml.azure.com).
+1. Select the **Data** tab on the left.
+1. Select **Dataset monitors**.
+ ![Monitor list](./media/how-to-monitor-datasets/monitor-list.png)
+
+1. Click on the **+Create monitor** button and continue through the wizard by clicking **Next**.
++
+* **Select target dataset**. The target dataset is a tabular dataset with a timestamp column specified, which will be analyzed for data drift. The target dataset must have features in common with the baseline dataset, and should be a `timeseries` dataset to which new data is appended. Historical data in the target dataset can be analyzed, or new data can be monitored.
+
+* **Select baseline dataset.** Select the tabular dataset to be used as the baseline for comparison of the target dataset over time. The baseline dataset must have features in common with the target dataset. Select a time range to use a slice of the target dataset, or specify a separate dataset to use as the baseline.
+
+* **Monitor settings**. These settings are for the scheduled dataset monitor pipeline, which will be created.
+
+ | Setting | Description | Tips | Mutable |
+ | - | -- | - | - |
+ | Name | Name of the dataset monitor. | | No |
+    | Features | List of features that will be analyzed for data drift over time. | Set to a model's output feature(s) to measure concept drift. Don't include features that naturally drift over time (month, year, index, etc.). You can backfill an existing data drift monitor after adjusting the list of features. | Yes |
+ | Compute target | Azure Machine Learning compute target to run the dataset monitor jobs. | | Yes |
+ | Enable | Enable or disable the schedule on the dataset monitor pipeline | Disable the schedule to analyze historical data with the backfill setting. It can be enabled after the dataset monitor is created. | Yes |
+ | Frequency | The frequency that will be used to schedule the pipeline job and analyze historical data if running a backfill. Options include daily, weekly, or monthly. | Each job compares data in the target dataset according to the frequency: <li>Daily: Compare most recent complete day in target dataset with baseline <li>Weekly: Compare most recent complete week (Monday - Sunday) in target dataset with baseline <li>Monthly: Compare most recent complete month in target dataset with baseline | No |
+ | Latency | Time, in hours, it takes for data to arrive in the dataset. For instance, if it takes three days for data to arrive in the SQL DB the dataset encapsulates, set the latency to 72. | Cannot be changed after the dataset monitor is created | No |
+ | Email addresses | Email addresses for alerting based on breach of the data drift percentage threshold. | Emails are sent through Azure Monitor. | Yes |
+ | Threshold | Data drift percentage threshold for email alerting. | Further alerts and events can be set on many other metrics in the workspace's associated Application Insights resource. | Yes |
+
+After finishing the wizard, the resulting dataset monitor will appear in the list. Select it to go to that monitor's details page.
+++
+## Understand data drift results
+
+This section shows you the results of monitoring a dataset, found in the **Datasets** / **Dataset monitors** page in Azure Machine Learning studio. You can update the settings, and analyze existing data for a specific time period, on this page.
+
+Start with the top-level insights into the magnitude of data drift and a highlight of features to be further investigated.
+++
+| Metric | Description |
+| | -- |
+| Data drift magnitude | A percentage of drift between the baseline and target dataset over time. Ranging from 0 to 100, 0 indicates identical datasets and 100 indicates the Azure Machine Learning data drift model can completely tell the two datasets apart. Noise in the precise percentage measured is expected due to machine learning techniques being used to generate this magnitude. |
+| Top drifting features | Shows the features from the dataset that have drifted the most and are therefore contributing the most to the Drift Magnitude metric. Due to covariate shift, the underlying distribution of a feature does not necessarily need to change to have relatively high feature importance. |
+| Threshold | Data Drift magnitude beyond the set threshold will trigger alerts. This can be configured in the monitor settings. |
+
+### Drift magnitude trend
+
+See how the dataset differs from the target dataset in the specified time period. The closer to 100%, the more the two datasets differ.
++
+### Drift magnitude by features
+
+This section contains feature-level insights into the change in the selected feature's distribution, as well as other statistics, over time.
+
+The target dataset is also profiled over time. The statistical distance between each feature's baseline distribution and its distribution in the target dataset is tracked over time. Conceptually, this is similar to the data drift magnitude. However, this statistical distance is for an individual feature rather than all features. Min, max, and mean are also available.
+
+In the Azure Machine Learning studio, click on a bar in the graph to see the feature-level details for that date. By default, you see the baseline dataset's distribution and the most recent job's distribution of the same feature.
++
+These metrics can also be retrieved in the Python SDK through the `get_metrics()` method on a `DataDriftDetector` object.
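+For example, a minimal sketch of retrieving these metrics with the SDK (the monitor name matches the earlier example and is otherwise a placeholder):
+
+```python
+from azureml.core import Workspace
+from azureml.datadrift import DataDriftDetector
+
+ws = Workspace.from_config()
+
+# Retrieve the monitor created earlier and query the metrics it has computed
+monitor = DataDriftDetector.get_by_name(ws, 'drift-monitor')
+metrics = monitor.get_metrics()
+print(metrics)
+```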
+
+### Feature details
+
+Finally, scroll down to view details for each individual feature. Use the dropdowns above the chart to select the feature, and additionally select the metric you want to view.
++
+Metrics in the chart depend on the type of feature.
+
+* Numeric features
+
+ | Metric | Description |
+ | | -- |
+ | Wasserstein distance | Minimum amount of work to transform baseline distribution into the target distribution. |
+ | Mean value | Average value of the feature. |
+ | Min value | Minimum value of the feature. |
+ | Max value | Maximum value of the feature. |
+
+* Categorical features
+
+ | Metric | Description |
+ | | -- |
+    | Euclidean distance | Computed for categorical columns. Euclidean distance is computed on two vectors, generated from the empirical distribution of the same categorical column from two datasets. 0 indicates there is no difference in the empirical distributions. The more it deviates from 0, the more this column has drifted. Trends can be observed from a time series plot of this metric and can be helpful in uncovering a drifting feature. |
+ | Unique values | Number of unique values (cardinality) of the feature. |
+
+On this chart, select a single date to compare the feature distribution between the target and this date for the displayed feature. For numeric features, this shows two probability distributions. If the feature is categorical, a bar chart is shown.
++
+## Metrics, alerts, and events
+
+Metrics can be queried in the [Azure Application Insights](/azure/azure-monitor/app/app-insights-overview) resource associated with your machine learning workspace. You have access to all features of Application Insights, including setting up custom alert rules and action groups to trigger an action such as an email, SMS, push notification, or voice call, or an Azure Function. Refer to the complete Application Insights documentation for details.
+
+To get started, navigate to the [Azure portal](https://portal.azure.com) and select your workspace's **Overview** page. The associated Application Insights resource is on the far right:
+
+[![Azure portal overview](./media/how-to-monitor-datasets/ap-overview.png)](media/how-to-monitor-datasets/ap-overview-expanded.png)
+
+Select Logs (Analytics) under Monitoring on the left pane:
+
+![Application insights overview](./media/how-to-monitor-datasets/ai-overview.png)
+
+The dataset monitor metrics are stored as `customMetrics`. You can write and run a query after setting up a dataset monitor to view them:
+
+[![Log analytics query](./media/how-to-monitor-datasets/simple-query.png)](media/how-to-monitor-datasets/simple-query-expanded.png)
+
+After identifying metrics to set up alert rules, create a new alert rule:
+
+![New alert rule](./media/how-to-monitor-datasets/alert-rule.png)
+
+You can use an existing action group, or create a new one to define the action to be taken when the set conditions are met:
+
+![New action group](./media/how-to-monitor-datasets/action-group.png)
++
+## Troubleshooting
+
+Limitations and known issues for data drift monitors:
+
+* The time range when analyzing historical data is limited to 31 intervals of the monitor's frequency setting.
+* The feature list is limited to 200 features. If no feature list is specified, all features are used.
+* Compute size must be large enough to handle the data.
+* Ensure your dataset has data within the start and end date for a given monitor job.
+* Dataset monitors will only work on datasets that contain 50 rows or more.
+* Columns, or features, in the dataset are classified as categorical or numeric based on the conditions in the following table. If the feature does not meet these conditions - for instance, a column of type string with >100 unique values - the feature is dropped from our data drift algorithm, but is still profiled.
+
+ | Feature type | Data type | Condition | Limitations |
+ | | | | -- |
+ | Categorical | string, bool, int, float | The number of unique values in the feature is less than 100 and less than 5% of the number of rows. | Null is treated as its own category. |
+ | Numerical | int, float | The values in the feature are of a numerical data type and do not meet the condition for a categorical feature. | Feature dropped if >15% of values are null. |
+
+* When you have created a data drift monitor but cannot see data on the **Dataset monitors** page in Azure Machine Learning studio, try the following.
+
+ 1. Check if you have selected the right date range at the top of the page.
+ 1. On the **Dataset Monitors** tab, select the experiment link to check job status. This link is on the far right of the table.
+    1. If the job completed successfully, check the driver logs to see how many metrics have been generated or if there are any warning messages. Find driver logs in the **Output + logs** tab after you select an experiment.
+
+* If the SDK `backfill()` function does not generate the expected output, it may be due to an authentication issue. When you create the compute to pass into this function, do not use `Run.get_context().experiment.workspace.compute_targets`. Instead, use [ServicePrincipalAuthentication](/python/api/azureml-core/azureml.core.authentication.serviceprincipalauthentication) such as the following to create the compute that you pass into that `backfill()` function:
+
+ ```python
+ auth = ServicePrincipalAuthentication(
+ tenant_id=tenant_id,
+ service_principal_id=app_id,
+ service_principal_password=client_secret
+ )
+ ws = Workspace.get("xxx", auth=auth, subscription_id="xxx", resource_group="xxx")
+ compute = ws.compute_targets.get("xxx")
+ ```
+
+* From the Model Data Collector, it can take up to (but usually less than) 10 minutes for data to arrive in your blob storage account. In a script or Notebook, wait 10 minutes to ensure cells below will run.
+
+ ```python
+ import time
+ time.sleep(600)
+ ```
+
+## Next steps
+
+* Head to the [Azure Machine Learning studio](https://ml.azure.com) or the [Python notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datadrift-tutorial/datadrift-tutorial.ipynb) to set up a dataset monitor.
+* See how to set up data drift on [models deployed to Azure Kubernetes Service](how-to-enable-data-collection.md).
+* Set up dataset drift monitors with [Azure Event Grid](../how-to-use-event-grid.md).
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-move-data-in-out-of-pipelines.md
+
+ Title: 'Moving data in ML pipelines'
+
+description: Learn how Azure Machine Learning pipelines ingest data, and how to manage and move data between pipeline steps.
+ Last updated : 08/18/2022
+#Customer intent: As a data scientist using Python, I want to get data into my pipeline and flowing between steps.
++
+# Moving data into and between ML pipeline steps (Python)
++
+This article provides code for importing, transforming, and moving data between steps in an Azure Machine Learning pipeline. For an overview of how data works in Azure Machine Learning, see [Access data in Azure storage services](how-to-access-data.md). For the benefits and structure of Azure Machine Learning pipelines, see [What are Azure Machine Learning pipelines?](../concept-ml-pipelines.md)
+
+This article will show you how to:
+
+- Use `Dataset` objects for pre-existing data
+- Access data within your steps
+- Split `Dataset` data into subsets, such as training and validation subsets
+- Create `OutputFileDatasetConfig` objects to transfer data to the next pipeline step
+- Use `OutputFileDatasetConfig` objects as input to pipeline steps
+- Create new `Dataset` objects from `OutputFileDatasetConfig` you wish to persist
+
+## Prerequisites
+
+You'll need:
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+
+- The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro), or access to [Azure Machine Learning studio](https://ml.azure.com/).
+
+- An Azure Machine Learning workspace.
+
+ Either [create an Azure Machine Learning workspace](../quickstart-create-resources.md) or use an existing one via the Python SDK. Import the `Workspace` and `Datastore` class, and load your subscription information from the file `config.json` using the function `from_config()`. This function looks for the JSON file in the current directory by default, but you can also specify a path parameter to point to the file using `from_config(path="your/file/path")`.
+
+ ```python
+ import azureml.core
+ from azureml.core import Workspace, Datastore
+
+ ws = Workspace.from_config()
+ ```
+
+- Some pre-existing data. This article briefly shows the use of an [Azure blob container](/azure/storage/blobs/storage-blobs-overview).
+
+- Optional: An existing machine learning pipeline, such as the one described in [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md).
+
+## Use `Dataset` objects for pre-existing data
+
+The preferred way to ingest data into a pipeline is to use a [Dataset](/python/api/azureml-core/azureml.core.dataset%28class%29) object. `Dataset` objects represent persistent data available throughout a workspace.
+
+There are many ways to create and register `Dataset` objects. Tabular datasets are for delimited data available in one or more files. File datasets are for binary data (such as images) or for data that you'll parse. The simplest programmatic ways to create `Dataset` objects are to use existing blobs in workspace storage or public URLs:
+
+```python
+from azureml.core import Dataset, Datastore
+from azureml.data.datapath import DataPath
+
+# 'workspace' is the Workspace object created earlier (for example, via Workspace.from_config())
+datastore = Datastore.get(workspace, 'training_data')
+iris_dataset = Dataset.Tabular.from_delimited_files(DataPath(datastore, 'iris.csv'))
+
+datastore_path = [
+ DataPath(datastore, 'animals/dog/1.jpg'),
+ DataPath(datastore, 'animals/dog/2.jpg'),
+ DataPath(datastore, 'animals/cat/*.jpg')
+]
+cats_dogs_dataset = Dataset.File.from_files(path=datastore_path)
+```
+
+For more options on creating datasets with different options and from different sources, registering them and reviewing them in the Azure Machine Learning UI, understanding how data size interacts with compute capacity, and versioning them, see [Create Azure Machine Learning datasets](how-to-create-register-datasets.md).
+
+### Pass datasets to your script
+
+To pass the dataset's path to your script, use the `Dataset` object's `as_named_input()` method. You can either pass the resulting `DatasetConsumptionConfig` object to your script as an argument or, by using the `inputs` argument to your pipeline script, you can retrieve the dataset using `Run.get_context().input_datasets[]`.
+
+Once you've created a named input, you can choose its access mode: `as_mount()` or `as_download()`. If your script processes all the files in your dataset and the disk on your compute resource is large enough for the dataset, the download access mode is the better choice. The download access mode will avoid the overhead of streaming the data at runtime. If your script accesses a subset of the dataset or it's too large for your compute, use the mount access mode. For more information, read [Mount vs. Download](how-to-train-with-datasets.md#mount-vs-download)
+
+To pass a dataset to your pipeline step:
+
+1. Use `TabularDataset.as_named_input()` or `FileDataset.as_named_input()` (no 's' at end) to create a `DatasetConsumptionConfig` object
+1. Use `as_mount()` or `as_download()` to set the access mode
+1. Pass the datasets to your pipeline steps using either the `arguments` or the `inputs` argument
+
+The following snippet shows the common pattern of combining these steps within the `PythonScriptStep` constructor:
+
+```python
+
+train_step = PythonScriptStep(
+ name="train_data",
+ script_name="train.py",
+ compute_target=cluster,
+ inputs=[iris_dataset.as_named_input('iris').as_mount()]
+)
+```
+
+> [!NOTE]
+> You would need to replace the values for all these arguments (that is, `"train_data"`, `"train.py"`, `cluster`, and `iris_dataset`) with your own data.
+> The above snippet just shows the form of the call and is not part of a Microsoft sample.
+
+You can also use methods such as `random_split()` and `take_sample()` to create multiple inputs or reduce the amount of data passed to your pipeline step:
+
+```python
+seed = 42 # PRNG seed
+smaller_dataset = iris_dataset.take_sample(0.1, seed=seed) # 10%
+train, test = smaller_dataset.random_split(percentage=0.8, seed=seed)
+
+train_step = PythonScriptStep(
+ name="train_data",
+ script_name="train.py",
+ compute_target=cluster,
+ inputs=[train.as_named_input('train').as_download(), test.as_named_input('test').as_download()]
+)
+```
+
+### Access datasets within your script
+
+Named inputs to your pipeline step script are available as a dictionary within the `Run` object. Retrieve the active `Run` object using `Run.get_context()` and then retrieve the dictionary of named inputs using `input_datasets`. If you passed the `DatasetConsumptionConfig` object using the `arguments` argument rather than the `inputs` argument, access the data using `ArgParser` code. Both techniques are demonstrated in the following snippet.
+
+```python
+# In pipeline definition script:
+# Code for demonstration only: It would be very confusing to split datasets between `arguments` and `inputs`
+train_step = PythonScriptStep(
+ name="train_data",
+ script_name="train.py",
+ compute_target=cluster,
+ arguments=['--training-folder', train.as_named_input('train').as_download()],
+ inputs=[test.as_named_input('test').as_download()]
+)
+
+# In pipeline script
+import argparse
+from azureml.core import Run
+
+parser = argparse.ArgumentParser()
+parser.add_argument('--training-folder', type=str, dest='train_folder', help='training data folder mounting point')
+args = parser.parse_args()
+training_data_folder = args.train_folder
+
+testing_data_folder = Run.get_context().input_datasets['test']
+```
+
+The passed value will be the path to the dataset file(s).
+
+It's also possible to access a registered `Dataset` directly. Since registered datasets are persistent and shared across a workspace, you can retrieve them directly:
+
+```python
+run = Run.get_context()
+ws = run.experiment.workspace
+ds = Dataset.get_by_name(workspace=ws, name='mnist_opendataset')
+```
+
+> [!NOTE]
+> The preceding snippets show the form of the calls and are not part of a Microsoft sample. You must replace the various arguments with values from your own project.
+
+## Use `OutputFileDatasetConfig` for intermediate data
+
+While `Dataset` objects represent only persistent data, [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig) object(s) can be used for temporary data output from pipeline steps **and** persistent output data. `OutputFileDatasetConfig` supports writing data to blob storage, fileshare, adlsgen1, or adlsgen2. It supports both mount mode and upload mode. In mount mode, files written to the mounted directory are permanently stored when the file is closed. In upload mode, files written to the output directory are uploaded at the end of the job. If the job fails or is canceled, the output directory will not be uploaded.
+
+ `OutputFileDatasetConfig` object's default behavior is to write to the default datastore of the workspace. Pass your `OutputFileDatasetConfig` objects to your `PythonScriptStep` with the `arguments` parameter.
+
+```python
+from azureml.data import OutputFileDatasetConfig
+dataprep_output = OutputFileDatasetConfig()
+input_dataset = Dataset.get_by_name(workspace, 'raw_data')
+
+dataprep_step = PythonScriptStep(
+ name="prep_data",
+ script_name="dataprep.py",
+ compute_target=cluster,
+ arguments=[input_dataset.as_named_input('raw_data').as_mount(), dataprep_output]
+ )
+```
+
+> [!NOTE]
+> Concurrent writes to a `OutputFileDatasetConfig` will fail. Do not attempt to use a single `OutputFileDatasetConfig` concurrently. Do not share a single `OutputFileDatasetConfig` in a multiprocessing situation, such as when using [distributed training](../how-to-train-distributed-gpu.md).
+
+### Use `OutputFileDatasetConfig` as outputs of a training step
+
+Within your pipeline's `PythonScriptStep`, you can retrieve the available output paths using the program's arguments. If this step is the first and will initialize the output data, you must create the directory at the specified path. You can then write whatever files you wish to be contained in the `OutputFileDatasetConfig`.
+
+```python
+import argparse
+import os
+
+parser = argparse.ArgumentParser()
+parser.add_argument('--output_path', dest='output_path', required=True)
+args = parser.parse_args()
+
+# Make directory for file
+os.makedirs(os.path.dirname(args.output_path), exist_ok=True)
+with open(args.output_path, 'w') as f:
+ f.write("Step 1's output")
+```
+
+### Read `OutputFileDatasetConfig` as inputs to non-initial steps
+
+After the initial pipeline step writes some data to the `OutputFileDatasetConfig` path and it becomes an output of that initial step, it can be used as an input to a later step.
+
+In the following code:
+
+* `step1_output_data` indicates that the output of the PythonScriptStep, `step1` is written to the ADLS Gen 2 datastore, `my_adlsgen2` in upload access mode. Learn more about how to [set up role permissions](how-to-access-data.md) in order to write data back to ADLS Gen 2 datastores.
+
+* After `step1` completes and the output is written to the destination indicated by `step1_output_data`, then step2 is ready to use `step1_output_data` as an input.
+
+```python
+# get adls gen 2 datastore already registered with the workspace
+datastore = workspace.datastores['my_adlsgen2']
+step1_output_data = OutputFileDatasetConfig(name="processed_data", destination=(datastore, "mypath/{run-id}/{output-name}")).as_upload()
+
+step1 = PythonScriptStep(
+ name="generate_data",
+ script_name="step1.py",
+ runconfig = aml_run_config,
+ arguments = ["--output_path", step1_output_data]
+)
+
+step2 = PythonScriptStep(
+ name="read_pipeline_data",
+ script_name="step2.py",
+ compute_target=compute,
+ runconfig = aml_run_config,
+ arguments = ["--pd", step1_output_data.as_input()]
+
+)
+
+pipeline = Pipeline(workspace=ws, steps=[step1, step2])
+```
+
+## Register `OutputFileDatasetConfig` objects for reuse
+
+If you'd like to make your `OutputFileDatasetConfig` available for longer than the duration of your experiment, register it to your workspace to share and reuse across experiments.
+
+```python
+step1_output_ds = step1_output_data.register_on_complete(name='processed_data',
+                                                          description='files from step1')
+```
+
+## Delete `OutputFileDatasetConfig` contents when no longer needed
+
+Azure does not automatically delete intermediate data written with `OutputFileDatasetConfig`. To avoid storage charges for large amounts of unneeded data, you should either:
+
+* Programmatically delete intermediate data at the end of a pipeline job, when it is no longer needed
+* Use blob storage with a short-term storage policy for intermediate data (see [Optimize costs by automating Azure Blob Storage access tiers](/azure/storage/blobs/lifecycle-management-overview))
+* Regularly review and delete no-longer-needed data
+
+For more information, see [Plan and manage costs for Azure Machine Learning](../concept-plan-manage-cost.md).
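+For the first option above (programmatic deletion), here is a hedged sketch of cleaning up intermediate output written to blob storage at the end of a pipeline job. The connection string, container name, and path prefix are assumptions for illustration:
+
+```python
+from azure.storage.blob import ContainerClient
+
+# Assumed values: substitute your storage connection string, the container backing
+# your datastore, and the prefix your pipeline wrote intermediate data to.
+container = ContainerClient.from_connection_string(
+    conn_str="<storage-connection-string>",
+    container_name="<datastore-container>"
+)
+
+for blob in container.list_blobs(name_starts_with="mypath/"):
+    container.delete_blob(blob.name)
+```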
+
+## Next steps
+
+* [Create an Azure machine learning dataset](how-to-create-register-datasets.md)
+* [Create and run machine learning pipelines with Azure Machine Learning SDK](how-to-create-machine-learning-pipelines.md)
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-train-with-datasets.md
+
+ Title: Train with machine learning datasets
+
+description: Learn how to make your data available to your local or remote compute for model training with Azure Machine Learning datasets.
+ Last updated : 10/21/2021
+#Customer intent: As an experienced Python developer, I need to make my data available to my local or remote compute target to train my machine learning models.
++
+# Train models with Azure Machine Learning datasets
++
+In this article, you learn how to work with [Azure Machine Learning datasets](/python/api/azureml-core/azureml.core.dataset%28class%29) to train machine learning models. You can use datasets in your local or remote compute target without worrying about connection strings or data paths.
+
+* For structured data, see [Consume datasets in machine learning training scripts](#consume-datasets-in-machine-learning-training-scripts).
+
+* For unstructured data, see [Mount files to remote compute targets](#mount-files-to-remote-compute-targets).
+
+Azure Machine Learning datasets provide a seamless integration with Azure Machine Learning training functionality like [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig), [HyperDrive](/python/api/azureml-train-core/azureml.train.hyperdrive), and [Azure Machine Learning pipelines](./how-to-create-machine-learning-pipelines.md).
+
+If you aren't ready to make your data available for model training, but want to load your data to your notebook for data exploration, see how to [explore the data in your dataset](how-to-create-register-datasets.md).
+
+## Prerequisites
+
+To create and train with datasets, you need:
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* An [Azure Machine Learning workspace](../quickstart-create-resources.md).
+
+* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install) (>= 1.13.0), which includes the `azureml-datasets` package.
++
+> [!Note]
+> Some Dataset classes have dependencies on the [azureml-dataprep](https://pypi.org/project/azureml-dataprep/) package. For Linux users, these classes are supported only on the following distributions: Red Hat Enterprise Linux, Ubuntu, Fedora, and CentOS.
+
+## Consume datasets in machine learning training scripts
+
+If you have structured data not yet registered as a dataset, create a TabularDataset and use it directly in your training script for your local or remote experiment.
+
+In this example, you create an unregistered [TabularDataset](/python/api/azureml-core/azureml.data.tabulardataset) and specify it as a script argument in the [ScriptRunConfig](/python/api/azureml-core/azureml.core.script_run_config.scriptrunconfig) object for training. If you want to reuse this TabularDataset with other experiments in your workspace, see [how to register datasets to your workspace](how-to-create-register-datasets.md).
+
+### Create a TabularDataset
+
+The following code creates an unregistered TabularDataset from a web url.
+
+```Python
+from azureml.core.dataset import Dataset
+
+web_path ='https://dprepdata.blob.core.windows.net/demo/Titanic.csv'
+titanic_ds = Dataset.Tabular.from_delimited_files(path=web_path)
+```
+
+TabularDataset objects provide the ability to load the data in your TabularDataset into a pandas or Spark DataFrame so that you can work with familiar data preparation and training libraries without having to leave your notebook.
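+For example, loading the TabularDataset created above into a pandas DataFrame for a quick look at the data:
+
+```python
+# Load the TabularDataset into an in-memory pandas DataFrame
+titanic_df = titanic_ds.to_pandas_dataframe()
+print(titanic_df.head())
+```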
+
+### Access dataset in training script
+
+The following code configures a script argument `--input-data` that you'll specify when you configure your training run (see the next section). When the tabular dataset is passed in as the argument value, Azure ML will resolve that to the ID of the dataset, which you can then use to access the dataset in your training script (without having to hardcode the name or ID of the dataset in your script). It then uses the [`to_pandas_dataframe()`](/python/api/azureml-core/azureml.data.tabulardataset#to-pandas-dataframe-on-error--nullout-of-range-datetime--null--) method to load that dataset into a pandas dataframe for further data exploration and preparation prior to training.
+
+> [!Note]
+> If your original data source contains NaN, empty strings, or blank values, those values are replaced with *Null* when you use `to_pandas_dataframe()`.
+
+If you need to load the prepared data into a new dataset from an in-memory pandas dataframe, write the data to a local file, like a Parquet file, and create a new dataset from that file (see the sketch after the following script). Learn more about [how to create datasets](how-to-create-register-datasets.md).
+
+```Python
+%%writefile $script_folder/train_titanic.py
+
+import argparse
+from azureml.core import Dataset, Run
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--input-data", type=str)
+args = parser.parse_args()
+
+run = Run.get_context()
+ws = run.experiment.workspace
+
+# get the input dataset by ID
+dataset = Dataset.get_by_id(ws, id=args.input_data)
+
+# load the TabularDataset to pandas DataFrame
+df = dataset.to_pandas_dataframe()
+```
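+
+As mentioned above, if you want to persist prepared in-memory data as a new dataset, one approach is to write a Parquet file to the default datastore and create a TabularDataset from it. The following sketch assumes `df` is your prepared pandas dataframe and that `pyarrow` is installed; the file and folder names are illustrative:
+
+```Python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# write the prepared dataframe to a local Parquet file (requires pyarrow or fastparquet)
+df.to_parquet('prepped_titanic.parquet')
+
+# upload the file to the default datastore
+datastore.upload_files(files=['prepped_titanic.parquet'],
+                       target_path='prepped/',
+                       overwrite=True)
+
+# create a new TabularDataset from the uploaded file
+prepped_ds = Dataset.Tabular.from_parquet_files(path=(datastore, 'prepped/prepped_titanic.parquet'))
+```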
+
+### Configure the training run
+
+A [ScriptRunConfig](/python/api/azureml-core/azureml.core.scriptrunconfig) object is used to configure and submit the training run.
+
+This code creates a ScriptRunConfig object, `src`, that specifies:
+
+* A script directory for your scripts. All the files in this directory are uploaded into the cluster nodes for execution.
+* The training script, *train_titanic.py*.
+* The input dataset for training, `titanic_ds`, as a script argument. Azure ML resolves this to the corresponding ID of the dataset when it's passed to your script.
+* The compute target for the run.
+* The environment for the run.
+
+```python
+from azureml.core import ScriptRunConfig
+
+src = ScriptRunConfig(source_directory=script_folder,
+ script='train_titanic.py',
+ # pass dataset as an input with friendly name 'titanic'
+ arguments=['--input-data', titanic_ds.as_named_input('titanic')],
+ compute_target=compute_target,
+ environment=myenv)
+
+# Submit the run configuration for your training run
+run = experiment.submit(src)
+run.wait_for_completion(show_output=True)
+```
+
+## Mount files to remote compute targets
+
+If you have unstructured data, create a [FileDataset](/python/api/azureml-core/azureml.data.filedataset) and either mount or download your data files to make them available to your remote compute target for training. Learn about when to use [mount vs. download](#mount-vs-download) for your remote training experiments.
+
+The following example:
+
+* Creates an input FileDataset, `mnist_ds`, for your training data.
+* Specifies where to write training results, and promotes those results as a FileDataset.
+* Mounts the input dataset to the compute target.
+
+> [!Note]
+> If you are using a custom Docker base image, you will need to install fuse via `apt-get install -y fuse` as a dependency for dataset mount to work. Learn how to [build a custom image](../how-to-deploy-custom-container.md).
+
+For the notebook example, see [How to configure a training run with data input and output](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/work-with-data/datasets-tutorial/scriptrun-with-data-input-output/how-to-use-scriptrun.ipynb).
+
+### Create a FileDataset
+
+The following example creates an unregistered FileDataset, `mnist_ds`, from web URLs. This FileDataset is the input data for your training run.
+
+Learn more about [how to create datasets](how-to-create-register-datasets.md) from other sources.
+
+```Python
+
+from azureml.core.dataset import Dataset
+
+web_paths = [
+ 'http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/t10k-images-idx3-ubyte.gz',
+ 'http://yann.lecun.com/exdb/mnist/t10k-labels-idx1-ubyte.gz'
+ ]
+
+mnist_ds = Dataset.File.from_files(path = web_paths)
+
+```
+
+### Where to write training output
+
+You can specify where to write your training results with an [OutputFileDatasetConfig object](/python/api/azureml-core/azureml.data.output_dataset_config.outputfiledatasetconfig).
+
+OutputFileDatasetConfig objects allow you to:
+
+* Mount or upload the output of a run to cloud storage you specify.
+* Save the output as a FileDataset to these supported storage types:
+ * Azure blob
+ * Azure file share
+ * Azure Data Lake Storage generations 1 and 2
+* Track the data lineage between training runs.
+
+The following code specifies that training results should be saved as a FileDataset in the `outputdataset` folder in the default blob datastore, `def_blob_store`.
+
+```python
+from azureml.core import Workspace
+from azureml.data import OutputFileDatasetConfig
+
+ws = Workspace.from_config()
+
+def_blob_store = ws.get_default_datastore()
+output = OutputFileDatasetConfig(destination=(def_blob_store, 'sample/outputdataset'))
+```
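+
+Optionally, you can promote the run output to a registered dataset when the run completes. A minimal sketch, where the dataset name `prepared_output` is only an illustration:
+
+```python
+# register the output as a dataset named 'prepared_output' once the run completes
+output = output.register_on_complete(name='prepared_output')
+```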
+
+### Configure the training run
+
+We recommend passing the dataset as an argument when mounting via the `arguments` parameter of the `ScriptRunConfig` constructor. By doing so, you get the data path (mounting point) in your training script via arguments. This way, you're able to use the same training script for local debugging and remote training on any cloud platform.
+
+The following example creates a ScriptRunConfig that passes in the FileDataset via `arguments`. After you submit the run, data files referred to by the dataset `mnist_ds` are mounted to the compute target, and training results are saved to the specified `outputdataset` folder in the default datastore.
+
+```python
+from azureml.core import ScriptRunConfig
+
+input_data = mnist_ds.as_named_input('input').as_mount()  # the dataset will be mounted on the remote compute
+
+src = ScriptRunConfig(source_directory=script_folder,
+ script='dummy_train.py',
+ arguments=[input_data, output],
+ compute_target=compute_target,
+ environment=myenv)
+
+# Submit the run configuration for your training run
+run = experiment.submit(src)
+run.wait_for_completion(show_output=True)
+```
+
+### Simple training script
+
+The following script is submitted through the ScriptRunConfig. It reads the `mnist_ds` dataset as input, and writes a file to the `outputdataset` folder in the default blob datastore, `def_blob_store`.
+
+```Python
+%%writefile $source_directory/dummy_train.py
+
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License.
+import sys
+import os
+
+print("*********************************************************")
+print("Hello Azure ML!")
+
+mounted_input_path = sys.argv[1]
+mounted_output_path = sys.argv[2]
+
+print("Argument 1: %s" % mounted_input_path)
+print("Argument 2: %s" % mounted_output_path)
+
+with open(mounted_input_path, 'r') as f:
+ content = f.read()
+ with open(os.path.join(mounted_output_path, 'output.csv'), 'w') as fw:
+ fw.write(content)
+```
+
+## Mount vs download
+
+Mounting or downloading files of any format is supported for datasets created from Azure Blob storage, Azure Files, Azure Data Lake Storage Gen1, Azure Data Lake Storage Gen2, Azure SQL Database, and Azure Database for PostgreSQL.
+
+When you **mount** a dataset, you attach the files referenced by the dataset to a directory (mount point) and make it available on the compute target. Mounting is supported for Linux-based computes, including Azure Machine Learning Compute, virtual machines, and HDInsight. If your data size exceeds the compute disk size, downloading isn't possible. For this scenario, we recommend mounting since only the data files used by your script are loaded at the time of processing.
+
+When you **download** a dataset, all the files referenced by the dataset will be downloaded to the compute target. Downloading is supported for all compute types. If your script processes all files referenced by the dataset, and your compute disk can fit your full dataset, downloading is recommended to avoid the overhead of streaming data from storage services. For multi-node downloads, see [how to avoid throttling](#troubleshooting).
+
+> [!NOTE]
+> The download path name should not be longer than 255 alpha-numeric characters for Windows OS. For Linux OS, the download path name should not be longer than 4,096 alpha-numeric characters. Also, for Linux OS the file name (which is the last segment of the download path `/path/to/file/{filename}`) should not be longer than 255 alpha-numeric characters.
+
+The following code mounts `dataset` to the temp directory at `mounted_path`.
+
+```python
+import os
+import tempfile
+
+mounted_path = tempfile.mkdtemp()
+
+# mount dataset onto the mounted_path of a Linux-based compute
+mount_context = dataset.mount(mounted_path)
+mount_context.start()
+
+print(os.listdir(mounted_path))
+print(mounted_path)
+```
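+
+If downloading better fits your scenario, a minimal sketch that downloads the files referenced by `dataset` to a local directory:
+
+```python
+import tempfile
+download_path = tempfile.mkdtemp()
+
+# download all files referenced by the dataset and print their local paths
+file_paths = dataset.download(target_path=download_path, overwrite=False)
+print(file_paths)
+```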
+
+## Get datasets in machine learning scripts
+
+Registered datasets are accessible both locally and remotely on compute clusters like the Azure Machine Learning compute. To access your registered dataset across experiments, use the following code to access your workspace and get the dataset that was used in your previously submitted run. By default, the [`get_by_name()`](/python/api/azureml-core/azureml.core.dataset.dataset#get-by-name-workspace--name--version--latest--) method on the `Dataset` class returns the latest version of the dataset that's registered with the workspace.
+
+```Python
+%%writefile $script_folder/train.py
+
+from azureml.core import Dataset, Run
+
+run = Run.get_context()
+workspace = run.experiment.workspace
+
+dataset_name = 'titanic_ds'
+
+# Get a dataset by name
+titanic_ds = Dataset.get_by_name(workspace=workspace, name=dataset_name)
+
+# Load a TabularDataset into pandas DataFrame
+df = titanic_ds.to_pandas_dataframe()
+```
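+
+If you need to pin your script to a specific version of the dataset instead of the latest, pass the `version` argument. For example:
+
+```Python
+# Get version 1 of the dataset instead of the latest
+titanic_ds_v1 = Dataset.get_by_name(workspace=workspace, name=dataset_name, version=1)
+```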
+
+## Access source code during training
+
+Azure Blob storage has higher throughput speeds than an Azure file share and will scale to large numbers of jobs started in parallel. For this reason, we recommend configuring your runs to use Blob storage for transferring source code files.
+
+The following code example specifies in the run configuration which blob datastore to use for source code transfers.
+
+```python
+# workspaceblobstore is the default blob storage
+src.run_config.source_directory_data_store = "workspaceblobstore"
+```
+
+## Notebook examples
+
+* For more dataset examples and concepts, see the [dataset notebooks](https://aka.ms/dataset-tutorial).
+* See how to [parametrize datasets in your ML pipelines](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-dataset-and-pipelineparameter.ipynb).
+
+## Troubleshooting
+
+**Dataset initialization failed: Waiting for mount point to be ready has timed out**:
+ * If you don't have any outbound [network security group](/azure/virtual-network/network-security-groups-overview) rules and are using `azureml-sdk>=1.12.0`, update `azureml-dataset-runtime` and its dependencies to be the latest for the specific minor version, or if you're using it in a run, recreate your environment so it can have the latest patch with the fix.
+ * If you're using `azureml-sdk<1.12.0`, upgrade to the latest version.
+ * If you have outbound NSG rules, make sure there's an outbound rule that allows all traffic for the service tag `AzureResourceMonitor`.
+
+**Dataset initialization failed: StreamAccessException was caused by ThrottlingException**
+
+For multi-node file downloads, all nodes may attempt to download all files in the file dataset from the Azure Storage service, which results in a throttling error. To avoid throttling, initially set the environment variable `AZUREML_DOWNLOAD_CONCURRENCY` to a value of eight times the number of CPU cores divided by the number of nodes. Finding the right value may take some experimentation, so this guidance is a starting point.
+
+The following example assumes 32 cores and 4 nodes.
+
+```python
+from azureml.core.environment import Environment
+myenv = Environment(name="myenv")
+myenv.environment_variables = {"AZUREML_DOWNLOAD_CONCURRENCY":64}
+```
+
+### AzureFile storage
+
+**Unable to upload project files to working directory in AzureFile because the storage is overloaded**:
+
+* If you're using the file share for other workloads, such as data transfer, we recommend using blobs so that the file share remains free for submitting runs.
+
+* Another option is to split the workload between two different workspaces.
+
+**ConfigException: Could not create a connection to the AzureFileService due to missing credentials. Either an Account Key or SAS token needs to be linked the default workspace blob store.**
+
+To ensure your storage access credentials are linked to the workspace and the associated file datastore, complete the following steps:
+
+1. Navigate to your workspace in the [Azure portal](https://portal.azure.com).
+1. Select the storage link on the workspace **Overview** page.
+1. On the storage page, select **Access keys** on the left side menu.
+1. Copy the key.
+1. Navigate to the [Azure Machine Learning studio](https://ml.azure.com) for your workspace.
+1. In the studio, select the file datastore for which you want to provide authentication credentials.
+1. Select **Update authentication**.
+1. Paste the key from the previous steps.
+1. Select **Save**.
+
+### Passing data as input
+
+**TypeError: FileNotFound: No such file or directory**: This error occurs if the file path you provide isn't where the file is located. Make sure the way you refer to the file is consistent with where you mounted your dataset on your compute target. To ensure a deterministic state, we recommend using the absolute path when mounting a dataset to a compute target. For example, in the following code we mount the dataset under the root of the filesystem of the compute target, `/tmp`.
+
+```python
+# Note the leading / in '/tmp/dataset'
+script_params = {
+ '--data-folder': dset.as_named_input('dogscats_train').as_mount('/tmp/dataset'),
+}
+```
+
+If you don't include the leading forward slash, '/', you'll need to prefix the working directory, for example `/mnt/batch/.../tmp/dataset`, on the compute target to indicate where you want the dataset to be mounted.
++
+## Next steps
+
+* [Auto train machine learning models](../how-to-configure-auto-train.md#data-source-and-format) with TabularDatasets.
+
+* [Train image classification models](https://aka.ms/filedataset-samplenotebook) with FileDatasets.
+
+* [Train with datasets using pipelines](./how-to-create-machine-learning-pipelines.md).
machine-learning How To Trigger Published Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-trigger-published-pipeline.md
+
+ Title: Trigger Azure Machine Learning pipelines
+
+description: Triggered pipelines allow you to automate routine, time-consuming tasks such as data processing, training, and monitoring.
+ Last updated : 08/12/2022
+#Customer intent: As a Python coding data scientist, I want to improve my operational efficiency by scheduling my training pipeline of my model using the latest data.
++
+# Trigger machine learning pipelines
++
+In this article, you'll learn how to programmatically schedule a pipeline to run on Azure. You can create a schedule based on elapsed time or on file-system changes. Time-based schedules can be used to take care of routine tasks, such as monitoring for data drift. Change-based schedules can be used to react to irregular or unpredictable changes, such as new data being uploaded or old data being edited. After learning how to create schedules, you'll learn how to retrieve and deactivate them. Finally, you'll learn how to use other Azure services, Azure Logic App and Azure Data Factory, to run pipelines. An Azure Logic App allows for more complex triggering logic or behavior. Azure Data Factory pipelines allow you to call a machine learning pipeline as part of a larger data orchestration pipeline.
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
+
+* A Python environment in which the Azure Machine Learning SDK for Python is installed. For more information, see [Create and manage reusable environments for training and deployment with Azure Machine Learning.](how-to-use-environments.md)
+
+* A Machine Learning workspace with a published pipeline. You can use the one built in [Create and run machine learning pipelines with Azure Machine Learning SDK](./how-to-create-machine-learning-pipelines.md).
+
+## Trigger pipelines with Azure Machine Learning SDK for Python
+
+To schedule a pipeline, you'll need a reference to your workspace, the identifier of your published pipeline, and the name of the experiment in which you wish to create the schedule. You can get these values with the following code:
+
+```Python
+import azureml.core
+from azureml.core import Workspace
+from azureml.pipeline.core import Pipeline, PublishedPipeline
+from azureml.core.experiment import Experiment
+
+ws = Workspace.from_config()
+
+experiments = Experiment.list(ws)
+for experiment in experiments:
+ print(experiment.name)
+
+published_pipelines = PublishedPipeline.list(ws)
+for published_pipeline in published_pipelines:
+ print(f"{published_pipeline.name},'{published_pipeline.id}'")
+
+experiment_name = "MyExperiment"
+pipeline_id = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"
+```
+
+## Create a schedule
+
+To run a pipeline on a recurring basis, you'll create a schedule. A `Schedule` associates a pipeline, an experiment, and a trigger. The trigger can either be a `ScheduleRecurrence` that describes the wait between jobs or a Datastore path that specifies a directory to watch for changes. In either case, you'll need the pipeline identifier and the name of the experiment in which to create the schedule.
+
+At the top of your Python file, import the `Schedule` and `ScheduleRecurrence` classes:
+
+```python
+
+from azureml.pipeline.core.schedule import ScheduleRecurrence, Schedule
+```
+
+### Create a time-based schedule
+
+The `ScheduleRecurrence` constructor has a required `frequency` argument that must be one of the following strings: "Minute", "Hour", "Day", "Week", or "Month". It also requires an integer `interval` argument specifying how many of the `frequency` units should elapse between schedule starts. Optional arguments allow you to be more specific about starting times, as detailed in the [ScheduleRecurrence SDK docs](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedulerecurrence).
+
+Create a `Schedule` that begins a job every 15 minutes:
+
+```python
+recurrence = ScheduleRecurrence(frequency="Minute", interval=15)
+recurring_schedule = Schedule.create(ws, name="MyRecurringSchedule",
+ description="Based on time",
+ pipeline_id=pipeline_id,
+ experiment_name=experiment_name,
+ recurrence=recurrence)
+```
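+
+For example, a sketch that uses the optional arguments to run a job every Monday at 08:00 (the schedule name and description are only illustrations):
+
+```python
+weekly_recurrence = ScheduleRecurrence(frequency="Week", interval=1,
+                                       week_days=["Monday"], time_of_day="8:00")
+weekly_schedule = Schedule.create(ws, name="MyWeeklySchedule",
+                                  description="Weekly retraining",
+                                  pipeline_id=pipeline_id,
+                                  experiment_name=experiment_name,
+                                  recurrence=weekly_recurrence)
+```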
+
+### Create a change-based schedule
+
+Pipelines that are triggered by file changes may be more efficient than time-based schedules. For example, you might want to preprocess a file when it's changed, or when a new file is added to a data directory. You can monitor any changes to a datastore or changes within a specific directory within the datastore. If you monitor a specific directory, changes within subdirectories of that directory will _not_ trigger a job.
+
+> [!NOTE]
+> Change-based schedules only support monitoring Azure Blob storage.
+
+To create a file-reactive `Schedule`, you must set the `datastore` parameter in the call to [Schedule.create](/python/api/azureml-pipeline-core/azureml.pipeline.core.schedule.schedule#create-workspace--name--pipeline-id--experiment-name--recurrence-none--description-none--pipeline-parameters-none--wait-for-provisioning-false--wait-timeout-3600--datastore-none--polling-interval-5--data-path-parameter-name-none--continue-on-step-failure-none--path-on-datastore-none--workflow-provider-none--service-endpoint-none-). To monitor a folder, set the `path_on_datastore` argument.
+
+The `polling_interval` argument allows you to specify, in minutes, the frequency at which the datastore is checked for changes.
+
+If the pipeline was constructed with a [DataPath](/python/api/azureml-core/azureml.data.datapath.datapath) [PipelineParameter](/python/api/azureml-pipeline-core/azureml.pipeline.core.pipelineparameter), you can set that variable to the name of the changed file by setting the `data_path_parameter_name` argument.
+
+```python
+from azureml.core import Datastore
+
+datastore = Datastore(workspace=ws, name="workspaceblobstore")
+
+reactive_schedule = Schedule.create(ws, name="MyReactiveSchedule", description="Based on input file change.",
+ pipeline_id=pipeline_id, experiment_name=experiment_name, datastore=datastore, data_path_parameter_name="input_data")
+```
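+
+To watch only a specific folder and check for changes less often, set `path_on_datastore` and `polling_interval`. A sketch in which the folder path is only an illustration:
+
+```python
+reactive_schedule = Schedule.create(ws, name="MyFolderReactiveSchedule",
+                                    description="Based on changes in one folder, polled every 30 minutes.",
+                                    pipeline_id=pipeline_id,
+                                    experiment_name=experiment_name,
+                                    datastore=datastore,
+                                    path_on_datastore="sample/input-data",
+                                    polling_interval=30,
+                                    data_path_parameter_name="input_data")
+```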
+
+### Optional arguments when creating a schedule
+
+In addition to the arguments discussed previously, you may set the `status` argument to `"Disabled"` to create an inactive schedule. Finally, the `continue_on_step_failure` argument allows you to pass a Boolean that overrides the pipeline's default failure behavior.
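+
+A sketch that combines the optional arguments described above with the time-based recurrence created earlier:
+
+```python
+disabled_schedule = Schedule.create(ws, name="MyDisabledSchedule",
+                                    description="Created inactive; enable it later.",
+                                    pipeline_id=pipeline_id,
+                                    experiment_name=experiment_name,
+                                    recurrence=recurrence,
+                                    status="Disabled",
+                                    continue_on_step_failure=True)
+```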
+
+## View your scheduled pipelines
+
+In your Web browser, navigate to Azure Machine Learning. From the **Endpoints** section of the navigation panel, choose **Pipeline endpoints**. This takes you to a list of the pipelines published in the Workspace.
++
+On this page, you can see summary information about all the pipelines in the Workspace: names, descriptions, status, and so forth. Drill in by selecting your pipeline. On the resulting page, there are more details about your pipeline, and you can drill down into individual jobs.
+
+## Deactivate the pipeline
+
+If you have a `Pipeline` that is published, but not scheduled, you can disable it with:
+
+```python
+pipeline = PublishedPipeline.get(ws, id=pipeline_id)
+pipeline.disable()
+```
+
+If the pipeline is scheduled, you must cancel the schedule first. Retrieve the schedule's identifier from the portal or by running:
+
+```python
+ss = Schedule.list(ws)
+for s in ss:
+ print(s)
+```
+
+Once you have the `schedule_id` you wish to disable, run:
+
+```python
+def stop_by_schedule_id(ws, schedule_id):
+ s = next(s for s in Schedule.list(ws) if s.id == schedule_id)
+ s.disable()
+ return s
+
+stop_by_schedule_id(ws, schedule_id)
+```
+
+If you then run `Schedule.list(ws)` again, you should get an empty list.
+
+## Use Azure Logic Apps for complex triggers
+
+More complex trigger rules or behavior can be created using an [Azure Logic App](/azure/logic-apps/logic-apps-overview).
+
+To use an Azure Logic App to trigger a Machine Learning pipeline, you'll need the REST endpoint for a published Machine Learning pipeline. [Create and publish your pipeline](./how-to-create-machine-learning-pipelines.md). Then find the REST endpoint of your `PublishedPipeline` by using the pipeline ID:
+
+```python
+# You can find the pipeline ID in Azure Machine Learning studio
+
+published_pipeline = PublishedPipeline.get(ws, id="<pipeline-id-here>")
+published_pipeline.endpoint
+```
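+
+If you want to verify the endpoint before wiring it into a Logic App, you can submit a run by POSTing to it directly. A sketch that uses interactive authentication; the experiment name is only an illustration:
+
+```python
+import requests
+from azureml.core.authentication import InteractiveLoginAuthentication
+
+auth = InteractiveLoginAuthentication()
+aad_token = auth.get_authentication_header()
+
+response = requests.post(published_pipeline.endpoint,
+                         headers=aad_token,
+                         json={"ExperimentName": "MyRestPipeline"})
+# the response typically includes the ID of the submitted pipeline run
+print(response.json())
+```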
+
+## Create a Logic App
+
+Now create an [Azure Logic App](/azure/logic-apps/logic-apps-overview) instance. If you wish, [use an integration service environment (ISE)](/azure/logic-apps/connect-virtual-network-vnet-isolated-environment) and [set up a customer-managed key](/azure/logic-apps/customer-managed-keys-integration-service-environment) for use by your Logic App.
+
+Once your Logic App has been provisioned, use these steps to configure a trigger for your pipeline:
+
+1. [Create a system-assigned managed identity](/azure/logic-apps/create-managed-service-identity) to give the app access to your Azure Machine Learning Workspace.
+
+1. Navigate to the Logic App Designer view and select the Blank Logic App template.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="media/how-to-trigger-published-pipeline/blank-template.png" alt-text="Blank template":::
+
+1. In the Designer, search for **blob**. Select the **When a blob is added or modified (properties only)** trigger and add this trigger to your Logic App.
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="media/how-to-trigger-published-pipeline/add-trigger.png" alt-text="Add trigger":::
+
+1. Fill in the connection info for the Blob storage account you wish to monitor for blob additions or modifications. Select the Container to monitor.
+
+ Choose the **Interval** and **Frequency** to poll for updates that work for you.
+
+ > [!NOTE]
+ > This trigger will monitor the selected Container but won't monitor subfolders.
+
+1. Add an HTTP action that will run when a new or modified blob is detected. Select **+ New Step**, then search for and select the HTTP action.
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="media/how-to-trigger-published-pipeline/search-http.png" alt-text="Search for HTTP action":::
+
+ Use the following settings to configure your action:
+
+ | Setting | Value |
+ |||
+ | HTTP action | POST |
+ | URI |the endpoint to the published pipeline that you found as a [Prerequisite](#prerequisites) |
+ | Authentication mode | Managed Identity |
+
+1. Set up your schedule to set the value of any [DataPath PipelineParameters](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/intro-to-pipelines/aml-pipelines-showcasing-datapath-and-pipelineparameter.ipynb) you may have:
+
+ ```json
+ {
+ "DataPathAssignments": {
+ "input_datapath": {
+ "DataStoreName": "<datastore-name>",
+ "RelativePath": "@{triggerBody()?['Name']}"
+ }
+ },
+ "ExperimentName": "MyRestPipeline",
+ "ParameterAssignments": {
+ "input_string": "sample_string3"
+ },
+ "RunSource": "SDK"
+ }
+ ```
+
+ Use the `DataStoreName` you added to your workspace as a [Prerequisite](#prerequisites).
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="media/how-to-trigger-published-pipeline/http-settings.png" alt-text="HTTP settings":::
+
+1. Select **Save** and your schedule is now ready.
+
+> [!IMPORTANT]
+> If you are using Azure role-based access control (Azure RBAC) to manage access to your pipeline, [set the permissions for your pipeline scenario (training or scoring)](../how-to-assign-roles.md#common-scenarios).
+
+## Call machine learning pipelines from Azure Data Factory pipelines
+
+In an Azure Data Factory pipeline, the *Machine Learning Execute Pipeline* activity runs an Azure Machine Learning pipeline. You can find this activity in the Data Factory's authoring page under the *Machine Learning* category:
++
+## Next steps
+
+In this article, you used the Azure Machine Learning SDK for Python to schedule a pipeline in two different ways. One schedule recurs based on elapsed clock time. The other schedules jobs if a file is modified on a specified `Datastore` or within a directory on that store. You saw how to use the portal to examine the pipeline and individual jobs. You learned how to disable a schedule so that the pipeline stops running. Finally, you created an Azure Logic App to trigger a pipeline.
+
+For more information, see:
+
+> [!div class="nextstepaction"]
+> [Use Azure Machine Learning Pipelines for batch scoring](../tutorial-pipeline-batch-scoring-classification.md)
+
+* Learn more about [pipelines](../concept-ml-pipelines.md)
+* Learn more about [exploring Azure Machine Learning with Jupyter](../samples-notebooks.md)
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-automlstep-in-pipelines.md
+
+ Title: Use automated ML in ML pipelines
+
+description: The AutoMLStep allows you to use automated machine learning in your pipelines.
+ Last updated : 10/21/2021
+# Use automated ML in an Azure Machine Learning pipeline in Python
++
+Azure Machine Learning's automated ML capability helps you discover high-performing models without reimplementing every possible approach. Combined with Azure Machine Learning pipelines, you can create deployable workflows that quickly discover the algorithm that works best for your data. This article will show you how to efficiently join a data preparation step to an automated ML step, putting you on the road to MLOps and model lifecycle operationalization with pipelines.
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* An Azure Machine Learning workspace. See [Create workspace resources](../quickstart-create-resources.md).
+
+* Familiarity with Azure's [automated machine learning](../concept-automated-ml.md) and [machine learning pipelines](../concept-ml-pipelines.md) facilities and SDK.
+
+## Review automated ML's central classes
+
+Automated ML in a pipeline is represented by an `AutoMLStep` object. The `AutoMLStep` class is a subclass of `PipelineStep`. A graph of `PipelineStep` objects defines a `Pipeline`.
+
+There are several subclasses of `PipelineStep`. In addition to the `AutoMLStep`, this article will show a `PythonScriptStep` for data preparation and another for registering the model.
+
+The preferred way to initially move data _into_ an ML pipeline is with `Dataset` objects. To move data _between_ steps and possibly save data output from runs, the preferred way is with [`OutputFileDatasetConfig`](/python/api/azureml-core/azureml.data.outputfiledatasetconfig) and [`OutputTabularDatasetConfig`](/python/api/azureml-core/azureml.data.output_dataset_config.outputtabulardatasetconfig) objects. To be used with `AutoMLStep`, the `PipelineData` object must be transformed into a `PipelineOutputTabularDataset` object. For more information, see [Input and output data from ML pipelines](how-to-move-data-in-out-of-pipelines.md).
+
+The `AutoMLStep` is configured via an `AutoMLConfig` object. `AutoMLConfig` is a flexible class, as discussed in [Configure automated ML experiments in Python](../how-to-configure-auto-train.md#configure-your-experiment-settings).
+
+A `Pipeline` runs in an `Experiment`. The pipeline `Run` has, for each step, a child `StepRun`. The outputs of the automated ML `StepRun` are the training metrics and highest-performing model.
+
+To make things concrete, this article creates a simple pipeline for a classification task. The task is predicting Titanic survival, but we won't be discussing the data or task except in passing.
+
+## Get started
+
+### Retrieve initial dataset
+
+Often, an ML workflow starts with pre-existing baseline data. This is a good scenario for a registered dataset. Datasets are visible across the workspace, support versioning, and can be interactively explored. There are many ways to create and populate a dataset, as discussed in [Create Azure Machine Learning datasets](how-to-create-register-datasets.md). Since we'll be using the Python SDK to create our pipeline, use the SDK to download baseline data and register it with the name 'titanic_ds'.
+
+```python
+from azureml.core import Workspace, Dataset
+
+ws = Workspace.from_config()
+if not 'titanic_ds' in ws.datasets.keys() :
+ # create a TabularDataset from Titanic training data
+ web_paths = ['https://dprepdata.blob.core.windows.net/demo/Titanic.csv',
+ 'https://dprepdata.blob.core.windows.net/demo/Titanic2.csv']
+ titanic_ds = Dataset.Tabular.from_delimited_files(path=web_paths)
+
+ titanic_ds.register(workspace = ws,
+ name = 'titanic_ds',
+ description = 'Titanic baseline data',
+ create_new_version = True)
+
+titanic_ds = Dataset.get_by_name(ws, 'titanic_ds')
+```
+
+The code first logs in to the Azure Machine Learning workspace defined in **config.json** (for an explanation, see [Create a workspace configuration file](../how-to-configure-environment.md#workspace)). If there isn't already a dataset named `'titanic_ds'` registered, then it creates one. The code downloads CSV data from the web, uses it to instantiate a `TabularDataset`, and then registers the dataset with the workspace. Finally, the function `Dataset.get_by_name()` assigns the `Dataset` to `titanic_ds`.
+
+### Configure your storage and compute target
+
+Additional resources that the pipeline will need are storage and, generally, Azure Machine Learning compute resources.
+
+```python
+from azureml.core import Datastore
+from azureml.core.compute import AmlCompute, ComputeTarget
+
+datastore = ws.get_default_datastore()
+
+compute_name = 'cpu-cluster'
+if not compute_name in ws.compute_targets :
+ print('creating a new compute target...')
+ provisioning_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
+ min_nodes=0,
+ max_nodes=1)
+ compute_target = ComputeTarget.create(ws, compute_name, provisioning_config)
+
+ compute_target.wait_for_completion(
+ show_output=True, min_node_count=None, timeout_in_minutes=20)
+
+ # Show the result
+ print(compute_target.get_status().serialize())
+
+compute_target = ws.compute_targets[compute_name]
+```
++
+The intermediate data between the data preparation and the automated ML step can be stored in the workspace's default datastore, so we don't need to do more than call `get_default_datastore()` on the `Workspace` object.
+
+After that, the code checks if the AzureML compute target `'cpu-cluster'` already exists. If not, we specify that we want a small CPU-based compute target. If you plan to use automated ML's deep learning features (for instance, text featurization with DNN support) you should choose a compute with strong GPU support, as described in [GPU optimized virtual machine sizes](/azure/virtual-machines/sizes-gpu).
+
+The code blocks until the target is provisioned and then prints some details of the just-created compute target. Finally, the named compute target is retrieved from the workspace and assigned to `compute_target`.
+
+### Configure the training run
+
+The runtime context is set by creating and configuring a `RunConfiguration` object. Here we set the compute target.
+
+```python
+from azureml.core.runconfig import RunConfiguration
+from azureml.core.conda_dependencies import CondaDependencies
+
+aml_run_config = RunConfiguration()
+# Use just-specified compute target ("cpu-cluster")
+aml_run_config.target = compute_target
+
+# Specify CondaDependencies obj, add necessary packages
+aml_run_config.environment.python.conda_dependencies = CondaDependencies.create(
+ conda_packages=['pandas','scikit-learn'],
+ pip_packages=['azureml-sdk[automl]', 'pyarrow'])
+```
+
+## Prepare data for automated machine learning
+
+### Write the data preparation code
+
+The baseline Titanic dataset consists of mixed numerical and text data, with some values missing. To prepare it for automated machine learning, the data preparation pipeline step will:
+
+- Fill missing data with either random data or a category corresponding to "Unknown"
+- Transform categorical data to integers
+- Drop columns that we don't intend to use
+- Split the data into training and testing sets
+- Write the transformed data to the `OutputFileDatasetConfig` output paths
+
+```python
+%%writefile dataprep.py
+from azureml.core import Run
+
+import pandas as pd
+import numpy as np
+import argparse
+import os
+
+RANDOM_SEED=42
+
+def prepare_age(df):
+ # Fill in missing Age values from distribution of present Age values
+ mean = df["Age"].mean()
+ std = df["Age"].std()
+ is_null = df["Age"].isnull().sum()
+ # compute enough (== is_null().sum()) random numbers between the mean, std
+ rand_age = np.random.randint(mean - std, mean + std, size = is_null)
+ # fill NaN values in Age column with random values generated
+ age_slice = df["Age"].copy()
+ age_slice[np.isnan(age_slice)] = rand_age
+ df["Age"] = age_slice
+ df["Age"] = df["Age"].astype(int)
+
+ # Quantize age into 5 classes
+ df['Age_Group'] = pd.qcut(df['Age'],5, labels=False)
+ df.drop(['Age'], axis=1, inplace=True)
+ return df
+
+def prepare_fare(df):
+ df['Fare'].fillna(0, inplace=True)
+ df['Fare_Group'] = pd.qcut(df['Fare'],5,labels=False)
+ df.drop(['Fare'], axis=1, inplace=True)
+ return df
+
+def prepare_genders(df):
+ genders = {"male": 0, "female": 1, "unknown": 2}
+ df['Sex'] = df['Sex'].map(genders)
+ df['Sex'].fillna(2, inplace=True)
+ df['Sex'] = df['Sex'].astype(int)
+ return df
+
+def prepare_embarked(df):
+ df['Embarked'].replace('', 'U', inplace=True)
+ df['Embarked'].fillna('U', inplace=True)
+ ports = {"S": 0, "C": 1, "Q": 2, "U": 3}
+ df['Embarked'] = df['Embarked'].map(ports)
+ return df
+
+parser = argparse.ArgumentParser()
+parser.add_argument('--output_path', dest='output_path', required=True)
+args = parser.parse_args()
+
+titanic_ds = Run.get_context().input_datasets['titanic_ds']
+df = titanic_ds.to_pandas_dataframe().drop(['PassengerId', 'Name', 'Ticket', 'Cabin'], axis=1)
+df = prepare_embarked(prepare_genders(prepare_fare(prepare_age(df))))
+
+df.to_csv(os.path.join(args.output_path,"prepped_data.csv"))
+
+print(f"Wrote prepped data to {args.output_path}/prepped_data.csv")
+```
+
+The above code snippet is a complete, but minimal, example of data preparation for the Titanic data. The snippet starts with a Jupyter "magic command" to output the code to a file. If you aren't using a Jupyter notebook, remove that line and create the file manually.
+
+The various `prepare_` functions in the above snippet modify the relevant column in the input dataset. These functions work on the data once it has been changed into a Pandas `DataFrame` object. In each case, missing data is either filled with representative random data or categorical data indicating "Unknown." Text-based categorical data is mapped to integers. No-longer-needed columns are overwritten or dropped.
+
+After the code defines the data preparation functions, the code parses the input argument, which is the path to which we want to write our data. (These values will be determined by `OutputFileDatasetConfig` objects that will be discussed in the next step.) The code retrieves the registered `'titanic_ds'` `Dataset`, converts it to a Pandas `DataFrame`, and calls the various data preparation functions.
+
+Since the `output_path` is a directory, the call to `to_csv()` specifies the filename `prepped_data.csv`.
+
+### Write the data preparation pipeline step (`PythonScriptStep`)
+
+The data preparation code described above must be associated with a `PythonScriptStep` object to be used with a pipeline. The path to which the CSV output is written is generated by an `OutputFileDatasetConfig` object. The resources prepared earlier, such as the `ComputeTarget`, the `RunConfig`, and the `'titanic_ds' Dataset` are used to complete the specification.
+
+```python
+from azureml.data import OutputFileDatasetConfig
+from azureml.pipeline.steps import PythonScriptStep
+
+prepped_data_path = OutputFileDatasetConfig(name="output_path")
+
+dataprep_step = PythonScriptStep(
+ name="dataprep",
+ script_name="dataprep.py",
+ compute_target=compute_target,
+ runconfig=aml_run_config,
+ arguments=["--output_path", prepped_data_path],
+ inputs=[titanic_ds.as_named_input('titanic_ds')],
+ allow_reuse=True
+)
+```
+
+The `prepped_data_path` object is of type `OutputFileDatasetConfig` which points to a directory. Notice that it's specified in the `arguments` parameter. If you review the previous step, you'll see that within the data preparation code, the value of the argument `'--output_path'` is the directory path at which the CSV file was written.
+
+## Train with AutoMLStep
+
+Configuring an automated ML pipeline step is done with the `AutoMLConfig` class. This flexible class is described in [Configure automated ML experiments in Python](../how-to-configure-auto-train.md). Data input and output are the only aspects of configuration that require special attention in an ML pipeline. Input and output for `AutoMLConfig` in pipelines is discussed in detail below. Beyond data, an advantage of ML pipelines is the ability to use different compute targets for different steps. You might choose to use a more powerful `ComputeTarget` only for the automated ML process. Doing so is as straightforward as assigning a more powerful `RunConfiguration` to the `AutoMLConfig` object's `run_configuration` parameter.
+
+### Send data to `AutoMLStep`
+
+In an ML pipeline, the input data must be a `Dataset` object. The highest-performing way is to provide the input data in the form of `OutputTabularDatasetConfig` objects. You create an object of that type by calling `read_delimited_files()` on an `OutputFileDatasetConfig`, such as the `prepped_data_path` object.
+
+```python
+# type(prepped_data) == OutputTabularDatasetConfig
+prepped_data = prepped_data_path.read_delimited_files()
+```
+
+Another option is to use `Dataset` objects registered in the workspace:
+
+```python
+prepped_data = Dataset.get_by_name(ws, 'Data_prepared')
+```
+
+Comparing the two techniques:
+
+| Technique | Benefits and drawbacks |
+|-|-|
+|`OutputTabularDatasetConfig`| Higher performance |
+|| Natural route from `OutputFileDatasetConfig` |
+|| Data isn't persisted after pipeline run |
+|| |
+| Registered `Dataset` | Lower performance |
+| | Can be generated in many ways |
+| | Data persists and is visible throughout workspace |
+| | [Notebook showing registered `Dataset` technique](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/continuous-retraining/auto-ml-continuous-retraining.ipynb)
++
+### Specify automated ML outputs
+
+The outputs of the `AutoMLStep` are the final metric scores of the highest-performing model and that model itself. To use these outputs in further pipeline steps, prepare `PipelineData` objects to receive them.
+
+```python
+from azureml.pipeline.core import TrainingOutput, PipelineData
+
+metrics_data = PipelineData(name='metrics_data',
+ datastore=datastore,
+ pipeline_output_name='metrics_output',
+ training_output=TrainingOutput(type='Metrics'))
+
+model_data = PipelineData(name='best_model_data',
+ datastore=datastore,
+ pipeline_output_name='model_output',
+ training_output=TrainingOutput(type='Model'))
+```
+
+The snippet above creates the two `PipelineData` objects for the metrics and model output. Each is named, assigned to the default datastore retrieved earlier, and associated with the particular `type` of `TrainingOutput` from the `AutoMLStep`. Because we assign `pipeline_output_name` on these `PipelineData` objects, their values will be available not just from the individual pipeline step, but from the pipeline as a whole, as will be discussed below in the section "Examine pipeline results."
+
+### Configure and create the automated ML pipeline step
+
+Once the inputs and outputs are defined, it's time to create the `AutoMLConfig` and `AutoMLStep`. The details of the configuration will depend on your task, as described in [Configure automated ML experiments in Python](../how-to-configure-auto-train.md). For the Titanic survival classification task, the following snippet demonstrates a simple configuration.
+
+```python
+from azureml.train.automl import AutoMLConfig
+from azureml.pipeline.steps import AutoMLStep
+
+# Change iterations to a reasonable number (50) to get better accuracy
+automl_settings = {
+ "iteration_timeout_minutes" : 10,
+ "iterations" : 2,
+ "experiment_timeout_hours" : 0.25,
+ "primary_metric" : 'AUC_weighted'
+}
+
+automl_config = AutoMLConfig(task = 'classification',
+ path = '.',
+ debug_log = 'automated_ml_errors.log',
+ compute_target = compute_target,
+ run_configuration = aml_run_config,
+ featurization = 'auto',
+ training_data = prepped_data,
+ label_column_name = 'Survived',
+ **automl_settings)
+
+train_step = AutoMLStep(name='AutoML_Classification',
+ automl_config=automl_config,
+ passthru_automl_config=False,
+ outputs=[metrics_data,model_data],
+ enable_default_model_output=False,
+ enable_default_metrics_output=False,
+ allow_reuse=True)
+```
+The snippet shows an idiom commonly used with `AutoMLConfig`. Arguments that are more fluid (hyperparameter-ish) are specified in a separate dictionary while the values less likely to change are specified directly in the `AutoMLConfig` constructor. In this case, the `automl_settings` specify a brief run: the run will stop after only 2 iterations or 15 minutes, whichever comes first.
+
+The `automl_settings` dictionary is passed to the `AutoMLConfig` constructor as kwargs. The other parameters aren't complex:
+
+- `task` is set to `classification` for this example. Other valid values are `regression` and `forecasting`
+- `path` and `debug_log` describe the path to the project and a local file to which debug information will be written
+- `compute_target` is the previously defined `compute_target` that, in this example, is an inexpensive CPU-based machine. If you're using AutoML's Deep Learning facilities, you would want to change the compute target to be GPU-based
+- `featurization` is set to `auto`. More details can be found in the [Data Featurization](../how-to-configure-auto-train.md#data-featurization) section of the automated ML configuration document
+- `label_column_name` indicates which column we are interested in predicting
+- `training_data` is set to the `OutputTabularDatasetConfig` objects made from the outputs of the data preparation step
+
+The `AutoMLStep` itself takes the `AutoMLConfig` and has, as outputs, the `PipelineData` objects created to hold the metrics and model data.
+
+>[!Important]
+> You must set `enable_default_model_output` and `enable_default_metrics_output` to `True` only if you are using `AutoMLStepRun`.
+
+In this example, the automated ML process will perform cross-validations on the `training_data`. You can control the number of cross-validations with the `n_cross_validations` argument. If you've already split your training data as part of your data preparation steps, you can set `validation_data` to its own `Dataset`.
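+
+For example, a sketch of the configuration above with an explicit cross-validation count (all other arguments unchanged):
+
+```python
+automl_config = AutoMLConfig(task = 'classification',
+                             compute_target = compute_target,
+                             run_configuration = aml_run_config,
+                             featurization = 'auto',
+                             training_data = prepped_data,
+                             label_column_name = 'Survived',
+                             n_cross_validations = 5,
+                             **automl_settings)
+```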
+
+You might occasionally see the use of `X` for data features and `y` for data labels. This technique is deprecated, and you should use `training_data` for input.
+
+## Register the model generated by automated ML
+
+The last step in a simple ML pipeline is registering the created model. By adding the model to the workspace's model registry, it will be available in the portal and can be versioned. To register the model, write another `PythonScriptStep` that takes the `model_data` output of the `AutoMLStep`.
+
+### Write the code to register the model
+
+A model is registered in a `Workspace`. You're probably familiar with using `Workspace.from_config()` to log on to your workspace on your local machine, but there's another way to get the workspace from within a running ML pipeline. The `Run.get_context()` retrieves the active `Run`. This `run` object provides access to many important objects, including the `Workspace` used here.
+
+```python
+%%writefile register_model.py
+from azureml.core.model import Model
+from azureml.core.run import Run, _OfflineRun
+from azureml.core import Workspace
+import argparse
+
+parser = argparse.ArgumentParser()
+parser.add_argument("--model_name", required=True)
+parser.add_argument("--model_path", required=True)
+args = parser.parse_args()
+
+print(f"model_name : {args.model_name}")
+print(f"model_path: {args.model_path}")
+
+run = Run.get_context()
+ws = Workspace.from_config() if type(run) == _OfflineRun else run.experiment.workspace
+
+model = Model.register(workspace=ws,
+ model_path=args.model_path,
+ model_name=args.model_name)
+
+print("Registered version {0} of model {1}".format(model.version, model.name))
+```
+
+### Write the PythonScriptStep code
+
+The model-registering `PythonScriptStep` uses a `PipelineParameter` for one of its arguments. Pipeline parameters are arguments to pipelines that can be easily set at run-submission time. Once declared, they're passed as normal arguments.
+
+```python
+
+from azureml.pipeline.core.graph import PipelineParameter
+
+# The model name with which to register the trained model in the workspace.
+model_name = PipelineParameter("model_name", default_value="TitanicSurvivalInitial")
+
+register_step = PythonScriptStep(script_name="register_model.py",
+ name="register_model",
+ allow_reuse=False,
+ arguments=["--model_name", model_name, "--model_path", model_data],
+ inputs=[model_data],
+ compute_target=compute_target,
+ runconfig=aml_run_config)
+```
+
+## Create and run your automated ML pipeline
+
+Creating and running a pipeline that contains an `AutoMLStep` is no different than a normal pipeline.
+
+```python
+from azureml.pipeline.core import Pipeline
+from azureml.core import Experiment
+
+pipeline = Pipeline(ws, [dataprep_step, train_step, register_step])
+
+experiment = Experiment(workspace=ws, name='titanic_automl')
+
+run = experiment.submit(pipeline, show_output=True)
+run.wait_for_completion()
+```
+
+The code above combines the data preparation, automated ML, and model-registering steps into a `Pipeline` object. It then creates an `Experiment` object. The `Experiment` constructor will retrieve the named experiment if it exists or create it if necessary. It submits the `Pipeline` to the `Experiment`, creating a `Run` object that will asynchronously run the pipeline. The `wait_for_completion()` function blocks until the run completes.
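+
+Because the register step takes a `PipelineParameter`, you can override the model name at submission time without changing any code. A sketch in which the alternate name is only an illustration:
+
+```python
+run = experiment.submit(pipeline,
+                        pipeline_parameters={"model_name": "TitanicSurvivalRetrained"})
+run.wait_for_completion()
+```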
+
+### Examine pipeline results
+
+Once the `run` completes, you can retrieve `PipelineData` objects that have been assigned a `pipeline_output_name`. You can download the results and load them for further processing.
+
+```python
+metrics_output_port = run.get_pipeline_output('metrics_output')
+model_output_port = run.get_pipeline_output('model_output')
+
+metrics_output_port.download('.', show_progress=True)
+model_output_port.download('.', show_progress=True)
+```
+
+Downloaded files are written to the subdirectory `azureml/{run.id}/`. The metrics file is JSON-formatted and can be converted into a Pandas dataframe for examination.
+
+For local processing, you may need to install relevant packages, such as Pandas, Pickle, the AzureML SDK, and so forth. For this example, it's likely that the best model found by automated ML will depend on XGBoost.
+
+```bash
+!pip install xgboost==0.90
+```
+
+```python
+import pandas as pd
+import json
+
+metrics_filename = metrics_output_port._path_on_datastore
+# metrics_filename = path to downloaded file
+with open(metrics_filename) as f:
+ metrics_output_result = f.read()
+
+deserialized_metrics_output = json.loads(metrics_output_result)
+df = pd.DataFrame(deserialized_metrics_output)
+df
+```
+
+The code snippet above shows the metrics file being loaded from its location on the Azure datastore. You can also load it from the downloaded file, as shown in the comment. Once you've deserialized it and converted it to a Pandas DataFrame, you can see detailed metrics for each of the iterations of the automated ML step.
+
+The model file can be deserialized into a `Model` object that you can use for inferencing, further metrics analysis, and so forth.
+
+```python
+import pickle
+
+model_filename = model_output_port._path_on_datastore
+# model_filename = path to downloaded file
+
+with open(model_filename, "rb" ) as f:
+ best_model = pickle.load(f)
+
+# ... inferencing code not shown ...
+```
+
+For more information on loading and working with existing models, see [Use an existing model with Azure Machine Learning](how-to-deploy-and-where.md).
+
+### Download the results of an automated ML run
+
+If you've been following along with the article, you'll have an instantiated `run` object. But you can also retrieve completed `Run` objects from the `Workspace` by way of an `Experiment` object.
+
+The workspace contains a complete record of all your experiments and runs. You can either use the portal to find and download the outputs of experiments or use code. To access the records from a historic run, use Azure Machine Learning to find the ID of the run in which you are interested. With that ID, you can choose the specific `run` by way of the `Workspace` and `Experiment`.
+
+```python
+# Retrieved from Azure Machine Learning web UI
+run_id = 'aaaaaaaa-bbbb-cccc-dddd-0123456789AB'
+experiment = ws.experiments['titanic_automl']
+run = next(run for run in experiment.get_runs() if run.id == run_id)
+```
+
+You would have to change the strings in the above code to the specifics of your historical run. The snippet above assumes that you've assigned `ws` to the relevant `Workspace` with the normal `from_config()`. The experiment of interest is directly retrieved and then the code finds the `Run` of interest by matching the `run.id` value.
+
+Once you have a `Run` object, you can download the metrics and model.
+
+```python
+automl_run = next(r for r in run.get_children() if r.name == 'AutoML_Classification')
+outputs = automl_run.get_outputs()
+metrics = outputs['default_metrics_AutoML_Classification']
+model = outputs['default_model_AutoML_Classification']
+
+metrics.get_port_data_reference().download('.')
+model.get_port_data_reference().download('.')
+```
+
+Each `Run` object contains `StepRun` objects that contain information about the individual pipeline step run. The `run` is searched for the `StepRun` object for the `AutoMLStep`. The metrics and model are retrieved using their default names, which are available even if you don't pass `PipelineData` objects to the `outputs` parameter of the `AutoMLStep`.
+
+Finally, the actual metrics and model are downloaded to your local machine, as was discussed in the "Examine pipeline results" section above.
+
+## Next Steps
+
+- Run this Jupyter notebook showing a [complete example of automated ML in a pipeline](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/machine-learning-pipelines/nyc-taxi-data-regression-model-building/nyc-taxi-data-regression-model-building.ipynb) that uses regression to predict taxi fares
+- [Create automated ML experiments without writing code](../how-to-use-automated-ml-for-ml-models.md)
+- Explore a variety of [Jupyter notebooks demonstrating automated ML](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning)
+- Read about integrating your pipeline in to [End-to-end MLOps](./concept-model-management-and-deployment.md) or investigate the [MLOps GitHub repository](https://github.com/Microsoft/MLOpspython)
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-environments.md
az ml environment download -n myenv -d downloaddir
## Next steps
-* After you have a trained model, learn [how and where to deploy models](/azure/machine-learning/how-to-deploy-managed-online-endpoints).
-* View the [`Environment` class SDK reference](/python/api/azureml-core/azureml.core.environment%28class%29).
+* After you have a trained model, learn [how and where to deploy models](../how-to-deploy-managed-online-endpoints.md).
+* View the [`Environment` class SDK reference](/python/api/azureml-core/azureml.core.environment%28class%29).
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-labeled-dataset.md
+
+ Title: Create and explore datasets with labels
+
+description: Learn how to export data labels from your Azure Machine Learning labeling projects and use them for machine learning tasks.
+ Last updated : 08/17/2022
+#Customer intent: As an experienced Python developer, I need to export my data labels and use them for machine learning tasks.
++
+# Create and explore Azure Machine Learning dataset with labels
+
+In this article, you'll learn how to export the data labels from an Azure Machine Learning data labeling project and load them into popular formats such as, a pandas dataframe for data exploration.
+
+## What are datasets with labels
+
+Azure Machine Learning datasets with labels are referred to as labeled datasets. These specific datasets are [TabularDatasets](/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset) with a dedicated label column and are only created as an output of Azure Machine Learning data labeling projects. Create a data labeling project [for image labeling](../how-to-create-image-labeling-projects.md) or [text labeling](../how-to-create-text-labeling-projects.md). Machine Learning supports data labeling projects for image classification, either multi-label or multi-class, and object identification with bounding boxes.
+
+## Prerequisites
+
+* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro), or access to [Azure Machine Learning studio](https://ml.azure.com/).
+* A Machine Learning workspace. See [Create workspace resources](../quickstart-create-resources.md).
+* Access to an Azure Machine Learning data labeling project. If you don't have a labeling project, first create one for [image labeling](../how-to-create-image-labeling-projects.md) or [text labeling](../how-to-create-text-labeling-projects.md).
+
+## Export data labels
+
+When you complete a data labeling project, you can [export the label data from a labeling project](../how-to-create-image-labeling-projects.md#export-the-labels). Doing so allows you to capture both the reference to the data and its labels, and to export them in [COCO format](http://cocodataset.org/#format-data) or as an Azure Machine Learning dataset.
+
+Use the **Export** button on the **Project details** page of your labeling project.
+
+![Export button in studio UI](./media/how-to-use-labeled-dataset/export-button.png)
+
+### COCO
+
+The COCO file is created in the default blob store of the Azure Machine Learning workspace in a folder within *export/coco*.
+
+>[!NOTE]
+>In object detection projects, the exported `"bbox": [x,y,width,height]` values in the COCO file are normalized (scaled to the range 0-1). For example, a bounding box at location (10, 10) that is 30 pixels wide and 60 pixels high in a 640x480 pixel image is annotated as (0.015625, 0.02083, 0.046875, 0.125). Because the coordinates are normalized, the "width" and "height" values show as '0.0' for all images. To get the actual width and height, use a Python library such as OpenCV or Pillow (PIL).
+
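+A minimal sketch of that conversion, assuming you've downloaded the exported COCO file locally as `coco_export.json` (a hypothetical file name) and that the referenced images are available on disk, might look like this:
+
+```Python
+import json
+from PIL import Image
+
+# load the exported COCO file (hypothetical local file name)
+with open("coco_export.json") as f:
+    coco = json.load(f)
+
+# map image IDs to file names so each annotation can locate its image
+image_files = {image["id"]: image["file_name"] for image in coco["images"]}
+
+for annotation in coco["annotations"]:
+    x, y, w, h = annotation["bbox"]  # normalized values in the range 0-1
+    with Image.open(image_files[annotation["image_id"]]) as img:
+        width, height = img.size     # actual pixel dimensions
+    pixel_bbox = [x * width, y * height, w * width, h * height]
+    print(annotation["image_id"], pixel_bbox)
+```
+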
+### Azure Machine Learning dataset
+
+You can access the exported Azure Machine Learning dataset in the **Datasets** section of your Azure Machine Learning studio. The dataset **Details** page also provides sample code to access your labels from Python.
+
+![Exported dataset](../media/how-to-create-labeling-projects/exported-dataset.png)
+
+> [!TIP]
+> Once you have exported your labeled data to an Azure Machine Learning dataset, you can use AutoML to build computer vision models trained on your labeled data. Learn more at [Set up AutoML to train computer vision models with Python (preview)](../how-to-auto-train-image-models.md)
+
+## Explore labeled datasets via pandas dataframe
+
+Load your labeled datasets into a pandas dataframe with the [`to_pandas_dataframe()`](/python/api/azureml-core/azureml.data.tabulardataset#to-pandas-dataframe-on-error--nullout-of-range-datetime--null--) method, so you can use popular open-source libraries for data exploration. The method requires the `azureml-dataprep` package.
+
+Install the package with the following shell command:
+
+```shell
+pip install azureml-dataprep
+```
+
+In the following code, the `animal_labels` dataset is the output from a labeling project previously saved to the workspace.
+The exported dataset is a [TabularDataset](/python/api/azureml-core/azureml.data.tabular_dataset.tabulardataset). If you plan to use the [download()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-download) or [mount()](/python/api/azureml-core/azureml.data.tabulardataset#azureml-data-tabulardataset-mount) methods, be sure to set the parameter `stream_column='image_url'`.
+
+> [!NOTE]
+> The public preview methods download() and mount() are [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview features, and may change at any time.
++
+```Python
+import azureml.core
+from azureml.core import Dataset, Workspace
+import matplotlib.pyplot as plt
+import matplotlib.image as mpimg
+
+# connect to the workspace that contains the labeled dataset
+workspace = Workspace.from_config()
+
+# get the animal_labels dataset from the workspace
+animal_labels = Dataset.get_by_name(workspace, 'animal_labels')
+animal_pd = animal_labels.to_pandas_dataframe()
+
+# download the images referenced by the dataset to the local machine
+download_path = animal_labels.download(stream_column='image_url')
+
+# read and display the first downloaded image
+img = mpimg.imread(download_path[0])
+imgplot = plt.imshow(img)
+```
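+
+If you'd rather stream the images than copy them locally, a minimal sketch of the experimental `mount()` alternative (using the same `animal_labels` dataset as above) looks like this; the mount is released when the `with` block exits:
+
+```Python
+# stream the images through a local mount point instead of downloading them
+with animal_labels.mount(stream_column='image_url') as mount_context:
+    # images are accessible under this path while the mount is active
+    print(mount_context.mount_point)
+```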
+
+## Next steps
+
+* Learn to [train image classification models in Azure](../tutorial-train-deploy-notebook.md)
+* [Set up AutoML to train computer vision models with Python (preview)](../how-to-auto-train-image-models.md)
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-synapsesparkstep.md
+
+ Title: Use Apache Spark in a machine learning pipeline (preview)
+
+description: Link your Azure Synapse Analytics workspace to your Azure machine learning pipeline to use Apache Spark for data manipulation.
+++++ Last updated : 10/21/2021++
+#Customer intent: As a user of both Azure Machine Learning pipelines and Azure Synapse Analytics, I'd like to use Apache Spark for the data preparation of my pipeline
++
+# How to use Apache Spark (powered by Azure Synapse Analytics) in your machine learning pipeline (preview)
+++++
+In this article, you'll learn how to use Apache Spark pools powered by Azure Synapse Analytics as the compute target for a data preparation step in an Azure Machine Learning pipeline. You'll learn how a single pipeline can use compute resources suited for the specific step, such as data preparation or training. You'll see how data is prepared for the Spark step and how it's passed to the next step.
+++
+## Prerequisites
+
+* Create an [Azure Machine Learning workspace](../quickstart-create-resources.md) to hold all your pipeline resources.
+
+* [Configure your development environment](../how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](../concept-compute-instance.md) with the SDK already installed.
+
+* Create an Azure Synapse Analytics workspace and Apache Spark pool (see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](/azure/synapse-analytics/quickstart-create-apache-spark-pool-studio)).
+
+## Link your Azure Machine Learning workspace and Azure Synapse Analytics workspace
+
+You create and administer your Apache Spark pools in an Azure Synapse Analytics workspace. To integrate an Apache Spark pool with an Azure Machine Learning workspace, you must [link to the Azure Synapse Analytics workspace](../how-to-link-synapse-ml-workspaces.md).
+
+Once your Azure Machine Learning workspace and your Azure Synapse Analytics workspaces are linked, you can attach an Apache Spark pool via
+* [Azure Machine Learning studio](../how-to-link-synapse-ml-workspaces.md#attach-a-pool-via-the-studio)
+* Python SDK ([as elaborated below](#attach-your-apache-spark-pool-as-a-compute-target-for-azure-machine-learning))
+* Azure Resource Manager (ARM) template (see this [Example ARM template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.machinelearningservices/machine-learning-linkedservice-create/azuredeploy.json)).
+ * You can use the command line to follow the ARM template, add the linked service, and attach the Apache Spark pool with the following code:
+
+ ```azurecli
+ az deployment group create --name <deployment-name> --resource-group <rg_name> --template-file "azuredeploy.json" --parameters @"azuredeploy.parameters.json"
+ ```
+
+> [!Important]
+> To link to the Azure Synapse Analytics workspace successfully, you must have the Owner role in the Azure Synapse Analytics workspace resource. Check your access in the Azure portal.
+>
+> The linked service will get a system-assigned managed identity (SAI) when you create it. You must assign this link service SAI the "Synapse Apache Spark administrator" role from Synapse Studio so that it can submit the Spark job (see [How to manage Synapse RBAC role assignments in Synapse Studio](/azure/synapse-analytics/security/how-to-manage-synapse-rbac-role-assignments)).
+>
+> You must also give the user of the Azure Machine Learning workspace the role "Contributor" from Azure portal of resource management.
+
+## Retrieve the link between your Azure Synapse Analytics workspace and your Azure Machine Learning workspace
+
+You can retrieve linked services in your workspace with code such as:
+
+```python
+from azureml.core import Workspace, LinkedService, SynapseWorkspaceLinkedServiceConfiguration
+
+ws = Workspace.from_config()
+
+for service in LinkedService.list(ws) :
+ print(f"Service: {service}")
+
+# Retrieve a known linked service
+linked_service = LinkedService.get(ws, 'synapselink1')
+```
+
+First, `Workspace.from_config()` accesses your Azure Machine Learning workspace using the configuration in `config.json` (see [Create a workspace configuration file](../how-to-configure-environment.md#workspace)). Then, the code prints all of the linked services available in the Workspace. Finally, `LinkedService.get()` retrieves a linked service named `'synapselink1'`.
+
+## Attach your Apache Spark pool as a compute target for Azure Machine Learning
+
+To use your Apache Spark pool to power a step in your machine learning pipeline, you must attach it as a `ComputeTarget` for the pipeline step, as shown in the following code.
+
+```python
+from azureml.core.compute import SynapseCompute, ComputeTarget
+
+attach_config = SynapseCompute.attach_configuration(
+ linked_service = linked_service,
+ type="SynapseSpark",
+ pool_name="spark01") # This name comes from your Synapse workspace
+
+synapse_compute=ComputeTarget.attach(
+ workspace=ws,
+ name='link1-spark01',
+ attach_configuration=attach_config)
+
+synapse_compute.wait_for_completion()
+```
+
+The first step is to configure the `SynapseCompute`. The `linked_service` argument is the `LinkedService` object you created or retrieved in the previous step. The `type` argument must be `SynapseSpark`. The `pool_name` argument in `SynapseCompute.attach_configuration()` must match that of an existing pool in your Azure Synapse Analytics workspace. For more information on creating an Apache Spark pool in the Azure Synapse Analytics workspace, see [Quickstart: Create a serverless Apache Spark pool using Synapse Studio](/azure/synapse-analytics/quickstart-create-apache-spark-pool-studio). The type of `attach_config` is `ComputeTargetAttachConfiguration`.
+
+Once the configuration is created, you create a machine learning `ComputeTarget` by passing in the `Workspace` and `ComputeTargetAttachConfiguration` objects and the name by which you'd like to refer to the compute within the machine learning workspace. The call to `ComputeTarget.attach()` is asynchronous, so the sample calls `wait_for_completion()` to block until the attach operation finishes.
+
+## Create a `SynapseSparkStep` that uses the linked Apache Spark pool
+
+The sample notebook [Spark job on Apache spark pool](https://github.com/azure/machinelearningnotebooks/blob/master/how-to-use-azureml/azure-synapse/spark_job_on_synapse_spark_pool.ipynb) defines a simple machine learning pipeline. First, the notebook defines a data preparation step powered by the `synapse_compute` defined in the previous step. Then, the notebook defines a training step powered by a compute target better suited for training. The sample notebook uses the Titanic survival database to demonstrate data input and output; it doesn't actually clean the data or make a predictive model. Since there's no real training in this sample, the training step uses an inexpensive, CPU-based compute resource.
+
+Data flows into a machine learning pipeline by way of `DatasetConsumptionConfig` objects, which can hold tabular data or sets of files. The data often comes from files in blob storage in a workspace's datastore. The following code shows some typical code for creating input for a machine learning pipeline:
+
+```python
+from azureml.core import Dataset
+
+datastore = ws.get_default_datastore()
+file_name = 'Titanic.csv'
+
+titanic_tabular_dataset = Dataset.Tabular.from_delimited_files(path=[(datastore, file_name)])
+step1_input1 = titanic_tabular_dataset.as_named_input("tabular_input")
+
+# Example only: in practice you wouldn't read the same data both as a tabular dataset and as a file dataset
+titanic_file_dataset = Dataset.File.from_files(path=[(datastore, file_name)])
+step1_input2 = titanic_file_dataset.as_named_input("file_input").as_hdfs()
+```
+
+The above code assumes that the file `Titanic.csv` is in blob storage. The code shows how to read the file as a `TabularDataset` and as a `FileDataset`. This code is for demonstration purposes only; in a real pipeline you wouldn't duplicate inputs or interpret a single data source both as a table and as a plain file.
+
+> [!IMPORTANT]
+> In order to use a `FileDataset` as input, your `azureml-core` version must be at least `1.20.0`. How to specify this using the `Environment` class is discussed below.
+
+When a step completes, you may choose to store output data using code similar to:
+
+```python
+from azureml.data import HDFSOutputDatasetConfig
+step1_output = HDFSOutputDatasetConfig(destination=(datastore,"test")).register_on_complete(name="registered_dataset")
+```
+
+In this case, the data would be stored in the `datastore` in a file called `test` and would be available within the machine learning workspace as a `Dataset` with the name `registered_dataset`.
+
+In addition to data, a pipeline step may have per-step Python dependencies. Individual `SynapseSparkStep` objects can specify their precise Azure Synapse Apache Spark configuration, as well. This is shown in the following code, which specifies that the `azureml-core` package version must be at least `1.20.0`. (As mentioned previously, this requirement for `azureml-core` is needed to use a `FileDataset` as an input.)
+
+```python
+from azureml.core.environment import Environment
+from azureml.pipeline.steps import SynapseSparkStep
+
+env = Environment(name="myenv")
+env.python.conda_dependencies.add_pip_package("azureml-core>=1.20.0")
+
+step_1 = SynapseSparkStep(name = 'synapse-spark',
+ file = 'dataprep.py',
+ source_directory="./code",
+ inputs=[step1_input1, step1_input2],
+ outputs=[step1_output],
+ arguments = ["--tabular_input", step1_input1,
+ "--file_input", step1_input2,
+ "--output_dir", step1_output],
+ compute_target = 'link1-spark01',
+ driver_memory = "7g",
+ driver_cores = 4,
+ executor_memory = "7g",
+ executor_cores = 2,
+ num_executors = 1,
+ environment = env)
+```
+
+The above code specifies a single step in the Azure machine learning pipeline. This step's `environment` specifies a specific `azureml-core` version and could add other conda or pip dependencies as necessary.
+
+The `SynapseSparkStep` zips the `./code` subdirectory on the local computer and uploads it. That directory is recreated on the compute server, and the step runs the file `dataprep.py` from that directory. The `inputs` and `outputs` of that step are the `step1_input1`, `step1_input2`, and `step1_output` objects previously discussed. The easiest way to access those values within the `dataprep.py` script is to associate them with named `arguments`.
+
+The next set of arguments to the `SynapseSparkStep` constructor controls Apache Spark. The `compute_target` is the `'link1-spark01'` compute that you attached previously. The other parameters specify the memory and cores to use.
+
+The sample notebook uses the following code for `dataprep.py`:
+
+```python
+import os
+import sys
+import azureml.core
+from pyspark.sql import SparkSession
+from azureml.core import Run, Dataset
+
+print(azureml.core.VERSION)
+print(os.environ)
+
+import argparse
+parser = argparse.ArgumentParser()
+parser.add_argument("--tabular_input")
+parser.add_argument("--file_input")
+parser.add_argument("--output_dir")
+args = parser.parse_args()
+
+# use dataset sdk to read tabular dataset
+run_context = Run.get_context()
+dataset = Dataset.get_by_id(run_context.experiment.workspace, id=args.tabular_input)
+sdf = dataset.to_spark_dataframe()
+sdf.show()
+
+# use hdfs path to read file dataset
+spark= SparkSession.builder.getOrCreate()
+sdf = spark.read.option("header", "true").csv(args.file_input)
+sdf.show()
+
+sdf.coalesce(1).write\
+.option("header", "true")\
+.mode("append")\
+.csv(args.output_dir)
+```
+
+This "data preparation" script doesn't do any real data transformation, but illustrates how to retrieve data, convert it to a spark dataframe, and how to do some basic Apache Spark manipulation. You can find the output in Azure Machine Learning Studio by opening the child job, choosing the **Outputs + logs** tab, and opening the `logs/azureml/driver/stdout` file, as shown in the following figure.
++
+## Use the `SynapseSparkStep` in a pipeline
+
+The following example uses the output from the `SynapseSparkStep` created in the [previous section](#create-a-synapsesparkstep-that-uses-the-linked-apache-spark-pool). Other steps in the pipeline may have their own unique environments and run on different compute resources appropriate to the task at hand. The sample notebook runs the "training step" on a small CPU cluster:
+
+```python
+from azureml.core.compute import AmlCompute, ComputeTarget
+from azureml.pipeline.steps import PythonScriptStep
+
+cpu_cluster_name = "cpucluster"
+
+if cpu_cluster_name in ws.compute_targets:
+ cpu_cluster = ComputeTarget(workspace=ws, name=cpu_cluster_name)
+ print('Found existing cluster, use it.')
+else:
+ compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2', max_nodes=1)
+ cpu_cluster = ComputeTarget.create(ws, cpu_cluster_name, compute_config)
+ print('Allocating new CPU compute cluster')
+
+cpu_cluster.wait_for_completion(show_output=True)
+
+step2_input = step1_output.as_input("step2_input").as_download()
+
+step_2 = PythonScriptStep(script_name="train.py",
+ arguments=[step2_input],
+ inputs=[step2_input],
+ compute_target=cpu_cluster_name,
+ source_directory="./code",
+ allow_reuse=False)
+```
+
+The code above creates the new compute resource if necessary. Then, the `step1_output` result is converted to input for the training step. The `as_download()` option means that the data is moved onto the compute resource, resulting in faster access. If the data were so large that it wouldn't fit on the local compute hard drive, you would use the `as_mount()` option to stream the data via the FUSE filesystem. The `compute_target` of this second step is `'cpucluster'`, not the `'link1-spark01'` resource you used in the data preparation step. This step uses a simple program `train.py` instead of the `dataprep.py` you used in the previous step. You can see the details of `train.py` in the sample notebook.
+
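+A minimal sketch of that streaming alternative, reusing the same `step1_output` object, changes only the input definition:
+
+```python
+# mount (stream) the step 1 output instead of downloading it onto the training compute
+step2_input_mounted = step1_output.as_input("step2_input").as_mount()
+```
+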
+Once you've defined all of your steps, you can create and run your pipeline.
+
+```python
+from azureml.pipeline.core import Pipeline
+
+pipeline = Pipeline(workspace=ws, steps=[step_1, step_2])
+pipeline_run = pipeline.submit('synapse-pipeline', regenerate_outputs=True)
+```
+
+The above code creates a pipeline consisting of the data preparation step on Apache Spark pools powered by Azure Synapse Analytics (`step_1`) and the training step (`step_2`). Azure calculates the execution graph by examining the data dependencies between the steps. In this case, there's only one straightforward dependency: `step2_input` requires `step1_output`.
+
+The call to `pipeline.submit` creates, if necessary, an experiment called `synapse-pipeline` and asynchronously starts a job within it. Individual steps within the pipeline run as child jobs of this main job and can be monitored and reviewed on the Experiments page of the studio.
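+
+A minimal sketch, using the `pipeline_run` object from the previous snippet, of how you might monitor the job and its child step jobs from the SDK:
+
+```python
+# block until the whole pipeline finishes, streaming log output to the console
+pipeline_run.wait_for_completion(show_output=True)
+
+# inspect the status of each child (step) job
+for step_run in pipeline_run.get_steps():
+    print(step_run.id, step_run.get_status())
+```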
+
+## Next steps
+
+* [Publish and track machine learning pipelines](how-to-deploy-pipelines.md)
+* [Monitor Azure Machine Learning](../monitor-azure-machine-learning.md)
+* [Use automated ML in an Azure Machine Learning pipeline in Python](how-to-use-automlstep-in-pipelines.md)
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-version-track-datasets.md
+
+ Title: Dataset versioning
+
+description: Learn how to version machine learning datasets and how versioning works with machine learning pipelines.
+++++ Last updated : 08/17/2022++
+#Customer intent: As a data scientist, I want to version and track datasets so I can use and share them across multiple machine learning experiments.
++
+# Version and track Azure Machine Learning datasets
++
+In this article, you'll learn how to version and track Azure Machine Learning datasets for reproducibility. Dataset versioning is a way to bookmark the state of your data so that you can apply a specific version of the dataset for future experiments.
+
+Typical versioning scenarios:
+
+* When new data is available for retraining
+* When you're applying different data preparation or feature engineering approaches
+
+## Prerequisites
+
+For this tutorial, you need:
+
+- [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install). This SDK includes the [azureml-datasets](/python/api/azureml-core/azureml.core.dataset) package.
+
+- An [Azure Machine Learning workspace](../concept-workspace.md). Retrieve an existing one by running the following code, or [create a new workspace](../quickstart-create-resources.md).
+
+ ```Python
+ import azureml.core
+ from azureml.core import Workspace
+
+ ws = Workspace.from_config()
+ ```
+- An [Azure Machine Learning dataset](how-to-create-register-datasets.md).
+
+<a name="register"></a>
+
+## Register and retrieve dataset versions
+
+By registering a dataset, you can version, reuse, and share it across experiments and with colleagues. You can register multiple datasets under the same name and retrieve a specific version by name and version number.
+
+### Register a dataset version
+
+The following code registers a new version of the `titanic_ds` dataset by setting the `create_new_version` parameter to `True`. If there's no existing `titanic_ds` dataset registered with the workspace, the code creates a new dataset with the name `titanic_ds` and sets its version to 1.
+
+```Python
+titanic_ds = titanic_ds.register(workspace = workspace,
+ name = 'titanic_ds',
+ description = 'titanic training data',
+ create_new_version = True)
+```
+
+### Retrieve a dataset by name
+
+By default, the [get_by_name()](/python/api/azureml-core/azureml.core.dataset.dataset#get-by-name-workspace--name--version--latest--) method on the `Dataset` class returns the latest version of the dataset registered with the workspace.
+
+The following code gets version 1 of the `titanic_ds` dataset.
+
+```Python
+from azureml.core import Dataset
+# Get a dataset by name and version number
+titanic_ds = Dataset.get_by_name(workspace = workspace,
+ name = 'titanic_ds',
+ version = 1)
+```
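+
+To get the most recent version instead, omit the `version` parameter (or pass `version='latest'`), as in this minimal sketch that reuses the same workspace and dataset name:
+
+```Python
+# omitting version returns the newest registered version of the dataset
+latest_titanic_ds = Dataset.get_by_name(workspace=workspace, name='titanic_ds')
+print(latest_titanic_ds.version)
+```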
+
+<a name="best-practice"></a>
+
+## Versioning best practice
+
+When you create a dataset version, you're *not* creating an extra copy of the data in the workspace. Because datasets are references to the data in your storage service, you have a single source of truth, managed by your storage service.
+
+>[!IMPORTANT]
+> If the data referenced by your dataset is overwritten or deleted, calling a specific version of the dataset does *not* revert the change.
+
+When you load data from a dataset, the current data content referenced by the dataset is always loaded. If you want to make sure that each dataset version is reproducible, we recommend that you not modify data content referenced by the dataset version. When new data comes in, save new data files into a separate data folder and then create a new dataset version to include data from that new folder.
+
+The following image and sample code show the recommended way to structure your data folders and to create dataset versions that reference those folders:
+
+![Folder structure](./media/how-to-version-track-datasets/folder-image.png)
+
+```Python
+from azureml.core import Dataset
+
+# get the default datastore of the workspace
+datastore = workspace.get_default_datastore()
+
+# create & register weather_ds version 1 pointing to all files in the folder of week 27
+datastore_path1 = [(datastore, 'Weather/week 27')]
+dataset1 = Dataset.File.from_files(path=datastore_path1)
+dataset1.register(workspace = workspace,
+ name = 'weather_ds',
+ description = 'weather data in week 27',
+ create_new_version = True)
+
+# create & register weather_ds version 2 pointing to all files in the folder of week 27 and 28
+datastore_path2 = [(datastore, 'Weather/week 27'), (datastore, 'Weather/week 28')]
+dataset2 = Dataset.File.from_files(path = datastore_path2)
+dataset2.register(workspace = workspace,
+ name = 'weather_ds',
+ description = 'weather data in week 27, 28',
+ create_new_version = True)
+
+```
+
+<a name="pipeline"></a>
+
+## Version an ML pipeline output dataset
+
+You can use a dataset as the input and output of each [ML pipeline](../concept-ml-pipelines.md) step. When you rerun pipelines, the output of each pipeline step is registered as a new dataset version.
+
+ML pipelines populate the output of each step into a new folder every time the pipeline reruns. This behavior allows the versioned output datasets to be reproducible. Learn more about [datasets in pipelines](./how-to-create-machine-learning-pipelines.md#steps).
+
+```Python
+from azureml.core import Dataset
+from azureml.pipeline.steps import PythonScriptStep
+from azureml.pipeline.core import Pipeline, PipelineData
+from azureml.core.runconfig import CondaDependencies, RunConfiguration
+
+# get input dataset
+input_ds = Dataset.get_by_name(workspace, 'weather_ds')
+
+# register pipeline output as dataset
+output_ds = PipelineData('prepared_weather_ds', datastore=datastore).as_dataset()
+output_ds = output_ds.register(name='prepared_weather_ds', create_new_version=True)
+
+conda = CondaDependencies.create(
+ pip_packages=['azureml-defaults', 'azureml-dataprep[fuse,pandas]'],
+ pin_sdk_version=False)
+
+run_config = RunConfiguration()
+run_config.environment.docker.enabled = True
+run_config.environment.python.conda_dependencies = conda
+
+# configure pipeline step to use dataset as the input and output
+prep_step = PythonScriptStep(script_name="prepare.py",
+ inputs=[input_ds.as_named_input('weather_ds')],
+ outputs=[output_ds],
+ runconfig=run_config,
+ compute_target=compute_target,
+ source_directory=project_folder)
+```
+
+<a name="track"></a>
+
+## Track data in your experiments
+
+Azure Machine Learning tracks your data throughout your experiment as input and output datasets.
+
+The following are scenarios where your data is tracked as an **input dataset**.
+
+* As a `DatasetConsumptionConfig` object through either the `inputs` or `arguments` parameter of your `ScriptRunConfig` object when submitting the experiment job.
+
+* When methods like `get_by_name()` or `get_by_id()` are called in your script. For this scenario, the name assigned to the dataset when you registered it to the workspace is the name displayed.
+
+The following are scenarios where your data is tracked as an **output dataset**.
+
+* Pass an `OutputFileDatasetConfig` object through either the `outputs` or `arguments` parameter when submitting an experiment job (see the sketch after this list). `OutputFileDatasetConfig` objects can also be used to persist data between pipeline steps. See [Move data between ML pipeline steps](how-to-move-data-in-out-of-pipelines.md).
+
+* Register a dataset in your script. For this scenario, the name assigned to the dataset when you registered it to the workspace is the name displayed. In the following example, `training_ds` is the name that would be displayed.
+
+ ```Python
+ training_ds = unregistered_ds.register(workspace = workspace,
+ name = 'training_ds',
+ description = 'training data'
+ )
+ ```
+
+* Submit a child job with an unregistered dataset in your script. This submission results in an anonymous saved dataset.
+
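+The following minimal sketch shows the input and output tracking patterns together. It assumes that `workspace`, a registered file dataset named `weather_ds`, a compute target named `cpucluster`, and a `prepare.py` script in `./code` already exist; those names are illustrative:
+
+```Python
+from azureml.core import Dataset, Experiment, ScriptRunConfig
+from azureml.data import OutputFileDatasetConfig
+
+# input dataset: tracked because it's passed as an argument to the job
+input_ds = Dataset.get_by_name(workspace, 'weather_ds')
+
+# output dataset: tracked, and registered when the job completes
+output = OutputFileDatasetConfig(name='prepared_weather').register_on_complete(name='prepared_weather_ds')
+
+src = ScriptRunConfig(source_directory='./code',
+                      script='prepare.py',
+                      arguments=[input_ds.as_named_input('weather_ds').as_download(), output],
+                      compute_target='cpucluster')
+
+run = Experiment(workspace, 'track-datasets-example').submit(src)
+```
+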
+### Trace datasets in experiment jobs
+
+For each Machine Learning experiment, you can easily trace the datasets used as input with the experiment `Job` object.
+
+The following code uses the [`get_details()`](/python/api/azureml-core/azureml.core.run.run#get-details--) method to track which input datasets were used with the experiment run:
+
+```Python
+# get input datasets
+inputs = run.get_details()['inputDatasets']
+input_dataset = inputs[0]['dataset']
+
+# list the files referenced by input_dataset
+input_dataset.to_path()
+```
+
+You can also find the `input_datasets` from experiments by using [Azure Machine Learning studio](https://ml.azure.com/).
+
+The following image shows where to find the input dataset of an experiment on Azure Machine Learning studio. For this example, go to your **Experiments** pane and open the **Properties** tab for a specific run of your experiment, `keras-mnist`.
+
+![Input datasets](./media/how-to-version-track-datasets/input-datasets.png)
+
+Use the following code to register models with datasets:
+
+```Python
+model = run.register_model(model_name='keras-mlp-mnist',
+ model_path=model_path,
+ datasets =[('training data',train_dataset)])
+```
+
+After registration, you can see the list of models registered with the dataset by using Python or go to the [studio](https://ml.azure.com/).
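+
+A minimal sketch of the Python route, under the assumption that `Model.list()` in your installed `azureml-core` version supports the `dataset_id` filter, and reusing `workspace` and `train_dataset` from the snippet above:
+
+```Python
+from azureml.core import Model
+
+# list models that were registered with this dataset (dataset_id filter assumed available)
+for model in Model.list(workspace, dataset_id=train_dataset.id):
+    print(model.name, model.version)
+```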
+
+The following view is from the **Datasets** pane under **Assets**. Select the dataset and then select the **Models** tab for a list of the models that are registered with the dataset.
+
+![Input datasets models](./media/how-to-version-track-datasets/dataset-models.png)
+
+## Next steps
+
+* [Train with datasets](how-to-train-with-datasets.md)
+* [More sample dataset notebooks](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/work-with-data/)
machine-learning Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/introduction.md
Title: Machine Learning CLI (v1)
+ Title: Machine Learning SDK & CLI (v1)
description: Learn about the machine learning extension for the Azure CLI (v1).
marketplace Add In Submission Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/add-in-submission-guide.md
This article is a step-by-step guide that will detail how to submit your app to
- **Will your app be listed in the Apple Store?** If so, update your Apple Developer ID in Account Settings in Partner Center before publishing the app. You'll see a warning or a note to remind you to enter this information on screen. If you don't enter this information, your app will not be available for acquisition on iOS mobile devices, but the app will appear to be available to use on iOS devices after you acquire the app on another type of device. > [!NOTE]
- > Only users with Developer Account Owner or Developer Account Manager roles can update Apple ID in [Account Settings](/azure/marketplace/manage-account).
+ > Only users with Developer Account Owner or Developer Account Manager roles can update Apple ID in [Account Settings](./manage-account.md).
- **Does your app use Azure Active Directory or SSO (Azure AD/SSO)?** If so, select the box that asks about this. - **Does your app require additional purchases?**
Once you have answered those questions for yourself, select the submit button on
Expect a response within three to four business days from our reviewers if there are any issues related to your submission. > [!TIP]
-> After publishing an offer, the [owner](/azure/marketplace/user-roles) of your developer account is notified of the publishing status and required actions through email and the Action Center in Partner Center. For more information about Action Center, see [Action Center Overview](/partner-center/action-center-overview).
+> After publishing an offer, the [owner](./user-roles.md) of your developer account is notified of the publishing status and required actions through email and the Action Center in Partner Center. For more information about Action Center, see [Action Center Overview](/partner-center/action-center-overview).
marketplace Azure App Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-managed.md
Previously updated : 06/28/2022 Last updated : 08/18/2022 # Configure a managed application plan
In the **Version** box provide the current version of the technical configuratio
### Upload a package file
+Make sure your offer is compliant with our recommended practices by using the [ARM template test toolkit](/azure/azure-resource-manager/templates/test-toolkit#validate-templates-for-azure-marketplace) before uploading the package file.
+ Under **Package file (.zip)**, drag your package file to the gray box or select the **browse for your file(s)** link. > [!NOTE]
marketplace Azure App Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-solution.md
In the **Version** box provide the current version of the technical configuratio
### Upload a package file
+Make sure your offer is compliant with our recommended practices by using the [ARM template test toolkit](/azure/azure-resource-manager/templates/test-toolkit#validate-templates-for-azure-marketplace) before uploading the package file.
+ Under **Package file (.zip)**, drag your package file to the gray box or select the **browse for your file(s)** link. > [!NOTE]
marketplace Manage Account Settings And Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/manage-account-settings-and-profile.md
After you set your payout hold status to **On**, all payouts will be on hold unt
## Multi-user account management
-Partner Center uses [Azure Active Directory](/azure/active-directory/fundamentals/active-directory-whatis) (Azure AD) for multi-user account access and management. Your organization's Azure AD is automatically associated with your Partner Center account as part of the enrollment process.
+Partner Center uses [Azure Active Directory](../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) for multi-user account access and management. Your organization's Azure AD is automatically associated with your Partner Center account as part of the enrollment process.
## Manage users
To create a brand new Azure AD tenant with your Partner Center account:
1. Select **Create** to confirm the new domain and account info. 1. Sign in with your new Azure AD global administrator username and password to begin [adding and managing users](#manage-users).
-For more information about creating new tenants inside your Azure portal, rather than via the Partner Center portal, see the article [Create a new tenant in Azure Active Directory](/azure/active-directory/fundamentals/active-directory-access-create-new-tenant).
+For more information about creating new tenants inside your Azure portal, rather than via the Partner Center portal, see the article [Create a new tenant in Azure Active Directory](../active-directory/fundamentals/active-directory-access-create-new-tenant.md).
### Remove a tenant
When you remove a tenant, all users that were added to the Partner Center accoun
On the **Agreements** page (under **Account Settings**), you can see a list of the publishing agreements that you've authorized. These agreements are listed according to name and version number, including the date it was accepted and the name of the user that accepted the agreement.
-**Actions needed** might appear at the top of this page if there are agreement updates that need your attention. To accept an updated agreement, first read the linked Agreement Version, then select **Accept agreement**.
+**Actions needed** might appear at the top of this page if there are agreement updates that need your attention. To accept an updated agreement, first read the linked Agreement Version, then select **Accept agreement**.
marketplace Monetize Addins Through Microsoft Commercial Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/monetize-addins-through-microsoft-commercial-marketplace.md
To begin submitting your SaaS offer, you must create an account in the Commercia
You must register a SaaS application using the Microsoft Azure Portal. After a successful registration, you will receive an Azure Active Directory (Azure AD) security token that you can use to access the SaaS fulfillment APIs. Any application that wants to use the capabilities of Azure AD must first be registered in an Azure AD tenant. This registration process involves giving Azure AD details about your application, such as the URL where it's located, the URL to send replies after a user is authenticated, the URI that identifies the app, and so on.
-For details about how to register, see [Register an Azure AD-secured app](/azure/marketplace/partner-center-portal/pc-saas-registration#register-an-azure-ad-secured-app).
+For details about how to register, see [Register an Azure AD-secured app](./partner-center-portal/pc-saas-registration.md#register-an-azure-ad-secured-app).
### Create your licensing database
Your app should have three states:
- User signed in, no license associated - User signed in, license associated
-For information about authenticating with Azure AD from within your add-in, see [Office Dialog API](/office/dev/add-ins/develop/auth-with-office-dialog-api) and [Microsoft identity platform](/azure/active-directory/develop/v2-overview).
+For information about authenticating with Azure AD from within your add-in, see [Office Dialog API](/office/dev/add-ins/develop/auth-with-office-dialog-api) and [Microsoft identity platform](../active-directory/develop/v2-overview.md).
### Code sample: Move from paid apps to paid web apps with free apps
Review the information on the [Welcome to Microsoft Partner Center](https://part
### Where can I find documentation about integrating with Azure Active Directory?
-For extensive documentation, samples, and guidance, see [Microsoft identity platform overview](/azure/active-directory/develop/v2-overview).
+For extensive documentation, samples, and guidance, see [Microsoft identity platform overview](../active-directory/develop/v2-overview.md).
We recommend that you have a subscription dedicated to your Azure Marketplace publishing, to isolate the work from other initiatives. Then you can start deploying your SaaS application in this subscription to start the development work. You can also check for [Azure AD service updates](https://azure.microsoft.com/updates/?product=active-directory). ### How does my app authenticate a user with Azure AD?
-Office provides the [Office Dialog API](/office/dev/add-ins/develop/auth-with-office-dialog-api) to enable you to authenticate users from within your add-in. For more information, see [Microsoft identity platform](/azure/active-directory/develop/v2-overview).
+Office provides the [Office Dialog API](/office/dev/add-ins/develop/auth-with-office-dialog-api) to enable you to authenticate users from within your add-in. For more information, see [Microsoft identity platform](../active-directory/develop/v2-overview.md).
### What reports will I receive from Commercial Marketplace about my SaaS offer?
marketplace Plan Azure App Managed App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-managed-app.md
Previously updated : 06/03/2022 Last updated : 08/18/2022 # Plan an Azure managed application for an Azure application offer
Maximum file sizes supported are:
- Up to 1 Gb in total compressed .zip archive size - Up to 1 Gb for any individual uncompressed file within the .zip archive
+> [!TIP]
+> Make sure your offer is compliant with our recommended practices by using the [ARM template test toolkit](/azure/azure-resource-manager/templates/test-toolkit#validate-templates-for-azure-marketplace) before publishing your Azure Application.
+ ## Azure regions You can publish your plan to the Azure public region, Azure Government region, or both. Before publishing to [Azure Government](../azure-government/documentation-government-manage-marketplace-partners.md), test and validate your plan in the environment as certain endpoints may differ. To set up and test your plan, request a trial account from [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/).
marketplace Plan Azure App Solution Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-app-solution-template.md
Previously updated : 05/25/2022 Last updated : 08/18/2022 # Plan a solution template for an Azure application offer
The solution template plan type requires an [Azure Resource Manager template (AR
## Deployment package
-The deployment package contains all of the template files needed for this plan, as well as any additional resources, packaged as a .zip file.
+The deployment package contains all the template files needed for this plan, as well as any additional resources, packaged as a .zip file.
All Azure applications must include these two files in the root folder of a .zip archive:
Maximum file sizes supported are:
- Up to 1 Gb in total compressed .zip archive size - Up to 1 Gb for any individual uncompressed file within the .zip archive
+> [!TIP]
+> Make sure your offer is compliant with our recommended practices by using the [ARM template test toolkit](/azure/azure-resource-manager/templates/test-toolkit#validate-templates-for-azure-marketplace) before publishing your Azure Application.
+ ## Azure regions You can publish your plan to the Azure public region, Azure Government region, or both. Before publishing to [Azure Government](../azure-government/documentation-government-manage-marketplace-partners.md), test and validate your plan in the environment as certain endpoints may differ. To set up and test your plan, request a trial account from [Microsoft Azure Government trial](https://azure.microsoft.com/global-infrastructure/government/request/).
marketplace Plan Azure Application Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-azure-application-offer.md
Previously updated : 06/29/2022 Last updated : 08/18/2022 # Tutorial: Plan an Azure Application offer
Review the following resources as you plan your Azure application offer for the
- [Azure CLI](../azure-resource-manager/managed-applications/cli-samples.md) - [Azure PowerShell](../azure-resource-manager/managed-applications/powershell-samples.md) - [Managed application solutions](../azure-resource-manager/managed-applications/sample-projects.md)
+- Testing resources
+ - [ARM template test toolkit](/azure/azure-resource-manager/templates/test-toolkit#validate-templates-for-azure-marketplace)
The video [Building Solution Templates, and Managed Applications for Azure Marketplace](/Events/Build/2018/BRK3603) gives a comprehensive introduction to the Azure application offer type:
migrate Tutorial Migrate Aws Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-aws-virtual-machines.md
After you've verified that the test migration works as expected, you can migrate
- For increased security: - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md). - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management: - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
migrate Tutorial Migrate Gcp Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-gcp-virtual-machines.md
After you've verified that the test migration works as expected, you can migrate
- For increased security: - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md). - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management: - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
migrate Tutorial Migrate Hyper V https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-hyper-v.md
Title: Migrate Hyper-V VMs to Azure with Azure Migrate Server Migration description: Learn how to migrate on-premises Hyper-V VMs to Azure with Azure Migrate Server Migration--++ ms. Last updated 06/20/2022
With discovery completed, you can begin replication of Hyper-V VMs to Azure.
- **OS disk**: Specify the OS (boot) disk for the VM. The OS disk is the disk that has the operating system bootloader and installer. - **Availability Set**: If the VM should be in an Azure availability set after migration, specify the set. The set must be in the target resource group you specify for the migration.
-1. In **Disks**, specify the VM disks that needs to be replicated to Azure. Then click **Next**.
+1. In **Disks**, specify the VM disks that need to be replicated to Azure. Then click **Next**.
- You can exclude disks from replication. - If you exclude disks, they won't be present on the Azure VM after migration.
After you've verified that the test migration works as expected, you can migrate
- For increased security: - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md). - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management: - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
After you've verified that the test migration works as expected, you can migrate
- For increased security: - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md). - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management: - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
migrate Tutorial Migrate Vmware Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-agent.md
Assign the Virtual Machine Contributor role to the account, so that you have per
### Assign permissions to register the Replication Appliance in Azure AD
-If you are following the least privilege principle, assign the **Application Developer** Azure AD role to the user registering the Replication Appliance. Follow the [Assign administrator and non-administrator roles to users with Azure Active Directory](/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal) guide to do so.
+If you are following the least privilege principle, assign the **Application Developer** Azure AD role to the user registering the Replication Appliance. Follow the [Assign administrator and non-administrator roles to users with Azure Active Directory](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md) guide to do so.
> [!IMPORTANT] > If the user registering the Replication Appliance is an Azure AD Global administrator, that user already has the required permissions.
After you've verified that the test migration works as expected, you can migrate
- For increased security: - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md). - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management: - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
After you've verified that the test migration works as expected, you can migrate
## Next steps
-Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
migrate Tutorial Migrate Vmware Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware-powershell.md
Write-Output $MigrateJob.State
- For increased security: - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md). - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management: - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
migrate Tutorial Migrate Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-vmware.md
After you've verified that the test migration works as expected, you can migrate
- For increased security: - Lock down and limit inbound traffic access with [Microsoft Defender for Cloud - Just in time administration](../security-center/security-center-just-in-time.md). - Restrict network traffic to management endpoints with [Network Security Groups](../virtual-network/network-security-groups-overview.md).
- - Deploy [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) to help secure disks, and keep data safe from theft and unauthorized access.
+ - Deploy [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) to help secure disks, and keep data safe from theft and unauthorized access.
- Read more about [securing IaaS resources](https://azure.microsoft.com/services/virtual-machines/secure-well-managed-iaas/), and visit the [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/). - For monitoring and management: - Consider deploying [Azure Cost Management](../cost-management-billing/cost-management-billing-overview.md) to monitor resource usage and spending.
mysql How To Server Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/how-to-server-logs-cli.md
Here are the details for the above command
LastModifiedTime | Name | ResourceGroup | SizeInKb | TypePropertiesType | Url ||||||
-2022-08-01T11:09:48+00:00 | mysql-slow-serverlogdemo-2022073111.log | myresourcegroup | 10947 | slowlog | https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022073111.log?
-2022-08-02T11:10:00+00:00 | mysql-slow-serverlogdemo-2022080111.log | myresourcegroup | 10927 | slowlog | https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022080111.log?
-2022-08-03T11:10:12+00:00 | mysql-slow-serverlogdemo-2022080211.log | myresourcegroup | 10936 | slowlog | https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022080211.log?
-2022-08-03T11:12:00+00:00 | mysql-slow-serverlogdemo-2022080311.log | myresourcegroup | 8920 | slowlog | https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022080311.log?
+2022-08-01T11:09:48+00:00 | mysql-slow-serverlogdemo-2022073111.log | myresourcegroup | 10947 | slowlog | `https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022073111.log?`
+2022-08-02T11:10:00+00:00 | mysql-slow-serverlogdemo-2022080111.log | myresourcegroup | 10927 | slowlog | `https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022080111.log?`
+2022-08-03T11:10:12+00:00 | mysql-slow-serverlogdemo-2022080211.log | myresourcegroup | 10936 | slowlog | `https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022080211.log?`
+2022-08-03T11:12:00+00:00 | mysql-slow-serverlogdemo-2022080311.log | myresourcegroup | 8920 | slowlog | `https://00000000000.file.core.windows.net/0000000serverlog/slowlogs/mysql-slow-serverlogdemo-2022080311.log?`
The preceding list shows the LastModifiedTime, Name, ResourceGroup, SizeInKb, and download URL of the available server logs.
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
description: This article explains how to use the NSG flow logs feature of Azure Network Watcher. documentationcenter: na-+ na Last updated 01/04/2021-+
Also, when a NSG is deleted, by default the associated flow log resource is dele
**Flow Logging Costs**: NSG flow logging is billed on the volume of logs produced. High traffic volume can result in large flow log volume and the associated costs. NSG Flow log pricing does not include the underlying costs of storage. Using the retention policy feature with NSG Flow Logging means incurring separate storage costs for extended periods of time. If you want to retain data forever and do not want to apply any retention policy, set retention (days) to 0. For more information, see [Network Watcher Pricing](https://azure.microsoft.com/pricing/details/network-watcher/) and [Azure Storage Pricing](https://azure.microsoft.com/pricing/details/storage/) for additional details.
-**Issues with User-defined Inbound TCP rules**: [Network Security Groups (NSGs)](../virtual-network/network-security-groups-overview.md) are implemented as a [Stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). However, due to current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless fashion. Due to this, flows affected by user-defined inbound rules become non-terminating. Additionally byte and packet counts are not recorded for these flows. Consequently the number of bytes and packets reported in NSG Flow Logs (and Traffic Analytics) could be different from actual numbers. This can be resolved by setting the [FlowTimeoutInMinutes](/powershell/module/az.network/set-azvirtualnetwork) property on the associated virtual networks to a non-null value.
+**Issues with User-defined Inbound TCP rules**: [Network Security Groups (NSGs)](../virtual-network/network-security-groups-overview.md) are implemented as a [Stateful firewall](https://en.wikipedia.org/wiki/Stateful_firewall?oldformat=true). However, due to current platform limitations, user-defined rules that affect inbound TCP flows are implemented in a stateless fashion. Due to this, flows affected by user-defined inbound rules become non-terminating. Additionally, byte and packet counts are not recorded for these flows. Consequently, the number of bytes and packets reported in NSG Flow Logs (and Traffic Analytics) could be different from actual numbers. This can be resolved by setting the [FlowTimeoutInMinutes](/powershell/module/az.network/set-azvirtualnetwork) property on the associated virtual networks to a non-null value. Default stateful behavior can be achieved by setting FlowTimeoutInMinutes to 4 minutes. For long-running connections, where you do not want flows disconnecting from a service or destination, FlowTimeoutInMinutes can be set to a value up to 30 minutes.
+```powershell
+$virtualNetwork = Get-AzVirtualNetwork -Name VnetName -ResourceGroupName RgName
+$virtualNetwork.FlowTimeoutInMinutes = 4
+$virtualNetwork | Set-AzVirtualNetwork
+```
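To check that the change took effect, you can read the value back. A minimal sketch with the Azure CLI, assuming your CLI version surfaces the `flowTimeoutInMinutes` property on the virtual network and reusing the placeholder names above:

```bash
# Read back the configured flow timeout (placeholder resource names).
az network vnet show \
  --name VnetName \
  --resource-group RgName \
  --query flowTimeoutInMinutes
```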
**Inbound flows logged from internet IPs to VMs without public IPs**: VMs that don't have a public IP address assigned (via a public IP address associated with the NIC as an instance-level public IP), or that are part of a basic load balancer back-end pool, use [default SNAT](../load-balancer/load-balancer-outbound-connections.md) and have an IP address assigned by Azure to facilitate outbound connectivity. As a result, you might see flow log entries for flows from internet IP addresses, if the flow is destined to a port in the range of ports assigned for SNAT. While Azure won't allow these flows to the VM, the attempt is logged and appears in Network Watcher's NSG flow log by design. We recommend that unwanted inbound internet traffic be explicitly blocked with an NSG.
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Deutsche Telekom](https://cloud.telekom.de/de/infrastruktur/microsoft-azure/azure-networking)|[Network connectivity to Azure: 2-Hr assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerkoptimierung_2_stunden?search=telekom&page=1); [Cloud Transformation with Azure: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_cloudtransformation_1_tag?search=telekom&page=1)|[Managed ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_cloud_connect_implementation?search=telekom&page=1)|||[Azure Networking and Security: 1-Day Workshop](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_netzwerke_und_sicherheit_1_tag?search=telekom&page=1); [Intraselect SecureConnect: 1-Week Implementation](https://appsource.microsoft.com/de-de/marketplace/consulting-services/telekomdeutschlandgmbh1617272539503.azure_intraselect_secure_connect_implementation?tab=Overview)| |[Equinix](https://www.equinix.com/)|Cloud Optimized WAN Workshop|[ExpressRoute Connectivity Strategy Workshop](https://www.equinix.se/resources/data-sheets/expressroute-strategy-workshop); [Equinix Cloud Exchange Fabric](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/equinix.equinix_ecx_fabric?tab=Overview)|||| |[Federated Wireless](https://www.federatedwireless.com/caas/)||||[Federated Wireless Connectivity-as-a-Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/federatedwireless1580839623708.fw_caas?tab=Overview)|
-|[HCL](https://www.hcltech.com/)|[HCL Cloud Network Transformation- One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.clo?tab=Overview)|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
+|[HCL](https://www.hcltech.com/)|HCL Cloud Network Transformation- One Day Assessment|[1-Hour Briefing of HCL Azure ExpressRoute Service](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazureexpressroute?tab=Overview)|[HCL Azure Virtual WAN Services - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclmanagedazurevitualwan?search=vWAN&page=1)|[HCL Azure Private LTE offering - One Day Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/hcl-technologies.hclazureprivatelteoffering)|
|[IIJ](https://www.iij.ad.jp/biz/cloudex/)|[ExpressRoute implementation: 1-Hour Briefing](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxm_consulting)|[ExpressRoute: 2-Week Implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/internet_initiative_japan_inc.iij_cxmer_consulting)|||| |[Infosys](https://www.infosys.com/services/microsoft-cloud-business/pages/index.aspx)|[Infosys Integrate+ for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/infosysltd.infosys-integrate-for-azure?tab=Overview)||||| |[Interxion](https://www.interxion.com/products/interconnection/cloud-connect/support-your-cloud-strategy/)|[Azure Networking Assessment - Five Days](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/interxionhq.inxn_azure_networking_assessment)|||||
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
This article assumes that you're creating a new cluster. If you need a basic ARO
## Minimum Required FQDN - Proxied through ARO service
-This list is based on the list of FQDNs found in the OpenShift docs here: https://docs.openshift.com/container-platform/4.6/installing/install_config/configuring-firewall.html
+This list is based on the list of FQDNs found in the OpenShift docs here: https://docs.openshift.com/container-platform/latest/installing/install_config/configuring-firewall.html
The following FQDNs are proxied through the service, and will not need additional firewall rules. They are here for informational purposes.
The following FQDNs are proxied through the service, and will not need additiona
| **`*.table.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |

> [!NOTE]
-> For many customers exposing *.blob, *.table and other large address spaces creates a potential data exfiltration concern. You may want to consider using the [OpenShift Egress Firewall](https://docs.openshift.com/container-platform/4.6/networking/openshift_sdn/configuring-egress-firewall.html) to protect applications deployed in the cluster from reaching these destinations and use Azure Private Link for specific application needs.
+> For many customers exposing *.blob, *.table and other large address spaces creates a potential data exfiltration concern. You may want to consider using the [OpenShift Egress Firewall](https://docs.openshift.com/container-platform/latest/networking/openshift_sdn/configuring-egress-firewall.html) to protect applications deployed in the cluster from reaching these destinations and use Azure Private Link for specific application needs.
In OpenShift Container Platform, customers can opt out of reporting health and u
### OTHER POSSIBLE OPENSHIFT REQUIREMENTS
-- **`quay.io`**: May be used to download images from the Red Hat managed Quay registry. Also a possible fall-back target for ARO required system images.
+- **`*.quay.io`**: May be used to download images from the Red Hat managed Quay registry. Also a possible fall-back target for ARO required system images. If your firewall cannot use wildcards, you can find the [full list of subdomains in the Red Hat documentation.](https://docs.openshift.com/container-platform/latest/installing/install_config/configuring-firewall.html)
- **`mirror.openshift.com`**: Required to access mirrored installation content and images. This site is also a source of release image signatures.
- **`*.apps.<cluster_name>.<base_domain>`** (OR EQUIVALENT ARO URL): When allowlisting domains, this is used in your corporate network to reach applications deployed in OpenShift, or to access the OpenShift console.
- **`api.openshift.com`**: Used by the cluster for release graph parsing. https://access.redhat.com/labs/ocpupgradegraph/ can be used as an alternative.
postgresql Quickstart App Stacks Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/quickstart-app-stacks-nodejs.md
recommendations: false Previously updated : 08/11/2022 Last updated : 08/18/2022 # Node.js app to connect and query Hyperscale (Citus) [!INCLUDE[applies-to-postgresql-hyperscale](../includes/applies-to-postgresql-hyperscale.md)]
-In this article, you'll connect to a Hyperscale (Citus) server group using a Node.js application. We'll see how to use SQL statements to query, insert, update, and delete data in the database. The steps in this article assume that you're familiar with developing using Node.js, and are new to working with Hyperscale (Citus).
+In this article, you'll connect to a Hyperscale (Citus) server group using a Node.js application. We'll see how to use SQL statements to query, insert, update and delete data in the database. The steps in this article assume that you're familiar with developing using Node.js and are new to working with Hyperscale (Citus).
> [!TIP] >
-> The process of creating a NodeJS app with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
+> The process of creating a NodeJS application with Hyperscale (Citus) is the same as working with ordinary PostgreSQL.
## Setup
In this article, you'll connect to a Hyperscale (Citus) server group using a Nod
Install [pg](https://www.npmjs.com/package/pg), which is a PostgreSQL client for Node.js. To do so, run the node package manager (npm) for JavaScript from your command line to install the pg client.
-```dotnetcli
+```bash
npm install pg
```

Verify the installation by listing the packages installed.
-```dotnetcli
+```bash
npm list
```

### Get database connection information
-To get the database credentials, you can use the **Connection strings** tab in the Azure portal. See below screenshot.
+To get the database credentials, you can use the **Connection strings** tab in the Azure portal. See the screenshot below.
![Diagram showing NodeJS connection string.](../media/howto-app-stacks/01-python-connection-string.png)

### Running JavaScript code in Node.js
-You may launch Node.js from the Bash shell, Terminal, or Windows Command Prompt by typing `node`, then run the example JavaScript code interactively by copy and pasting it onto the prompt. Alternatively, you may save the JavaScript code into a text file and launch `node filename.js` with the file name as a parameter to run it.
+You may launch Node.js from the Bash shell, Terminal or Windows Command Prompt by typing `node`, then run the example JavaScript code interactively by copying and pasting it at the prompt. Alternatively, you may save the JavaScript code into a text file and launch `node filename.js` with the file name as a parameter to run it.
-## Connect, create table, insert data
+## Connect, create table and insert data
All examples in this article need to connect to the database. Let's put the connection logic into its own module for reuse. We'll use the
-[pg.Client](https://github.com/brianc/node-postgres/wiki/Client) object to
+[pg.Client](https://node-postgres.com/) object to
interface with the PostgreSQL server. [!INCLUDE[why-connection-pooling](includes/why-connection-pooling.md)]
-Create a `citus.js` with the common connection code:
+Create a folder called `db`, and inside it create a `citus.js` file with the common connection code:
```javascript
-// citus.js
+/**
+* file: db/citus.js
+*/
const { Pool } = require('pg');
-module.exports = new Promise((resolve, reject) => {
- const pool = new Pool({
- host: 'c.citustest.postgres.database.azure.com',
- port: 5432,
- user: 'citus',
- password: 'Password123$',
- database: 'citus',
- ssl: true,
- connectionTimeoutMillis: 0,
- idleTimeoutMillis: 0,
- min: 10,
- max: 20,
- });
-
- resolve({ pool });
+
+const pool = new Pool({
+ max: 300,
+ connectionTimeoutMillis: 5000,
+
+ host: 'c.citustest.postgres.database.azure.com',
+ port: 5432,
+ user: 'citus',
+ password: 'Password123$',
+ database: 'citus',
+ ssl: true,
});+
+module.exports = {
+ pool,
+};
```

Next, use the following code to connect and load the data using CREATE TABLE
-and INSERT INTO SQL statements.
+and INSERT INTO SQL statements.
```javascript
-//create.js
+/**
+* file: create.js
+*/
-async function queryDatabase() {
-
- const q = `
- DROP TABLE IF EXISTS pharmacy;
- CREATE TABLE pharmacy (pharmacy_id integer,pharmacy_name text,city text,state text,zip_code integer);
- INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);
- INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);
- CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);
- `;
- const { pool } = await postgresql;
-
- const client = await pool.connect();
-
- var stream = client.query(q).then(() => {
- console.log('Created tables and inserted rows');
- client.end(console.log('Closed client connection'));
- })
- .catch(err => console.log(err))
- .then(() => {
- console.log('Finished execution, exiting now');
- process.exit();
- });
- await pool.end();
+const { pool } = require('./db/citus');
+async function queryDatabase() {
+ const queryString = `
+ DROP TABLE IF EXISTS pharmacy;
+ CREATE TABLE pharmacy (pharmacy_id integer,pharmacy_name text,city text,state text,zip_code integer);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (0,'Target','Sunnyvale','California',94001);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (1,'CVS','San Francisco','California',94002);
+ INSERT INTO pharmacy (pharmacy_id,pharmacy_name,city,state,zip_code) VALUES (2,'Walgreens','San Diego','California',94003);
+ CREATE INDEX idx_pharmacy_id ON pharmacy(pharmacy_id);
+ `;
+
+ try {
+ /* Real application code would probably request a dedicated client with
+ pool.connect() and run multiple queries with the client. In this
+ example, we're running only one query, so we use the pool.query()
+ helper method to run it on the first available idle client.
+ */
+
+ await pool.query(queryString);
+ console.log('Created the Pharmacy table and inserted rows.');
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
}

queryDatabase();
```
+To execute the code above, run `node create.js`. This command will create a new "pharmacy" table and insert some sample data.
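As a quick usage check (assuming the placeholder connection settings in `db/citus.js` have been replaced with your own server group values), the run should log the message from the script:

```bash
node create.js
# Expected output:
# Created the Pharmacy table and inserted rows.
```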
+ ## Super power of Distributed Tables
-Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](quickstart-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
+Hyperscale (Citus) gives you [the super power of distributing tables](overview.md#the-superpower-of-distributed-tables) across multiple nodes for scalability. The command below enables you to distribute a table. You can learn more about `create_distributed_table` and the distribution column [here](howto-build-scalable-apps-concepts.md#distribution-column-also-known-as-shard-key).
> [!TIP] >
Hyperscale (Citus) gives you [the super power of distributing tables](overview.m
Use the following code to connect to the database and distribute the table.

```javascript
-const postgresql = require('./citus');
+/**
+* file: distribute-table.js
+*/
+
+const { pool } = require('./db/citus');
-// Connect with a connection pool.
async function queryDatabase() {
- const q = `select create_distributed_table('pharmacy','pharmacy_id');`;
-
- const { pool } = await postgresql;
- // resolve the pool.connect() promise
- const client = await pool.connect();
- var stream = await client.query(q).then(() => {
- console.log('Distributed pharmacy table');
- client.end(console.log('Closed client connection'));
- })
- .catch(err => console.log(err))
- .then(() => {
- console.log('Finished execution, exiting now');
- process.exit();
- });
- await pool.end();
+ const queryString = `
+ SELECT create_distributed_table('pharmacy', 'pharmacy_id');
+ `;
+
+ try {
+ await pool.query(queryString);
+ console.log('Distributed pharmacy table.');
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
}
-// Use a self-calling function so we can use async / await.
queryDatabase();
```
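To confirm the table is now distributed, you can query Citus metadata directly. A minimal sketch using `psql`, assuming it's installed locally and the server group runs a Citus version that provides the `citus_tables` view (connection values are the same placeholders used above; `psql` prompts for the password):

```bash
# Show distributed tables and their distribution columns.
psql "host=c.citustest.postgres.database.azure.com port=5432 dbname=citus user=citus sslmode=require" \
  -c "SELECT table_name, distribution_column FROM citus_tables;"
```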
queryDatabase();
Use the following code to connect and read the data using a SELECT SQL statement.

```javascript
-// read.js
+/**
+* file: read.js
+*/
+
+const { pool } = require('./db/citus');
-const postgresql = require('./citus');
-// Connect with a connection pool.
async function queryDatabase() {
- const q = 'SELECT * FROM pharmacy;';
- const { pool } = await postgresql;
- // resolve the pool.connect() promise
- const client = await pool.connect();
- var stream = await client.query(q).then(res => {
- const rows = res.rows;
- rows.map(row => {
- console.log(`Read: ${JSON.stringify(row)}`);
- });
- process.exit();
- })
- .catch(err => {
- console.log(err);
- throw err;
- })
- .then(() => {
- console.log('Finished execution, exiting now');
- process.exit();
- });
- await pool.end();
+ const queryString = `
+ SELECT * FROM pharmacy;
+ `;
+
+ try {
+ const res = await pool.query(queryString);
+ console.log(res.rows);
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
} queryDatabase();
queryDatabase();
Use the following code to connect and update data using an UPDATE SQL statement.

```javascript
-//update.js
+/**
+* file: update.js
+*/
-const postgresql = require('./citus');
+const { pool } = require('./db/citus');
-// Connect with a connection pool.
async function queryDatabase() {
- const q = `
- UPDATE pharmacy SET city = 'guntur'
- WHERE pharmacy_id = 1 ;
- `;
- const { pool } = await postgresql;
- // resolve the pool.connect() promise
- const client = await pool.connect();
- var stream = await client.query(q).then(result => {
- console.log('Update completed');
- console.log(`Rows affected: ${result.rowCount}`);
- process.exit();
- })
- .catch(err => {
- console.log(err);
- throw err;
- });
- await pool.end();
+ const queryString = `
+ UPDATE pharmacy SET city = 'Long Beach'
+ WHERE pharmacy_id = 1;
+ `;
+
+ try {
+ const result = await pool.query(queryString);
+ console.log('Update completed.');
+ console.log(`Rows affected: ${result.rowCount}`);
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
} queryDatabase();
queryDatabase();
Use the following code to connect and delete data using a DELETE SQL statement.

```javascript
-//delete.js
+/**
+* file: delete.js
+*/
-const postgresql = require('./citus');
+const { pool } = require('./db/citus');
-// Connect with a connection pool.
async function queryDatabase() {
- const q = `DELETE FROM pharmacy WHERE pharmacy_name = 'Target';`;
- const { pool } = await postgresql;
- // resolve the pool.connect() promise
- const client = await pool.connect();
- var stream = await client.query(q).then(result => {
- console.log('Delete completed');
- console.log(`Rows affected: ${result.rowCount}`);
- })
- .catch(err => {
- console.log(err);
- throw err;
- })
- .then(() => {
- console.log('Finished execution, exiting now');
- process.exit();
- });
- await pool.end();
+ const queryString = `
+ DELETE FROM pharmacy
+ WHERE pharmacy_name = 'Target';
+ `;
+
+ try {
+ const result = await pool.query(queryString);
+ console.log('Delete completed.');
+ console.log(`Rows affected: ${result.rowCount}`);
+ } catch (err) {
+ console.log(err.stack);
+ } finally {
+ pool.end();
+ }
} queryDatabase();
The COPY command can yield [tremendous throughput](https://www.citusdata.com/blo
### COPY command to load data from a file
-Before running below code, install
+Before running the code below, install
[pg-copy-streams](https://www.npmjs.com/package/pg-copy-streams). To do so, run the node package manager (npm) for JavaScript from your command line.
-```dotnetcli
+```bash
npm install pg-copy-streams
```
The following code is an example for copying data from a CSV file to a database
It requires the file [pharmacies.csv](https://download.microsoft.com/download/d/8/d/d8d5673e-7cbf-4e13-b3e9-047b05fc1d46/pharmacies.csv).

```javascript
-//copycsv.js
+/**
+* file: copycsv.js
+*/
-const inputFile = require('path').join(__dirname, '/pharmacies.csv')
+const inputFile = require('path').join(__dirname, '/pharmacies.csv');
+const fileStream = require('fs').createReadStream(inputFile);
const copyFrom = require('pg-copy-streams').from;
-const postgresql = require('./citus');
-
-// Connect with a connection pool.
-async function queryDatabase() {
- const { pool } = await postgresql;
- // resolve the pool.connect() promise
- const client = await pool.connect();
+const { pool } = require('./db/citus');
- const q = `
- COPY pharmacy FROM STDIN WITH (FORMAT CSV, HEADER true, NULL '');
+async function importCsvDatabase() {
+ return new Promise((resolve, reject) => {
+ const queryString = `
+ COPY pharmacy FROM STDIN WITH (FORMAT CSV, HEADER true, NULL '');
`;
- var fileStream = require('fs').createReadStream(inputFile)
- fileStream.on('error', (error) => {
- console.log(`Error in reading file: ${error}`)
- process.exit();
- });
-
- var stream = await client.query(copyFrom(q))
- .on('error', (error) => {
- console.log(`Error in copy command: ${error}`)
- })
- .on('end', () => {
- // TODO: this is never reached
- console.log(`Completed loading data into pharmacy`)
- client.end()
- process.exit();
- });
-
- console.log('Copying from CSV...');
- fileStream.pipe(stream);
-
- console.log("inserted csv successfully");
-
- await pool.end();
- process.exit();
+ fileStream.on('error', reject);
+
+ pool
+ .connect()
+ .then(client => {
+ const stream = client
+ .query(copyFrom(queryString))
+ .on('error', reject)
+ .on('end', () => {
+ reject(new Error('Connection closed!'));
+ })
+ .on('finish', () => {
+ client.release();
+ resolve();
+ });
+
+ fileStream.pipe(stream);
+ })
+ .catch(err => {
+ reject(new Error(err));
+ });
+ });
}
-queryDatabase();
+(async () => {
+ console.log('Copying from CSV...');
+ await importCsvDatabase();
+ await pool.end();
+ console.log('Inserted csv successfully');
+})();
```

### COPY command to load data in-memory
-Before running the below code, install
-[through2](https://www.npmjs.com/package/through2). This package allows pipe
+Before running the code below, install
+the [through2](https://www.npmjs.com/package/through2) package. This package allows pipe
chaining. Install it with node package manager (npm) for JavaScript like this:
-```dotnetcli
+```bash
npm install through2
```

The following code is an example for copying in-memory data to a table.

```javascript
-//copyinmemory.js
+/**
+ * file: copyinmemory.js
+ */
+ const through2 = require('through2'); const copyFrom = require('pg-copy-streams').from;
-const postgresql = require('./citus');
+const { pool } = require('./db/citus');
+
+async function importInMemoryDatabase() {
+ return new Promise((resolve, reject) => {
+ pool
+ .connect()
+ .then(client => {
+ const stream = client
+ .query(copyFrom('COPY pharmacy FROM STDIN'))
+ .on('error', reject)
+ .on('end', () => {
+ reject(new Error('Connection closed!'));
+ })
+ .on('finish', () => {
+ client.release();
+ resolve();
+ });
+
+ const internDataset = [
+ ['100', 'Target', 'Sunnyvale', 'California', '94001'],
+ ['101', 'CVS', 'San Francisco', 'California', '94002'],
+ ];
+
+ let started = false;
+ const internStream = through2.obj((arr, _enc, cb) => {
+ const rowText = (started ? '\n' : '') + arr.join('\t');
+ started = true;
+ cb(null, rowText);
+ });
-// Connect with a connection pool.
-async function queryDatabase() {
- const { pool } = await postgresql;
- // resolve the pool.connect() promise
- const client = await pool.connect();
- var stream = client.query(copyFrom(`COPY pharmacy FROM STDIN `));
-
- var interndataset = [
- ['0', 'Target', 'Sunnyvale', 'California', '94001'],
- ['1', 'CVS', 'San Francisco', 'California', '94002']
- ];
-
- var started = false;
- var internmap = through2.obj(function (arr, enc, cb) {
- var rowText = (started ? '\n' : '') + arr.join('\t');
- started = true;
- cb(null, rowText);
- });
- interndataset.forEach(function (r) { internmap.write(r); })
-
- internmap.end();
- internmap.pipe(stream);
- console.log("inserted inmemory data successfully ");
-
- await pool.end();
- process.exit();
-}
+ internStream.on('error', reject).pipe(stream);
-queryDatabase();
+ internDataset.forEach((record) => {
+ internStream.write(record);
+ });
+
+ internStream.end();
+ })
+ .catch(err => {
+ reject(new Error(err));
+ });
+ });
+}
+(async () => {
+ await importInMemoryDatabase();
+ await pool.end();
+ console.log('Inserted inmemory data successfully.');
+})();
```

## Next steps
postgresql Concepts Single To Flexible https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/concepts-single-to-flexible.md
Previously updated : 08/12/2022 Last updated : 08/17/2022
The migration tool is agnostic of source and target PostgreSQL versions. Here ar
| Source Postgres version (Single Server) | Suggested Target Postgres version (Flexible server) | Remarks |
|:-|:-|:--|
-| Postgres 9.5 (Retired) | Postgres 12 | You can even directly migrate to Postgres 14. Verify your application compatibility. |
-| Postgres 9.6 (Retired) | Postgres 12 | You can even directly migrate to Postgres 14. Verify your application compatibility. |
+| Postgres 9.5 (Retired) | Postgres 13 | You can even directly migrate to Postgres 14. Verify your application compatibility. |
+| Postgres 9.6 (Retired) | Postgres 13 | You can even directly migrate to Postgres 14. Verify your application compatibility. |
| Postgres 10 (Retiring Nov'22) | Postgres 14 | Verify your application compatibility. |
| Postgres 11 | Postgres 14 | Verify your application compatibility. |
| Postgres 11 | Postgres 11 | You can choose to migrate to the same version in Flexible Server. You can then upgrade to a higher version in Flexible Server. |
The migration tool is agnostic of source and target PostgreSQL versions. Here ar
>[!NOTE]
> Migration initiation from Single Server is enabled in preview in these regions: Central US, West US, South Central US, North Central US, East Asia, Switzerland North, Australia South East, UAE North, UK West and Canada East. However, you can use the migration wizard from the Flexible Server side in all regions.
+>[!IMPORTANT]
+> We continue to add support for more regions with Flexible Server. If Flexible Server is not available in your preferred region, you can either choose an alternative region or wait until Flexible Server is enabled in that region.
+
## Overview

The migration tool provides an inline experience to migrate databases from Single Server (source) to Flexible Server (target).
private-link Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/availability.md
The following tables list the Private Link services and the regions where they'r
|:-|:--|:-|:--|
|Azure Machine Learning | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure Machine Learning.](../machine-learning/how-to-configure-private-link.md) |
|Azure Bot Service | All public regions | Supported only on Direct Line App Service extension | GA </br> [Learn how to create a private endpoint for Azure Bot Service](/azure/bot-service/dl-network-isolation-concept) |
-| Azure Cognitive Services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](/azure/cognitive-services/cognitive-services-virtual-networks#use-private-endpoints) |
+| Azure Cognitive Services | All public regions<br/>All Government regions | | GA <br/> [Use private endpoints.](../cognitive-services/cognitive-services-virtual-networks.md#use-private-endpoints) |
### Analytics
The following tables list the Private Link services and the regions where they'r
|Supported services |Available regions | Other considerations | Status |
|:-|:--|:-|:--|
| Azure IoT Hub | All public regions | | GA <br/> [Learn how to create a private endpoint for Azure IoT Hub.](../iot-hub/virtual-network-support.md) |
-| Azure Digital Twins | All public regions supported by Azure Digital Twins | | Preview <br/> [Learn how to create a private endpoint for Azure Digital Twins.](/azure/api-management/private-endpoint) |
+| Azure Digital Twins | All public regions supported by Azure Digital Twins | | Preview <br/> [Learn how to create a private endpoint for Azure Digital Twins.](../api-management/private-endpoint.md) |
### Management and Governance
The following tables list the Private Link services and the regions where they'r
| Azure Automation | All public regions<br/> All Government regions | | GA </br> [Learn how to create a private endpoint for Azure Automation.](../automation/how-to/private-link-security.md)|
|Azure Backup | All public regions<br/> All Government regions | | GA <br/> [Learn how to create a private endpoint for Azure Backup.](../backup/private-endpoints.md) |
| Microsoft Purview | Southeast Asia, Australia East, Brazil South, North Europe, West Europe, Canada Central, East US, East US 2, EAST US 2 EUAP, South Central US, West Central US, West US 2, Central India, UK South | [Select for known limitations](../purview/catalog-private-link-troubleshoot.md#known-limitations) | GA <br/> [Learn how to create a private endpoint for Microsoft Purview.](../purview/catalog-private-link.md) |
-| Azure Migrate | All public regions<br/> All Government regions | | GA </br> [Discover and assess servers for migration using Private Link.](/azure/migrate/discover-and-assess-using-private-endpoints) |
+| Azure Migrate | All public regions<br/> All Government regions | | GA </br> [Discover and assess servers for migration using Private Link.](../migrate/discover-and-assess-using-private-endpoints.md) |
### Security
The following tables list the Private Link services and the regions where they'r
Learn more about Azure Private Link service: - [What is Azure Private Link?](private-link-overview.md)-- [Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md)
+- [Create a Private Endpoint using the Azure portal](create-private-endpoint-portal.md)
private-link Private Endpoint Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-overview.md
The following table shows an example of a dual port NSG rule:
- The following services may require all destination ports to be open when leveraging a private endpoint and adding NSG security filters:
- - Cosmos DB - For more information see, [Service port ranges](/azure/cosmos-db/sql/sql-sdk-connection-modes#service-port-ranges).
+ - Cosmos DB - For more information, see [Service port ranges](../cosmos-db/sql/sql-sdk-connection-modes.md#service-port-ranges).
### UDR
The following table shows an example of a dual port NSG rule:
## Next steps - For more information about private endpoints and Private Link, see [What is Azure Private Link?](private-link-overview.md).-- To get started with creating a private endpoint for a web app, see [Quickstart: Create a private endpoint by using the Azure portal](create-private-endpoint-portal.md).
+- To get started with creating a private endpoint for a web app, see [Quickstart: Create a private endpoint by using the Azure portal](create-private-endpoint-portal.md).
purview How To Data Owner Policy Authoring Generic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-owner-policy-authoring-generic.md
Before authoring data policies in the Microsoft Purview governance portal, you'l
## Create a new policy

This section describes the steps to create a new policy in Microsoft Purview.
-Ensure you have the *Policy Author* permission as described [here](/azure/purview/how-to-data-owner-policy-authoring-generic.md#permissions-for-policy-authoring-and-publishing)
+Ensure you have the *Policy Author* permission as described [here](#permissions-for-policy-authoring-and-publishing)
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
Now that you have created your policy, you will need to publish it for it to bec
## Publish a policy

A newly created policy is in the **draft** state. The process of publishing associates the new policy with one or more data sources under governance. This is called "binding" a policy to a data source.
-Ensure you have the *Data Source Admin* permission as described [here](/azure/purview/how-to-data-owner-policy-authoring-generic.md#permissions-for-policy-authoring-and-publishing)
+Ensure you have the *Data Source Admin* permission as described [here](#permissions-for-policy-authoring-and-publishing)
The steps to publish a policy are as follows:
The steps to publish a policy are as follows:
## Update or delete a policy

Steps to update or delete a policy in Microsoft Purview are as follows.
-Ensure you have the *Policy Author* permission as described [here](/azure/purview/how-to-data-owner-policy-authoring-generic.md#permissions-for-policy-authoring-and-publishing)
+Ensure you have the *Policy Author* permission as described [here](#permissions-for-policy-authoring-and-publishing)
1. Sign in to the [Microsoft Purview governance portal](https://web.purview.azure.com/resource/).
Ensure you have the *Policy Author* permission as described [here](/azure/purvie
For specific guides on creating policies, you can follow these tutorials: - [Enable Microsoft Purview data owner policies on all data sources in a subscription or a resource group](./how-to-data-owner-policies-resource-group.md)-- [Enable Microsoft Purview data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
+- [Enable Microsoft Purview data owner policies on an Azure Storage account](./how-to-data-owner-policies-storage.md)
purview How To Deploy Profisee Purview Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-deploy-profisee-purview-integration.md
Previously updated : 07/15/2022 Last updated : 08/18/2022 # Microsoft Purview - Profisee MDM Integration
-Master data management (MDM) is a key pillar of any unified data governance solution. Microsoft Purview supports master data management with our partner [Profisee](https://profisee.com/profisee-advantage/). This tutorial compiles reference and integration deployment materials in one place; firstly to put Purview Unified Data Governance and MDM in the context of an Azure data estate; and more importantly, to get you started on your MDM journey with Microsoft Purview through our integration with Profisee.
+Master data management (MDM) is a key pillar of any unified data governance solution. Microsoft Purview supports master data management with our partner [Profisee](https://profisee.com/profisee-advantage/). This tutorial compiles reference and integration deployment materials in one place; firstly to put Microsoft Purview Unified Data Governance and MDM in the context of an Azure data estate; and more importantly, to get you started on your MDM journey with Microsoft Purview through our integration with Profisee.
## Why Data Governance and Master Data Management (MDM) are essential to the modern Data Estate?
More Details on [Profisee MDM](https://profisee.com/master-data-management-what-
Microsoft Purview and Profisee MDM are often discussed as being a ‘Better Together’ value proposition due to the complementary nature of the solutions. Microsoft Purview excels at cataloging data sources and defining data standards, while Profisee MDM enforces those standards across master data drawn from multiple siloed sources. It's clear not only that either system has independent value to offer, but also that each reinforces the other for a natural ‘Better Together’ synergy that goes deeper than the independent offerings.
- Common technical foundation – Profisee was born out of Microsoft technologies using common tools, databases & infrastructure, so any ‘Microsoft shop’ will find the Profisee solution familiar. In fact, for many years Profisee MDM was built on Microsoft Master Data Services (MDS) and now that MDS is nearing end of life, Profisee is the premier upgrade/replacement solution for MDS.
- - Developer collaboration and joint development ΓÇô Profisee and Purview developers have collaborated extensively to ensure a good complementary fit between their respective solutions to deliver a seamless integration that meets the needs of their customers.
- - Joint sales and deployments ΓÇô Profisee has more MDM deployments on Azure, and jointly with Purview, than any other MDM vendor, and can be purchased through Azure Marketplace. In FY2023 Profisee is the only MDM vendor with a Top Tier Microsoft partner certification available as an IaaS/CaaS or SaaS offering through Azure Marketplace.
+ - Developer collaboration and joint development – Profisee and Microsoft Purview developers have collaborated extensively to ensure a good complementary fit between their respective solutions to deliver a seamless integration that meets the needs of their customers.
+ - Joint sales and deployments – Profisee has more MDM deployments on Azure, and jointly with Microsoft Purview, than any other MDM vendor, and can be purchased through Azure Marketplace. In FY2023 Profisee is the only MDM vendor with a Top Tier Microsoft partner certification available as an IaaS/CaaS or SaaS offering through Azure Marketplace.
- Rapid and reliable deployment – Rapid and reliable deployment is critical for any enterprise software and Gartner points out that Profisee has more implementations taking under 90 days than any other MDM vendor.
- - Inherently multi-domain ΓÇô Profisee offers an inherently multi-domain approach to MDM where there are no limitations to the number of specificity of master data domains. This design aligns well with customers looking to modernize their data estate who may start with a limited number of domains, but ultimately will benefit from maximizing domain coverage (matched to their data governance coverage) across their whole data estate.
+ - Inherently multi-domain – Profisee offers a multi-domain approach to MDM with no limitations on the number or specificity of master data domains. This design aligns well with customers looking to modernize their data estate who may start with a limited number of domains, but ultimately will benefit from maximizing domain coverage (matched to their data governance coverage) across their whole data estate.
- Engineered for Azure – Profisee has been engineered to be cloud-native with options for both SaaS and managed IaaS/CaaS deployments on Azure (see next section)

## Profisee MDM: Deployment Flexibility – Turnkey SaaS Experience or IaaS/CaaS Flexibility
Profisee MDM has been engineered for a cloud-native experience and may be deploy
### Turnkey SaaS Experience

A fully managed instance of Profisee MDM hosted by Profisee in the Azure cloud. Full turn-key service for the easiest and fastest MDM deployment. Profisee MDM SaaS can be purchased on [Azure Marketplace Profisee MDM - SaaS](https://portal.azure.com/#view/Microsoft_Azure_Marketplace/GalleryItemDetailsBladeNopdl/id/profisee.profisee_saas_private/product~/).
-- **Platform and Management in one** – Leverage a true, end-to-end SaaS platform with one agreement and no third parties.
+- **Platform and Management in one** – Use a true, end-to-end SaaS platform with one agreement and no third parties.
- **Industry-leading Cloud service** – Hosted on Azure for industry-leading scalability and availability.
- **The fastest path to Trusted Data** – Deploy in minutes with minimal technical knowledge. Leave the networking, firewalls and storage to us so you can deploy in minutes.
The reference architecture shows how both Microsoft Purview and Profisee MDM wor
:::image type="content" alt-text="Diagram of Profisee-Purview Reference Architecture." source="./medim-reference-architecture.png":::
-1. Scan & classify metadata from LOB systems ΓÇô uses pre-built Purview connectors to scan data sources and populate the Purview Data Catalog
-2. Publish master data model to Purview ΓÇô any master data entities created in Profisee MDM are seamlessly published into Purview to further populate the Purview Data Catalog and ensure Purview is ΓÇÿawareΓÇÖ of this critical source of data
-3. Enrich master data model with governance details ΓÇô Governance Data Stewards can enrich master data entity definitions with data dictionary and glossary information as well as ownership and sensitive data classifications, etc. in Purview
-4. Leverage enriched governance data for data stewardship ΓÇô any definitions and metadata available on Purview are visible in real-time in Profisee as guidance for the MDM Data Stewards
+1. Scan & classify metadata from LOB systems – uses pre-built Microsoft Purview connectors to scan data sources and populate the Microsoft Purview Data Catalog
+2. Publish master data model to Microsoft Purview – any master data entities created in Profisee MDM are seamlessly published into Microsoft Purview to further populate the Microsoft Purview Data Catalog and ensure Microsoft Purview is ‘aware’ of this critical source of data
+3. Enrich master data model with governance details – Governance Data Stewards can enrich master data entity definitions with data dictionary and glossary information as well as ownership and sensitive data classifications, etc. in Microsoft Purview
+4. Apply enriched governance data for data stewardship – any definitions and metadata available in Microsoft Purview are visible in real-time in Profisee as guidance for the MDM Data Stewards
5. Load source data from business applications – Azure Data Factory extracts data from source systems with 100+ pre-built connectors and/or REST gateway
- Transactional and unstructured data is loaded to downstream analytics solution ΓÇô All ΓÇÿrawΓÇÖ source data can be loaded to analytics database such as Synapse (Synapse is generally the preferred analytic database but other such as Snowflake are also common). Analysis on this raw information without proper master (ΓÇÿgoldenΓÇÖ) data will be subject to inaccuracy as data overlaps, mismatches and conflicts won't yet have been resolved.
+ Transactional and unstructured data is loaded to downstream analytics solution ΓÇô All ΓÇÿrawΓÇÖ source data can be loaded to analytics database such as Synapse (Synapse is generally the preferred analytic database but others such as Snowflake are also common). Analysis on this raw information without proper master (ΓÇÿgoldenΓÇÖ) data will be subject to inaccuracy as data overlaps, mismatches and conflicts won't yet have been resolved.
7. Master data from source systems is loaded to the Profisee MDM application – Multiple streams of ‘master’ data are loaded to Profisee MDM. Master data is the data that defines a domain entity such as customer, product, asset, location, vendor, patient, household, menu item, ingredient, and so on. This data is typically present in multiple systems, and resolving differing definitions and matching and merging this data across systems is critical to the ability to use any cross-system data in a meaningful way.
-8. Master data is standardized, matched, merged, enriched and validated according to governance rules ΓÇô Although data quality and governance rules may be defined in other systems (such as Purview), Profisee MDM is where they're enforced. Source records are matched and merged both within and across source systems to create the most complete and correct record possible. Data quality rules check each record for compliance with business and technical requirements.
+8. Master data is standardized, matched, merged, enriched and validated according to governance rules – Although data quality and governance rules may be defined in other systems (such as Microsoft Purview), Profisee MDM is where they're enforced. Source records are matched and merged both within and across source systems to create the most complete and correct record possible. Data quality rules check each record for compliance with business and technical requirements.
9. Extra data stewardship to review and confirm matches, data quality, and data validation issues, as required – Any record failing validation or matching with only a low probability score is subject to remediation. To remediate failed validations, a workflow process assigns records requiring review to Data Stewards who are experts in their business data domain. Once records have been verified or corrected, they're ready to use as a ‘golden record’ master.
10. Direct access to curated master data including secure data access for reporting in Power BI – Power BI users may report directly on master data through a dedicated Power BI Connector that recognizes and enforces role-based security and hides various system fields for simplicity.
11. High-quality, curated master data published to downstream analytics solution – Verified master data can be published out to any target system using Azure Data Factory. Master data, including the parent-child lineage of merged records, is published into Azure Synapse (or wherever the ‘raw’ source transactional data was loaded). With this combination of properly curated master data plus transactional data, we have a solid foundation of trusted data for further analysis.
The reference architecture shows how both Microsoft Purview and Profisee MDM wor
- [MDM on Azure Overview](/azure/cloud-adoption-framework/scenarios/cloud-scale-analytics/govern-master-data)

## Microsoft Purview - Profisee integration deployment on Azure Kubernetes Service (AKS)
-Go to [https://github.com/Profisee/kubernetes](https://github.com/Profisee/kubernetes) and select Microsoft Purview [**Azure ARM**]. The deployment process detailed below is owned and hosted by you on your Azure subscription as an IaaS / CaaS (container-as-a-service) AKS Cluster.
-1. [Create a user-assigned managed identity in Azure](/azure/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
+1. Get the license file from Profisee by raising a support ticket on [https://support.profisee.com/](https://support.profisee.com/). The only prerequisite for this step is that you pre-determine the DNS-resolved URL of your Profisee setup on Azure. In other words, keep the DNS HOST NAME of the load balancer used in the deployment. It will be something like "[profisee_name].[region].cloudapp.azure.com".
+For example, DNSHOSTNAME="purviewprofisee.southcentralus.cloudapp.azure.com". Supply this DNSHOSTNAME to Profisee support when you raise the support ticket, and Profisee will respond with the license file. You'll need to supply this file during the configuration steps below.
+
+1. [Create a user-assigned managed identity in Azure](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md#create-a-user-assigned-managed-identity). You must have a managed identity created to run the deployment. This managed identity must have the following permissions when running a deployment. After the deployment is done, the managed identity can be deleted. Based on your ARM template choices, you'll need some or all of the following roles and permissions assigned to your managed identity:
    - Contributor role to the resource group where AKS will be deployed. It can either be assigned directly to the resource group **OR** at the subscription level and down.
    - DNS Zone Contributor role to the particular DNS zone where the entry will be created **OR** Contributor role to the DNS Zone resource group. This DNS role is needed only if updating DNS hosted in Azure.
    - Application Administrator role in Azure Active Directory so the required permissions that are needed for the application registration can be assigned.
    - Managed Identity Contributor and User Access Administrator at the subscription level. Required in order for the ARM template managed identity to be able to create a Key Vault specific managed identity that will be used by Profisee to pull the values stored in the Key Vault.

    A CLI sketch of these role assignments follows the screenshot below.
- - Data Curator Role added for the Microsoft Purview account for the Microsoft Purview specific application registration.
+
+ :::image type="content" alt-text="Screenshot of Profisee Managed Identity Azure Role Assignments." source="./media/how-to-deploy-profisee-purview/profisee-managed-identity-azure-role-assignments.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-managed-identity-azure-role-assignments.png":::
+
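As a rough illustration only, the Azure role assignments listed above could be granted to the deployment identity with the Azure CLI. The identity, resource group, and DNS zone resource group names below are placeholders, and the Azure Active Directory Application Administrator role still has to be granted separately in Azure AD:

```bash
# Create the user-assigned managed identity (placeholder names).
az identity create --name profisee-deploy-identity --resource-group profisee-rg

# Capture its service principal object ID for role assignments.
principalId=$(az identity show --name profisee-deploy-identity \
  --resource-group profisee-rg --query principalId --output tsv)

# Contributor on the resource group where AKS will be deployed.
az role assignment create --assignee "$principalId" --role "Contributor" \
  --resource-group profisee-rg

# DNS Zone Contributor on the DNS zone resource group (only if DNS is hosted in Azure).
az role assignment create --assignee "$principalId" --role "DNS Zone Contributor" \
  --resource-group profisee-dns-rg

# Managed Identity Contributor and User Access Administrator at the subscription level.
subscriptionId=$(az account show --query id --output tsv)
az role assignment create --assignee "$principalId" --role "Managed Identity Contributor" \
  --scope "/subscriptions/$subscriptionId"
az role assignment create --assignee "$principalId" --role "User Access Administrator" \
  --scope "/subscriptions/$subscriptionId"
```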
+1. [Create an application registration](/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) that will act as the login identity once Profisee is installed. It needs to be a part of the Azure Active Directory that will be used to sign in to Profisee. Save the **Application (client) ID** for use later.
+ - Set authentication to match the settings below:
+ - Support ID tokens (used for implicit and hybrid flows)
+ - Set the redirect URL to: https://\<your-deployment-url>/profisee/auth/signin-microsoft
+ - Your deployment URL is the URL you'll have provided Profisee in step 1
+
+1. [Create a service principal](/active-directory/develop/howto-create-service-principal-portal#register-an-application-with-azure-ad-and-create-a-service-principal) that Microsoft Purview will use to take some actions on itself during this Profisee deployment. To create a service principal, create an application like you did in the previous step, then [create an application secret](/active-directory/develop/howto-create-service-principal-portal#option-2-create-a-new-application-secret). Save the **Object ID** for the application, and the **Value** of the secret you created for later use.
+ - Give this service principal (using the name or Object ID to locate it) **Data Curator** permissions on the root collection of your Microsoft Purview account.
1. Go to [https://github.com/Profisee/kubernetes](https://github.com/Profisee/kubernetes) and select Microsoft Purview [**Azure ARM**](https://github.com/profisee/kubernetes/blob/master/Azure-ARM/README.md#deploy-profisee-platform-on-to-aks-using-arm-template).
   - The ARM template will deploy Profisee on a load balanced AKS (Azure Kubernetes Service) infrastructure using an ingress controller.
- - The readme includes troubleshooting steps.l
+ - The readme includes troubleshooting steps.
- Read all the steps and troubleshooting wiki page carefully.
-1. Get the license file from Profisee by raising a support ticket on [https://support.profisee.com/](https://support.profisee.com/). Only pre-requisite for this step is your need to pre-determine the DNS resolved URL your Profisee setup on Azure. In other words, keep the DNS HOST NAME of the load balancer used in the deployment. It will be something like "[profisee_name].[region].cloudapp.azure.com".
-For example, DNSHOSTNAME="purviewprofisee.southcentralus.cloudapp.azure.com". Supply this DNSHOSTNAME to Profisee support when you raise the support ticket and Profisee will revert with the license file. You'll need to supply this file during the next configuration steps below.
- 1. Select "Deploy to Azure" [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fprofisee%2Fkubernetes%2Fmaster%2FAzure-ARM%2Fazuredeploy.json/createUIDefinitionUri/https%3A%2F%2Fraw.githubusercontent.com%2Fprofisee%2Fkubernetes%2Fmaster%2FAzure-ARM%2FcreateUIDefinition.json)
For example, DNSHOSTNAME="purviewprofisee.southcentralus.cloudapp.azure.com". Su
### Stages of a typical Microsoft Purview - Profisee deployment run
-- Profisee ARM Deployment Wizard - Managed Identity for installation; its role assignments and permissions should look like the image below.-
- :::image type="content" alt-text="Image 1 - Screenshot of Profisee Managed Identity Azure Role Assignments." source="./media/how-to-deploy-profisee-purview/profisee-managed-identity-azure-role-assignments.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-managed-identity-azure-role-assignments.png":::
+1. On the basics page, select the [user-assigned managed identity you created earlier](#microsoft-purviewprofisee-integration-deployment-on-azure-kubernetes-service-aks) to deploy the resources.
-- Profisee ARM Deployment Wizard - App Registration Configuration
+1. For your Profisee configuration, you can have your information stored in Key Vault or supply the details during deployment.
+ 1. Choose your Profisee version, and provide your admin user account and license.
+ 1. Select to configure using Microsoft Purview.
+ 1. For the Application Registration Client ID, provide the [**application (client) ID**](/active-directory/develop/howto-create-service-principal-portal#get-tenant-and-app-id-values-for-signing-in) for the [application registration you created earlier](#microsoft-purviewprofisee-integration-deployment-on-azure-kubernetes-service-aks).
+ 1. Select your Microsoft Purview account.
+ 1. Add the **object ID** for the [service principal you created earlier](#microsoft-purviewprofisee-integration-deployment-on-azure-kubernetes-service-aks).
+ 1. Add the value for the secret you created for that service principal.
+ 1. Give your web application a name.
- :::image type="content" alt-text="Image 2 - Screenshot of Profisee Azure ARM Wizard App Registration Configuration." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-app-reg-config.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-app-reg-config.png":::
+ :::image type="content" alt-text="Screenshot of the Profisee page of the Azure ARM Wizard, with all values filled out." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-a-profisee.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-a-profisee.png":::
-- Profisee ARM Deployment Wizard - Profisee Configuration and supplying Admin account username
+1. On the Kubernetes page, you may choose an older version of Kubernetes if needed, but leave the field **blank** to deploy the **latest** version.
- :::image type="content" alt-text="Image 3 - Screenshot of Profisee Azure ARM Wizard Step1 Profisee." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-a-profisee.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-a-profisee.png":::
+ :::image type="content" alt-text="Screenshot of the Kubernetes configuration page in the ARM deployment wizard, configured with the smallest standard size and default network settings." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-b-kubernetes.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-b-kubernetes.png":::
-- Profisee ARM Deployment Wizard - Kubernetes Configuration - You may choose an older version of Kubernetes but leave the field BLANK to deploy the LATEST version.
+ >[!TIP]
+ > In most cases, leaving the version field blank is sufficient, unless you specifically need to deploy an older version of Kubernetes on AKS.
- :::image type="content" alt-text="Image 4 - Screenshot of Profisee Azure ARM_Wizard Step2 Kubernetes." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-b-kubernetes.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-b-kubernetes.png":::
+1. On the SQL configuration page, you can choose to deploy a new Azure SQL server or use an existing one. You'll provide login details and a database name to use for this deployment.
-- Profisee ARM Deployment Wizard - SQL Server
+ :::image type="content" alt-text="Screenshot of SQL configuration page in the ARM deployment wizard, with Yes, create a new SQL Server selected and details provided." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-c-sqlserver.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-c-sqlserver.png":::
- :::image type="content" alt-text="Image 5 - Screenshot of Profisee Azure ARM Wizard Step3 SQLServer." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-c-sqlserver.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-c-sqlserver.png":::
+1. On the storage configuration page, you can choose to create a new storage account or use an existing one. You'll need to provide an access key and the name of an existing file share if you choose an existing account.
-- Profisee ARM Deployment Wizard - Azure DNS
-Recommended: Keep it to "Yes, use default Azure DNS". Choosing Yes, the deployer automatically creates a Let's Encrypt certificate for HTTP/TLS. Of you choose "No" you'll need to supply various networking configuration parameters and your own HTTPS/TLS certificate.
+ :::image type="content" alt-text="Screenshot of ARM deployment wizard storage account page, with details provided." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-e-storage.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-e-storage.png":::
- :::image type="content" alt-text="Image 6 - Screenshot of Profisee Azure ARM Wizard Step4 AzureDNS." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-d-azure-dns.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-d-azure-dns.png":::
+1. On the networking configuration page, you'll choose to either use the default Azure DNS or provide your own DNS host name.
-- Profisee ARM Deployment Wizard - Azure Storage
+ >[!TIP]
+ > **Yes, use default Azure DNS** is the recommended configuration. If you choose **Yes**, the deployer automatically creates a Let's Encrypt certificate for HTTPS/TLS. If you choose **No**, you'll need to supply various networking configuration parameters and your own HTTPS/TLS certificate.
- :::image type="content" alt-text="Image 7 - Screenshot of Profisee Azure ARM Wizard Step5 Storage." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-e-storage.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-e-storage.png":::
+ :::image type="content" alt-text="Screenshot of the ARM deployment Networking page, with Yes use default Azure DNS selected." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-d-azure-dns.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-d-azure-dns.png":::
-- Profisee ARM Deployment Wizard - Final Validation
+ >[!WARNING]
+ > The default Azure DNS URL (for example, URL="https://purviewprofisee.southcentralus.cloudapp.azure.com/profisee") will be picked up by the ARM template deployment wizard from the license file supplied to you by Profisee. If you intend to make changes and not use the default Azure DNS, make sure to communicate the full DNS host name and the fully qualified URL of the Profisee instance to the Profisee support team so that they can regenerate and provide you with an updated license file. Failure to do so will result in a malfunctioning installation of Profisee.
- :::image type="content" alt-text="Image 8 - Screenshot of Profisee Azure ARM Wizard_Step6 Final_Template Validation." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-f-final-template-validation.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-f-final-template-validation.png":::
+1. On the review + create page, review your details to ensure they're correct while the wizard validates your configuration. Once validation passes, select **Create**.
-- Around 5-10 Minutes into the ARM deployment
+ :::image type="content" alt-text="Screenshot of the review plus create page of the ARM deployment wizard, showing all details with a validation passed flag at the top of the page." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-f-final-template-validation.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-step-f-final-template-validation.png":::
- :::image type="content" alt-text="Image 9 - Screenshot of Profisee Azure ARM Wizard Deployment Progress Intermediate." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-mid.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-mid.png":::
+1. It will take around 45-50 minutes for the deployment to finish installing Profisee. During the deployment, you'll see which stages are in progress, and you can refresh the page to review progress. The deployment shows as complete when all stages are finished; completion of the "InstallProfiseePlatform" stage also indicates that the deployment is complete.
-- Final Stages of Deployment. You need to wait around 45-50 minutes for the deployment to complete installing Profisee. Completion of "InstallProfiseePlatform" stage also indicates deployment is complete!
+ :::image type="content" alt-text="Screenshot of Profisee Azure ARM Wizard Deployment Progress, showing intermediate progress." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-mid.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-mid.png":::
- :::image type="content" alt-text="Image 10 - Screenshot of Profisee Azure ARM Wizard Deployment Complete." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-final.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-final.png":::
+ :::image type="content" alt-text="Screenshot of Profisee Azure ARM Wizard Deployment Progress, showing completed progress." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-final.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-deployment-progress-final.png":::
-- Open the resource group once deployment completes.
+1. Once the deployment is complete, open the resource group where you deployed your integration.
- :::image type="content" alt-text="Image 11 - Screenshot of Profisee Azure ARM Wizard_Post Deploy_Click Open Resource Group." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-post-deploy-click-open-resource-group.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-post-deploy-click-open-resource-group.png":::
+ :::image type="content" alt-text="Screenshot of the resource group where the Profisee resources were deployed, with the deployment script highlighted." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-post-deploy-click-open-resource-group.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-post-deploy-click-open-resource-group.png":::
-- Fetch the final deployment URL. The final WEBURL is what you need to paste on your browser address bar and start enjoying Profisee-Purview integration! This URL will be the same that you'd have supplied to Profisee support while obtaining the license file. Unless you chose to change the URL format, it will look something like - "https://[profisee_name].[region].cloudapp.azure.com/profisee/
+1. Under **Outputs**, fetch the final deployment URL. The final WEBURL is what you paste into your browser address bar to start using the Profisee-Purview integration. This URL is the same one you supplied to Profisee support when obtaining the license file. Unless you chose to change the URL format, it will look something like "https://[profisee_name].[region].cloudapp.azure.com/profisee/".
- :::image type="content" alt-text="Image 12 - Screenshot of Profisee Azure ARM Wizard Select Outputs Get FinalDeployment URL." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png":::
+ :::image type="content" alt-text="Screenshot of the outputs of the deployment script, showing the deployment WEB URL highlighted in the output." source="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-azure-arm-wizard-click-outputs-get-final-deployment-url.png":::
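If you'd rather pull the output programmatically than read it from the portal blade, a minimal sketch with the Azure SDK for Python is shown below; the subscription, resource group, and deployment name are placeholders you'd replace with values from your own deployment, and the exact output key name (shown as WEBURL in the portal) may vary.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder names - substitute your own subscription, resource group, and
# the deployment name the ARM wizard created.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
deployment = client.deployments.get("<resource-group>", "<deployment-name>")

# Print every output so you can spot the final web URL regardless of key casing.
for name, value in (deployment.properties.outputs or {}).items():
    print(name, "=", value.get("value"))
```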
-- Populate and hydrate data to the newly installed Profisee environment by installing FastApp. Go to your Profisee deployment URL and select **/Profisee/api/client**. It should look something like - "https://[profisee_name].[region].cloudapp.azure.com/profisee/api/client". Select the Downloads for "Profisee FastApp Studio" utility and the "Profisee Platform Tools". Install both these tools on your local client machine.
+1. Populate and hydrate data in the newly installed Profisee environment by installing FastApp. Go to your Profisee deployment URL and select **/Profisee/api/client**; it should look something like "https://[profisee_name].[region].cloudapp.azure.com/profisee/api/client". Select the downloads for the "Profisee FastApp Studio" utility and the "Profisee Platform Tools", and install both tools on your local client machine.
- :::image type="content" alt-text="Image 13 - Screenshot of Profisee Client Tools Download." source="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png":::
+ :::image type="content" alt-text="Screenshot of the Profisee Client Tools download, with the download links highlighted." source="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-download-fastapp-tools.png":::
-- Log in to FastApp Studio and perform the rest of the MDM Administration and configuration management for Profisee. Once you log in with the administrator email address supplied during the setup; you should be able to see the administration menu on the left pane of the Profisee FastApp Studio. Navigate to these menus and perform the rest of your MDM journey using FastApp tool. Being able to see the administration menu as seen in the image below confirms successful installation of Profisee on Azure Platform.
+1. Log in to FastApp Studio and perform the rest of the MDM administration and configuration management for Profisee. Once you log in with the administrator email address supplied during setup, you should see the administration menu in the left pane of Profisee FastApp Studio. Navigate to these menus and perform the rest of your MDM journey using the FastApp tool. Seeing the administration menu, as shown in the image below, confirms successful installation of Profisee on the Azure platform.
- :::image type="content" alt-text="Image 14 - Screenshot of Profisee FastApp Studio once you sign in." source="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png":::
+ :::image type="content" alt-text="Screenshot of the Profisee FastApp Studio once you sign in, showing the Accounts and Teams menu selected, and the FastApps link highlighted." source="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png" lightbox="./media/how-to-deploy-profisee-purview/profisee-fastapp-studio-home-screen.png":::
-- As a final validation step to ensure successful installation and for checking whether Profisee has been successfully connected to your Microsoft Purview instance, go to **/Profisee/api/governance/health** It should look something like - "https://[profisee_name].[region].cloudapp.azure.com//Profisee/api/governance/health". The output response will indicate the words **"Status": "Healthy"** on all the Purview subsystems.
+1. As a final validation step, to confirm successful installation and check whether Profisee has been successfully connected to your Microsoft Purview instance, go to **/Profisee/api/governance/health**. It should look something like "https://[profisee_name].[region].cloudapp.azure.com/Profisee/api/governance/health". The output response will show **"Status": "Healthy"** for all the Microsoft Purview subsystems.
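    If you want to script this final check, for example as a post-deployment smoke test, a minimal sketch using Python's `requests` library might look like the following. The host name is only the assumed default Azure DNS format, and the layout of the health payload is an assumption, so treat the snippet as a starting point; the expected response shape follows it.

```python
import requests

# Assumed default Azure DNS naming - replace with your own Profisee deployment URL.
HEALTH_URL = "https://purviewprofisee.southcentralus.cloudapp.azure.com/Profisee/api/governance/health"

response = requests.get(HEALTH_URL, timeout=30)
response.raise_for_status()
health = response.json()

# Payload layout is assumed: flag any nested object whose "Status" is not "Healthy".
unhealthy = [name for name, value in health.items()
             if isinstance(value, dict) and value.get("Status") != "Healthy"]
print("All Microsoft Purview subsystems healthy" if not unhealthy
      else f"Check these subsystems: {unhealthy}")
```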
    ```json
    {
       ...
    }
    ```

+ An output response that looks similar to the above confirms successful installation, completes all the deployment steps, and validates that Profisee has been successfully connected to your Microsoft Purview account and that the two systems are able to communicate properly.

## Next steps

+ Through this guide, we learned of the importance of MDM in driving and supporting Data Governance in the context of the Azure data estate, and how to set up and deploy a Microsoft Purview-Profisee integration.
-For more usage details on Profisee MDM, register for scheduled trainings, live product demonstration and Q&A on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
+For more usage details on Profisee MDM, register for scheduled trainings, live product demonstration and Q&A on [Profisee Academy Tutorials and Demos](https://profisee.com/demo/)!
route-server Quickstart Configure Route Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/quickstart-configure-route-server-cli.md
This article helps you configure Azure Route Server to peer with a Network Virtu
## Prerequisites * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* [Install the latest Azure CLI](/cli/azure/install-azure-cli), or make sure you can use [Azure Cloud Shell](/azure/cloud-shell/quickstart) in the portal.
+* [Install the latest Azure CLI](/cli/azure/install-azure-cli), or make sure you can use [Azure Cloud Shell](../cloud-shell/quickstart.md) in the portal.
* Review the [service limits for Azure Route Server](route-server-faq.md#limitations). ## Sign in to your Azure account and select your subscription.
After you've created the Azure Route Server, continue on to learn more about how
> [!div class="nextstepaction"] > [Azure ExpressRoute and Azure VPN support](expressroute-vpn-support.md)
-
search Cognitive Search Skill Deprecated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-deprecated.md
Previously updated : 09/16/2021 Last updated : 08/17/2022 # Deprecated Cognitive Skills in Azure Cognitive Search
-This document describes cognitive skills that are considered deprecated. Use the following guide for the contents:
+This document describes cognitive skills that are considered deprecated (retired). Use the following guide for the contents:
* Skill Name: The name of the skill that will be deprecated; it maps to the @odata.type attribute. * Last available api version: The last version of the Azure Cognitive Search public API through which skillsets containing the corresponding deprecated skill can be created/updated. Indexers with attached skillsets with these skills will continue to run even in future API versions until the "End of support" date, at which point they will start failing. * End of support: The day after which the corresponding skill is considered unsupported and will stop working. Previously created skillsets should still continue to function, but users are recommended to migrate away from a deprecated skill. * Recommendations: Migration path forward to use a supported skill. Users are advised to follow the recommendations to continue to receive support.
-If you're using the [Microsoft.Skills.Text.EntityRecognitionSkill](#microsoftskillstextentityrecognitionskill), this article will help you upgrade your skillset to use the [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md) which is generally available and introduces new features.
+If you're using the [Microsoft.Skills.Text.EntityRecognitionSkill](#microsoftskillstextentityrecognitionskill) (Entity Recognition cognitive skill (v2)), this article will help you upgrade your skillset to use the [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md), which is generally available and introduces new features.
-If you're using the [Microsoft.Skills.Text.SentimentSkill](#microsoftskillstextsentimentskill), this article will help you upgrade your skillset to use the [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md) which is generally available and introduces new features.
+If you're using the [Microsoft.Skills.Text.SentimentSkill](#microsoftskillstextsentimentskill) (Sentiment cognitive skill (v2)), this article will help you upgrade your skillset to use the [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md), which is generally available and introduces new features.
-If you're using the [Microsoft.Skills.Text.NamedEntityRecognitionSkill](#microsoftskillstextnamedentityrecognitionskill), this article will help you upgrade your skillset to use the [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md) which is generally available and introduces new features.
+If you're using the [Microsoft.Skills.Text.NamedEntityRecognitionSkill](#microsoftskillstextnamedentityrecognitionskill) (Named Entity Recognition cognitive skill (v2)), this article will help you upgrade your skillset to use the [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md), which is generally available and introduces new features.
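Because a migration is ultimately a skillset update, one way to approach it is to fetch the skillset, swap the deprecated `@odata.type` values, and push the definition back through the Search REST API. The sketch below is an illustration only: the service name, skillset name, key, and API version are placeholders, and a real migration also needs the skill's inputs, outputs, and parameters reviewed against the guidance above, since the V3 skills don't emit identical outputs.

```python
import requests

# Placeholders - replace with your search service, skillset, and admin key.
SERVICE = "https://<your-search-service>.search.windows.net"
SKILLSET = "<your-skillset-name>"
API_VERSION = "2021-04-30-Preview"   # use a version that supports the V3 skills
HEADERS = {"Content-Type": "application/json", "api-key": "<admin-api-key>"}

url = f"{SERVICE}/skillsets/{SKILLSET}?api-version={API_VERSION}"
skillset = requests.get(url, headers=HEADERS).json()

# Point deprecated v2 entity skills at the generally available V3 skill.
DEPRECATED = {"#Microsoft.Skills.Text.EntityRecognitionSkill",
              "#Microsoft.Skills.Text.NamedEntityRecognitionSkill"}
for skill in skillset.get("skills", []):
    if skill.get("@odata.type") in DEPRECATED:
        skill["@odata.type"] = "#Microsoft.Skills.Text.V3.EntityRecognitionSkill"
        # Review inputs/outputs here - output names differ between v2 and v3.

requests.put(url, headers=HEADERS, json=skillset).raise_for_status()
```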
## Microsoft.Skills.Text.EntityRecognitionSkill
To migrate to the [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-se
+ [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md) + [Entity Recognition Skill (V3)](cognitive-search-skill-entity-recognition-v3.md)
-+ [Sentiment Skill (V3)](cognitive-search-skill-sentiment-v3.md)
++ [Sentiment Skill (V3)](cognitive-search-skill-sentiment-v3.md)
search Cognitive Search Skill Entity Linking V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-linking-v3.md
Title: Entity Linking cognitive skill
+ Title: Entity Linking cognitive skill (v3)
description: Extract different linked entities from text in an enrichment pipeline in Azure Cognitive Search.
Previously updated : 12/09/2021 Last updated : 08/17/2022
-# Entity Linking cognitive skill
+# Entity Linking cognitive skill (v3)
-The **Entity Linking** skill returns a list of recognized entities with links to articles in a well-known knowledge base (Wikipedia).
+The **Entity Linking** skill (v3) returns a list of recognized entities with links to articles in a well-known knowledge base (Wikipedia).
> [!NOTE] > This skill is bound to the [Entity Linking](../cognitive-services/language-service/entity-linking/overview.md) machine learning models in [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md) and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
If the language code for the document is unsupported, a warning is returned and
## See also + [Built-in skills](cognitive-search-predefined-skills.md)
-+ [How to define a skillset](cognitive-search-defining-skillset.md)
++ [How to define a skillset](cognitive-search-defining-skillset.md)
search Cognitive Search Skill Entity Recognition V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-recognition-v3.md
Title: Entity Recognition (V3) cognitive skill
+ Title: Entity Recognition cognitive skill (v3)
description: Extract different types of entities using the machine learning models of Azure Cognitive Services for Language in an AI enrichment pipeline in Azure Cognitive Search.
Previously updated : 12/09/2021 Last updated : 08/17/2022
-# Entity Recognition cognitive skill (V3)
+# Entity Recognition cognitive skill (v3)
-The **Entity Recognition** skill extracts entities of different types from text. These entities fall under 14 distinct categories, ranging from people and organizations to URLs and phone numbers. This skill uses the [Named Entity Recognition](../cognitive-services/language-service/named-entity-recognition/overview.md) machine learning models provided by [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md).
+The **Entity Recognition** skill (v3) extracts entities of different types from text. These entities fall under 14 distinct categories, ranging from people and organizations to URLs and phone numbers. This skill uses the [Named Entity Recognition](../cognitive-services/language-service/named-entity-recognition/overview.md) machine learning models provided by [Azure Cognitive Services for Language](../cognitive-services/language-service/overview.md).
> [!NOTE] > This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
search Cognitive Search Skill Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-entity-recognition.md
Title: Entity Recognition cognitive skill
+ Title: Entity Recognition cognitive skill (v2)
description: Extract different types of entities from text in an enrichment pipeline in Azure Cognitive Search.
Previously updated : 09/24/2021 Last updated : 08/17/2022
-# Entity Recognition cognitive skill
+# Entity Recognition cognitive skill (v2)
-The **Entity Recognition** skill extracts entities of different types from text. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services.
+The **Entity Recognition** skill (v2) extracts entities of different types from text. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services.
> [!IMPORTANT]
-> The Entity Recognition skill is now discontinued replaced by [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
+> The Entity Recognition skill (v2) (**Microsoft.Skills.Text.EntityRecognitionSkill**) is now discontinued and replaced by [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
> [!NOTE] > As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
If the language code for the document is unsupported, a warning is returned and
+ [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [Entity Recognition Skill (V3)](cognitive-search-skill-entity-recognition-v3.md)
++ [Entity Recognition Skill (V3)](cognitive-search-skill-entity-recognition-v3.md)
search Cognitive Search Skill Named Entity Recognition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-named-entity-recognition.md
Title: Named Entity Recognition cognitive skill
+ Title: Named Entity Recognition cognitive skill (v2)
description: Extract named entities for person, location and organization from text in an AI enrichment pipeline in Azure Cognitive Search.
Previously updated : 08/12/2021 Last updated : 08/17/2022
-# Named Entity Recognition cognitive skill
+# Named Entity Recognition cognitive skill (v2)
-The **Named Entity Recognition** skill extracts named entities from text. Available entities include the types `person`, `location` and `organization`.
+The **Named Entity Recognition** skill (v2) extracts named entities from text. Available entities include the types `person`, `location` and `organization`.
> [!IMPORTANT]
-> Named entity recognition skill is now discontinued replaced by [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
+> The Named Entity Recognition skill (v2) (**Microsoft.Skills.Text.NamedEntityRecognitionSkill**) is now discontinued and replaced by [Microsoft.Skills.Text.V3.EntityRecognitionSkill](cognitive-search-skill-entity-recognition-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
> [!NOTE] > As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
If the language code for the document is unsupported, a warning is returned and
+ [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [Entity Recognition Skill (V3)](cognitive-search-skill-entity-recognition-v3.md)
++ [Entity Recognition Skill (V3)](cognitive-search-skill-entity-recognition-v3.md)
search Cognitive Search Skill Sentiment V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-sentiment-v3.md
Title: Sentiment cognitive skill (V3)
+ Title: Sentiment cognitive skill (v3)
description: Provides sentiment labels for text in an AI enrichment pipeline in Azure Cognitive Search.
Previously updated : 12/09/2021 Last updated : 08/17/2022
-# Sentiment cognitive skill (V3)
+# Sentiment cognitive skill (v3)
-The V3 **Sentiment** skill evaluates unstructured text and for each record, provides sentiment labels (such as "negative", "neutral" and "positive") based on the highest confidence score found by the service at a sentence and document-level. This skill uses the machine learning models provided by version 3 of [Language Service](../cognitive-services/language-service/overview.md) in Cognitive Services. It also exposes [opinion mining capabilities](../cognitive-services/language-service/sentiment-opinion-mining/overview.md), which provides more granular information about the opinions related to attributes of products or services in text.
+The **Sentiment** skill (v3) evaluates unstructured text and, for each record, provides sentiment labels (such as "negative", "neutral", and "positive") based on the highest confidence score found by the service at the sentence and document level. This skill uses the machine learning models provided by version 3 of [Language Service](../cognitive-services/language-service/overview.md) in Cognitive Services. It also exposes [opinion mining capabilities](../cognitive-services/language-service/sentiment-opinion-mining/overview.md), which provide more granular information about the opinions related to attributes of products or services in text.
> [!NOTE] > This skill is bound to Cognitive Services and requires [a billable resource](cognitive-search-attach-cognitive-services.md) for transactions that exceed 20 documents per indexer per day. Execution of built-in skills is charged at the existing [Cognitive Services pay-as-you go price](https://azure.microsoft.com/pricing/details/cognitive-services/).
If a language is not supported, a warning is generated and no sentiment results
## See also + [Built-in skills](cognitive-search-predefined-skills.md)
-+ [How to define a skillset](cognitive-search-defining-skillset.md)
++ [How to define a skillset](cognitive-search-defining-skillset.md)
search Cognitive Search Skill Sentiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-skill-sentiment.md
Title: Sentiment cognitive skill
+ Title: Sentiment cognitive skill (v2)
description: Extract a positive-negative sentiment score from text in an AI enrichment pipeline in Azure Cognitive Search.
Previously updated : 09/17/2021 Last updated : 08/17/2022
-# Sentiment cognitive skill
+# Sentiment cognitive skill (v2)
-The **Sentiment** skill evaluates unstructured text along a positive-negative continuum, and for each record, returns a numeric score between 0 and 1. Scores close to 1 indicate positive sentiment, and scores close to 0 indicate negative sentiment. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services.
+The **Sentiment** skill (v2) evaluates unstructured text along a positive-negative continuum, and for each record, returns a numeric score between 0 and 1. Scores close to 1 indicate positive sentiment, and scores close to 0 indicate negative sentiment. This skill uses the machine learning models provided by [Text Analytics](../cognitive-services/text-analytics/overview.md) in Cognitive Services.
> [!IMPORTANT]
-> The Sentiment skill is now discontinued replaced by [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
+> The Sentiment skill (v2) (**Microsoft.Skills.Text.SentimentSkill**) is now discontinued and replaced by [Microsoft.Skills.Text.V3.SentimentSkill](cognitive-search-skill-sentiment-v3.md). Follow the recommendations in [Deprecated cognitive search skills](cognitive-search-skill-deprecated.md) to migrate to a supported skill.
> [!NOTE] > As you expand scope by increasing the frequency of processing, adding more documents, or adding more AI algorithms, you will need to [attach a billable Cognitive Services resource](cognitive-search-attach-cognitive-services.md). Charges accrue when calling APIs in Cognitive Services, and for image extraction as part of the document-cracking stage in Azure Cognitive Search. There are no charges for text extraction from documents.
If a language is not supported, a warning is generated and no sentiment score is
+ [Built-in skills](cognitive-search-predefined-skills.md) + [How to define a skillset](cognitive-search-defining-skillset.md)
-+ [Sentiment Skill (V3)](cognitive-search-skill-sentiment-v3.md)
++ [Sentiment Skill (V3)](cognitive-search-skill-sentiment-v3.md)
search Search Synapseml Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-synapseml-cognitive-services.md
Although Azure Cognitive Search has native [AI enrichment](cognitive-search-conc
You'll need the `synapseml` library and several Azure resources. If possible, use the same subscription and region for your Azure resources and put everything into one resource group for simple cleanup later. The following links are for portal installs. The sample data is imported from a public site. + [Azure Cognitive Search](search-create-service-portal.md) (any tier) <sup>1</sup>
-+ [Azure Cognitive Services](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>2</sup>
++ [Azure Cognitive Services](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#create-a-new-azure-cognitive-services-resource) (any tier) <sup>2</sup> + [Azure Databricks](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal?tabs=azure-portal) (any tier) <sup>3</sup> <sup>1</sup> You can use the free tier for this walkthrough but [choose a higher tier](search-sku-tier.md) if data volumes are large. You'll need the [API key](search-security-api-keys.md#find-existing-keys) for this resource.
-<sup>2</sup> This walkthrough uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](/azure/cognitive-services/cognitive-services-apis-create-account?tabs=multiservice%2Cwindows#get-the-keys-for-your-resource) and the region, and it'll work for both services.
+<sup>2</sup> This walkthrough uses Azure Forms Recognizer and Azure Translator. In the instructions below, you'll provide a [Cognitive Services multi-service key](../cognitive-services/cognitive-services-apis-create-account.md?tabs=multiservice%2cwindows#get-the-keys-for-your-resource) and the region, and it'll work for both services.
<sup>3</sup> In this walkthrough, Azure Databricks provides the computing platform. You could also use Azure Synapse Analytics or any other computing platform supported by `synapseml`. The Azure Databricks article listed in the prerequisites includes multiple steps. For this walkthrough, follow only the instructions in "Create a workspace".
display(df2)
Paste the following code into the third cell. No modifications are required, so run the code when you're ready.
-This code loads the [AnalyzeInvoices transformer](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#analyzeinvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](/azure/applied-ai-services/form-recognizer/concept-invoice) of Azure Forms Analyzer.
+This code loads the [AnalyzeInvoices transformer](https://microsoft.github.io/SynapseML/docs/documentation/transformers/transformers_cognitive/#analyzeinvoices) and passes a reference to the data frame containing the invoices. It calls the pre-built [invoice model](../applied-ai-services/form-recognizer/concept-invoice.md) of Azure Form Recognizer.
```python from synapse.ml.cognitive import AnalyzeInvoices
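A minimal sketch of what such a cell might look like, following the usual SynapseML transformer pattern, is shown below. The key, region, and column names are assumptions meant to align with the earlier cells of the walkthrough; check the linked SynapseML reference for the exact setter names before running it.

```python
from synapse.ml.cognitive import AnalyzeInvoices

# Assumed variable and column names: the multi-service Cognitive Services key,
# its region, and the data frame column that holds the invoice URLs come from
# the earlier cells of the walkthrough.
analyzed_df = (AnalyzeInvoices()
    .setSubscriptionKey(cognitive_services_key)
    .setLocation(cognitive_services_region)
    .setImageUrlCol("url")
    .setOutputCol("invoices")
    .setErrorCol("errors")
    .setConcurrency(5)
    .transform(df2))
```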
In this walkthrough, you learned about the [AzureSearchWriter](https://microsoft
As a next step, review the other SynapseML tutorials that produce transformed content you might want to explore through Azure Cognitive Search: > [!div class="nextstepaction"]
-> [Tutorial: Text Analytics with Cognitive Services](/azure/synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark)
+> [Tutorial: Text Analytics with Cognitive Services](../synapse-analytics/machine-learning/tutorial-text-analytics-use-mmlspark.md)
security Azure Disk Encryption Vms Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-disk-encryption-vms-vmss.md
- Title: Azure Disk Encryption for virtual machines and virtual machine scale sets
-description: Learn about Azure Disk encryption for virtual machines (VMs) and VM scale sets. Azure Disk encryption works for both Linux and Windows VMs.
---- Previously updated : 10/15/2019----
-# Azure Disk Encryption for virtual machines and virtual machine scale sets
-
-Azure Disk encryption can be applied to both Linux and Windows virtual machines, as well as to virtual machine scale sets.
-
-## Linux virtual machines
-
-The following articles provide guidance for encrypting Linux virtual machines.
-
-### Current version of Azure Disk Encryption
--- [Overview of Azure Disk Encryption for Linux virtual machines](../../virtual-machines/linux/disk-encryption-overview.md)-- [Azure Disk Encryption scenarios on Linux VMs](../../virtual-machines/linux/disk-encryption-linux.md)-- [Create and encrypt a Linux VM with Azure CLI](../../virtual-machines/linux/disk-encryption-cli-quickstart.md)-- [Create and encrypt a Linux VM with Azure PowerShell](../../virtual-machines/linux/disk-encryption-powershell-quickstart.md)-- [Create and encrypt a Linux VM with the Azure portal](../../virtual-machines/linux/disk-encryption-portal-quickstart.md)-- [Azure Disk Encryption Extension Schema for Linux](../../virtual-machines/extensions/azure-disk-enc-linux.md)-- [Creating and configuring a key vault for Azure Disk Encryption](../../virtual-machines/linux/disk-encryption-key-vault.md)-- [Azure Disk Encryption sample scripts](../../virtual-machines/linux/disk-encryption-sample-scripts.md)-- [Azure Disk Encryption troubleshooting](../../virtual-machines/linux/disk-encryption-troubleshooting.md)-- [Azure Disk Encryption frequently asked questions](../../virtual-machines/linux/disk-encryption-faq.yml)-
-### Azure disk encryption with Azure AD (previous version)
--- [Overview of Azure Disk Encryption with Azure AD for Linux virtual machines](../../virtual-machines/linux/disk-encryption-overview-aad.md)-- [Azure Disk Encryption with Azure AD scenarios on Linux VMs](../../virtual-machines/linux/disk-encryption-linux.md)-- [Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)](../../virtual-machines/linux/disk-encryption-key-vault-aad.md)-
-## Windows virtual machines
-
-The following articles provide guidance for encrypting Windows virtual machines.
-
-### Current version of Azure Disk Encryption
--- [Overview of Azure Disk Encryption for Windows virtual machines](../../virtual-machines/windows/disk-encryption-overview.md)-- [Azure Disk Encryption scenarios on Windows VMs](../../virtual-machines/windows/disk-encryption-windows.md)-- [Create and encrypt a Windows VM with Azure CLI](../../virtual-machines/windows/disk-encryption-cli-quickstart.md)-- [Create and encrypt a Windows VM with Azure PowerShell](../../virtual-machines/windows/disk-encryption-powershell-quickstart.md)-- [Create and encrypt a Windows VM with the Azure portal](../../virtual-machines/windows/disk-encryption-portal-quickstart.md)-- [Azure Disk Encryption Extension Schema for Windows](../../virtual-machines/extensions/azure-disk-enc-windows.md)-- [Creating and configuring a key vault for Azure Disk Encryption](../../virtual-machines/windows/disk-encryption-key-vault.md)-- [Azure Disk Encryption sample scripts](../../virtual-machines/windows/disk-encryption-sample-scripts.md)-- [Azure Disk Encryption troubleshooting](../../virtual-machines/windows/disk-encryption-troubleshooting.md)-- [Azure Disk Encryption frequently asked questions](../../virtual-machines/windows/disk-encryption-faq.yml)-
-### Azure disk encryption with Azure AD (previous version)
--- [Overview of Azure Disk Encryption with Azure AD for Windows virtual machines](../../virtual-machines/windows/disk-encryption-overview-aad.md)-- [Azure Disk Encryption with Azure AD scenarios on Windows VMs](../../virtual-machines/windows/disk-encryption-windows.md)-- [Creating and configuring a key vault for Azure Disk Encryption with Azure AD (previous release)](../../virtual-machines/windows/disk-encryption-key-vault-aad.md)-
-## Virtual machine scale sets
-
-The following articles provide guidance for encrypting virtual machine scale sets.
--- [Overview of Azure Disk Encryption for virtual machine scale sets](../../virtual-machine-scale-sets/disk-encryption-overview.md) -- [Encrypt a virtual machine scale sets using the Azure CLI](../../virtual-machine-scale-sets/disk-encryption-cli.md) -- [Encrypt a virtual machine scale sets using Azure PowerShell](../../virtual-machine-scale-sets/disk-encryption-powershell.md).-- [Encrypt a virtual machine scale sets using the Azure Resource Manager](../../virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md)-- [Create and configure a key vault for Azure Disk Encryption](../../virtual-machine-scale-sets/disk-encryption-key-vault.md)-- [Use Azure Disk Encryption with virtual machine scale set extension sequencing](../../virtual-machine-scale-sets/disk-encryption-extension-sequencing.md)-
-## Next steps
--- [Azure encryption overview](encryption-overview.md)-- [Data encryption at rest](encryption-atrest.md)-- [Data security and encryption best practices](data-encryption-best-practices.md)
security Data Encryption Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/data-encryption-best-practices.md
Azure Key Vault is designed to support application keys and secrets. Key Vault i
Following are security best practices for using Key Vault.
-**Best practice**: Grant access to users, groups, and applications at a specific scope.
+**Best practice**: Grant access to users, groups, and applications at a specific scope.
**Detail**: Use Azure RBAC predefined roles. For example, to grant access to a user to manage key vaults, you would assign the predefined role [Key Vault Contributor](../../role-based-access-control/built-in-roles.md) to this user at a specific scope. The scope in this case would be a subscription, a resource group, or just a specific key vault. If the predefined roles don't fit your needs, you can [define your own roles](../../role-based-access-control/custom-roles.md).
-**Best practice**: Control what users have access to.
+**Best practice**: Control what users have access to.
**Detail**: Access to a key vault is controlled through two separate interfaces: management plane and data plane. The management plane and data plane access controls work independently. Use Azure RBAC to control what users have access to. For example, if you want to grant an application access to use keys in a key vault, you only need to grant data plane access permissions by using key vault access policies, and no management plane access is needed for this application. Conversely, if you want a user to be able to read vault properties and tags but not have any access to keys, secrets, or certificates, you can grant this user read access by using Azure RBAC, and no access to the data plane is required.
Use Azure RBAC to control what users have access to. For example, if you want to
**Best practice**: Store certificates in your key vault. Your certificates are of high value. In the wrong hands, your application's security or the security of your data can be compromised. **Detail**: Azure Resource Manager can securely deploy certificates stored in Azure Key Vault to Azure VMs when the VMs are deployed. By setting appropriate access policies for the key vault, you also control who gets access to your certificate. Another benefit is that you manage all your certificates in one place in Azure Key Vault. See [Deploy Certificates to VMs from customer-managed Key Vault](/archive/blogs/kv/updated-deploy-certificates-to-vms-from-customer-managed-key-vault) for more information.
-**Best practice**: Ensure that you can recover a deletion of key vaults or key vault objects.
+**Best practice**: Ensure that you can recover a deletion of key vaults or key vault objects.
**Detail**: Deletion of key vaults or key vault objects can be inadvertent or malicious. Enable the soft delete and purge protection features of Key Vault, particularly for keys that are used to encrypt data at rest. Deletion of these keys is equivalent to data loss, so you can recover deleted vaults and vault objects if needed. Practice Key Vault recovery operations on a regular basis. > [!NOTE] > If a user has contributor permissions (Azure RBAC) to a key vault management plane, they can grant themselves access to the data plane by setting a key vault access policy. We recommend that you tightly control who has contributor access to your key vaults, to ensure that only authorized persons can access and manage your key vaults, keys, secrets, and certificates.
->
->
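Picking up the soft-delete and purge-protection recommendation above: these are vault properties you can set when creating or updating the vault. As a rough sketch only, using the `azure-mgmt-keyvault` SDK with placeholder names (verify the model and property names against the SDK version you use):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.keyvault import KeyVaultManagementClient
from azure.mgmt.keyvault.models import VaultPatchParameters, VaultPatchProperties

# Placeholder names - replace with your subscription, resource group, and vault.
client = KeyVaultManagementClient(DefaultAzureCredential(), "<subscription-id>")

client.vaults.update(
    "<resource-group>",
    "<vault-name>",
    VaultPatchParameters(
        properties=VaultPatchProperties(
            enable_soft_delete=True,       # on by default for newer vaults
            enable_purge_protection=True,  # cannot be disabled once enabled
        )
    ),
)
```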
## Manage with secure workstations > [!NOTE] > The subscription administrator or owner should use a secure access workstation or a privileged access workstation.
->
->
Because the vast majority of attacks target the end user, the endpoint becomes one of the primary points of attack. An attacker who compromises the endpoint can use the user's credentials to gain access to the organization's data. Most endpoint attacks take advantage of the fact that users are administrators in their local workstations.
-**Best practice**: Use a secure management workstation to protect sensitive accounts, tasks, and data.
+**Best practice**: Use a secure management workstation to protect sensitive accounts, tasks, and data.
**Detail**: Use a [privileged access workstation](https://4sysops.com/archives/understand-the-microsoft-privileged-access-workstation-paw-security-model/) to reduce the attack surface in workstations. These secure management workstations can help you mitigate some of these attacks and ensure that your data is safer.
-**Best practice**: Ensure endpoint protection.
+**Best practice**: Ensure endpoint protection.
**Detail**: Enforce security policies across all devices that are used to consume data, regardless of the data location (cloud or on-premises). ## Protect data at rest [Data encryption at rest](https://cloudblogs.microsoft.com/microsoftsecure/2015/09/10/cloud-security-controls-series-encrypting-data-at-rest/) is a mandatory step toward data privacy, compliance, and data sovereignty.
-**Best practice**: Apply disk encryption to help safeguard your data.
-**Detail**: Use [Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md). It enables IT administrators to encrypt Windows and Linux IaaS VM disks. Disk Encryption combines the industry-standard Windows BitLocker feature and the Linux dm-crypt feature to provide volume encryption for the OS and the data disks.
+**Best practice**: Apply disk encryption to help safeguard your data.
+**Detail**: Use [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md). Disk Encryption uses the industry-standard Linux dm-crypt or Windows BitLocker feature to provide volume encryption for the OS and the data disks.
Azure Storage and Azure SQL Database encrypt data at rest by default, and many services offer encryption as an option. You can use Azure Key Vault to maintain control of keys that access and encrypt your data. See [Azure resource providers encryption model support to learn more](encryption-atrest.md#azure-resource-providers-encryption-model-support).
-**Best practices**: Use encryption to help mitigate risks related to unauthorized data access.
+**Best practices**: Use encryption to help mitigate risks related to unauthorized data access.
**Detail**: Encrypt your drives before you write sensitive data to them. Organizations that don't enforce data encryption are more exposed to data-confidentiality issues. For example, unauthorized or rogue users might steal data in compromised accounts or gain unauthorized access to data coded in Clear Format. Companies also must prove that they are diligent and using correct security controls to enhance their data security in order to comply with industry regulations.
For data moving between your on-premises infrastructure and Azure, consider appr
Following are best practices specific to using Azure VPN Gateway, SSL/TLS, and HTTPS.
-**Best practice**: Secure access from multiple workstations located on-premises to an Azure virtual network.
+**Best practice**: Secure access from multiple workstations located on-premises to an Azure virtual network.
**Detail**: Use [site-to-site VPN](../../vpn-gateway/tutorial-site-to-site-portal.md).
-**Best practice**: Secure access from an individual workstation located on-premises to an Azure virtual network.
+**Best practice**: Secure access from an individual workstation located on-premises to an Azure virtual network.
**Detail**: Use [point-to-site VPN](../../vpn-gateway/vpn-gateway-howto-point-to-site-classic-azure-portal.md).
-**Best practice**: Move larger data sets over a dedicated high-speed WAN link.
+**Best practice**: Move larger data sets over a dedicated high-speed WAN link.
**Detail**: Use [ExpressRoute](../../expressroute/expressroute-introduction.md). If you choose to use ExpressRoute, you can also encrypt the data at the application level by using SSL/TLS or other protocols for added protection.
-**Best practice**: Interact with Azure Storage through the Azure portal.
+**Best practice**: Interact with Azure Storage through the Azure portal.
**Detail**: All transactions occur via HTTPS. You can also use [Storage REST API](/rest/api/storageservices/) over HTTPS to interact with [Azure Storage](https://azure.microsoft.com/services/storage/). Organizations that fail to protect data in transit are more susceptible to [man-in-the-middle attacks](/previous-versions/office/skype-server-2010/gg195821(v=ocs.14)), [eavesdropping](/previous-versions/office/skype-server-2010/gg195641(v=ocs.14)), and session hijacking. These attacks can be the first step in gaining access to confidential data.
security Encryption Atrest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-atrest.md
Microsoft Azure Services each support one or more of the encryption at rest mode
### Azure disk encryption
-Any customer using Azure Infrastructure as a Service (IaaS) features can achieve encryption at rest for their IaaS VMs and disks through Azure Disk Encryption. For more information on Azure Disk encryption, see the [Azure Disk Encryption documentation](./azure-disk-encryption-vms-vmss.md).
+Any customer using Azure Infrastructure as a Service (IaaS) features can achieve encryption at rest for their IaaS VMs and disks through Azure Disk Encryption. For more information on Azure Disk Encryption, see [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) or [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md).
#### Azure storage
security Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-overview.md
The three server-side encryption models offer different key management character
### Azure disk encryption
-You can protect Windows and Linux virtual machines by using [Azure disk encryption](./azure-disk-encryption-vms-vmss.md), which uses [Windows BitLocker](/previous-versions/windows/it-pro/windows-vista/cc766295(v=ws.10)) technology and Linux [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) to protect both operating system disks and data disks with full volume encryption.
+You can protect your managed disks by using [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md), which uses [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt), or [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md), which uses [Windows BitLocker](/previous-versions/windows/it-pro/windows-vista/cc766295(v=ws.10)), to protect both operating system disks and data disks with full volume encryption.
Encryption keys and secrets are safeguarded in your [Azure Key Vault subscription](../../key-vault/general/overview.md). By using the Azure Backup service, you can back up and restore encrypted virtual machines (VMs) that use Key Encryption Key (KEK) configuration.
security Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/iaas.md
Organizations that don't monitor VM performance can't determine whether certai
## Encrypt your virtual hard disk files We recommend that you encrypt your virtual hard disks (VHDs) to help protect your boot volume and data volumes at rest in storage, along with your encryption keys and secrets.
-[Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md) helps you encrypt your Windows and Linux IaaS virtual machine disks. Azure Disk Encryption uses the industry-standard [BitLocker](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732774(v=ws.11)) feature of Windows and the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with [Azure Key Vault](https://azure.microsoft.com/documentation/services/key-vault/) to help you control and manage the disk-encryption keys and secrets in your key vault subscription. The solution also ensures that all data on the virtual machine disks are encrypted at rest in Azure Storage.
+[Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md) help you encrypt your Linux and Windows IaaS virtual machine disks. Azure Disk Encryption uses the industry-standard [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux and the [BitLocker](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732774(v=ws.11)) feature of Windows to provide volume encryption for the OS and the data disks. The solution is integrated with [Azure Key Vault](../../key-vault/index.yml) to help you control and manage the disk-encryption keys and secrets in your key vault subscription. The solution also ensures that all data on the virtual machine disks is encrypted at rest in Azure Storage.
Following are best practices for using Azure Disk Encryption:
security Isolation Choices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/isolation-choices.md
Encryption in transit is a mechanism of protecting data when it is transmitted a
#### Encryption at Rest
-For many organizations, [data encryption at rest](isolation-choices.md) is a mandatory step towards data privacy, compliance, and data sovereignty. There are three Azure features that provide encryption of data that is ΓÇ£at restΓÇ¥:
+For many organizations, [data encryption at rest](isolation-choices.md) is a mandatory step towards data privacy, compliance, and data sovereignty. There are three Azure features that provide encryption of data that is "at rest":
- [Storage Service Encryption](../../storage/blobs/security-recommendations.md) allows you to request that the storage service automatically encrypt data when writing it to Azure Storage. - [Client-side Encryption](../../storage/blobs/security-recommendations.md) also provides the feature of encryption at rest.-- [Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md) allows you to encrypt the OS disks and data disks used by an IaaS virtual machine.
+- [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md) allow you to encrypt the OS disks and data disks used by an IaaS virtual machine.
+
+For more information, see [Overview of managed disk encryption options](../../virtual-machines/disk-encryption-overview.md).
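For the Storage Service Encryption item above, a hedged sketch of moving an existing account to a customer-managed key (names are placeholders, and the storage account's managed identity is assumed to already have get/wrapKey/unwrapKey permissions on the vault):

```powershell
# Point Storage Service Encryption at a customer-managed key in Key Vault.
$rg      = 'myResourceGroup'      # placeholder
$account = 'mystorageaccount'     # placeholder
$vault   = Get-AzKeyVault    -VaultName 'myKeyVault' -ResourceGroupName $rg
$key     = Get-AzKeyVaultKey -VaultName 'myKeyVault' -Name 'mySseKey'

Set-AzStorageAccount -ResourceGroupName $rg -Name $account `
    -KeyvaultEncryption `
    -KeyName $key.Name -KeyVersion $key.Version -KeyVaultUri $vault.VaultUri
```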
#### Azure Disk Encryption
-[Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md) for virtual machines (VMs) helps you address organizational security and compliance requirements by encrypting your VM disks (including boot and data disks) with keys and policies you control in [Azure Key Vault](https://azure.microsoft.com/services/key-vault/).
+[Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md) help you address organizational security and compliance requirements by encrypting your VM disks (including boot and data disks) with keys and policies you control in [Azure Key Vault](https://azure.microsoft.com/services/key-vault/).
The Disk Encryption solution for Windows is based on [Microsoft BitLocker Drive Encryption](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732774(v=ws.11)), and the Linux solution is based on [dm-crypt](https://en.wikipedia.org/wiki/Dm-crypt).
security Operational Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-checklist.md
This checklist is intended to help enterprises think through various operational
|Checklist Category| Description| | | -- | | [<br>Security Roles & Access Controls](../../security-center/security-center-planning-and-operations-guide.md)|<ul><li>Use [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md) to assign user-specific permissions to users, groups, and applications at a certain scope (see the sketch after this checklist).</li></ul> |
-| [<br>Data Collection & Storage](../../storage/blobs/security-recommendations.md)|<ul><li>Use Management Plane Security to secure your Storage Account using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).</li><li>Data Plane Security to Securing Access to your Data using [Shared Access Signatures (SAS)](../../storage/common/storage-sas-overview.md) and Stored Access Policies.</li><li>Use Transport-Level Encryption ΓÇô Using HTTPS and the encryption used by [SMB (Server message block protocols) 3.0](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) for [Azure File Shares](../../storage/files/storage-dotnet-how-to-use-files.md).</li><li>Use [Client-side encryption](../../storage/common/storage-client-side-encryption.md) to secure data that you send to storage accounts when you require sole control of encryption keys. </li><li>Use [Storage Service Encryption (SSE)](../../storage/common/storage-service-encryption.md) to automatically encrypt data in Azure Storage, and [Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md) to encrypt virtual machine disk files for the OS and data disks.</li><li>Use Azure [Storage Analytics](/rest/api/storageservices/storage-analytics) to monitor authorization type; like with Blob Storage, you can see if users have used a Shared Access Signature or the storage account keys.</li><li>Use [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) to access storage resources from different domains.</li></ul> |
+| [<br>Data Collection & Storage](../../storage/blobs/security-recommendations.md)|<ul><li>Use Management Plane Security to secure your Storage Account using [Azure role-based access control (Azure RBAC)](../../role-based-access-control/role-assignments-portal.md).</li><li>Use Data Plane Security to secure access to your data using [Shared Access Signatures (SAS)](../../storage/common/storage-sas-overview.md) and Stored Access Policies.</li><li>Use Transport-Level Encryption – using HTTPS and the encryption used by [SMB (Server message block protocols) 3.0](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview) for [Azure File Shares](../../storage/files/storage-dotnet-how-to-use-files.md).</li><li>Use [Client-side encryption](../../storage/common/storage-client-side-encryption.md) to secure data that you send to storage accounts when you require sole control of encryption keys. </li><li>Use [Storage Service Encryption (SSE)](../../storage/common/storage-service-encryption.md) to automatically encrypt data in Azure Storage, and [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md) to encrypt virtual machine disk files for the OS and data disks.</li><li>Use Azure [Storage Analytics](/rest/api/storageservices/storage-analytics) to monitor authorization type; like with Blob Storage, you can see if users have used a Shared Access Signature or the storage account keys.</li><li>Use [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) to access storage resources from different domains.</li></ul> |
|[<br>Security Policies & Recommendations](../../security-center/security-center-planning-and-operations-guide.md)|<ul><li>Use [Microsoft Defender for Cloud](../../security-center/security-center-services.md#supported-endpoint-protection-solutions-) to deploy endpoint solutions.</li><li>Add a [web application firewall (WAF)](../../web-application-firewall/ag/ag-overview.md) to secure web applications.</li><li> Use a [firewall](../../sentinel/connect-data-sources.md) from a Microsoft partner to increase your security protections. </li><li>Apply security contact details for your Azure subscription, so that the [Microsoft Security Response Center (MSRC)](https://technet.microsoft.com/security/dn528958.aspx) contacts you if it discovers that your customer data has been accessed by an unlawful or unauthorized party.</li></ul> |
-| [<br>Identity & Access Management](identity-management-best-practices.md)|<ul><li>[Synchronize your on-premises directory with your cloud directory using Azure AD](../../active-directory/hybrid/whatis-hybrid-identity.md).</li><li>Use [Single Sign-On](https://azure.microsoft.com/resources/videos/overview-of-single-sign-on/) to enable users to access their SaaS applications based on their organizational account in Azure AD.</li><li>Use the [Password Reset Registration Activity](../../active-directory/authentication/howto-sspr-reporting.md) report to monitor the users that are registering.</li><li>Enable [multi-factor authentication (MFA)](../../active-directory/authentication/concept-mfa-howitworks.md) for users.</li><li>Developers to use secure identity capabilities for apps like [Microsoft Security Development Lifecycle (SDL)](https://www.microsoft.com/download/details.aspx?id=12379).</li><li>Actively monitor for suspicious activities by using Azure AD Premium anomaly reports and [Azure AD identity protection capability](../../active-directory/identity-protection/overview-identity-protection.md).</li></ul> |
+| [<br>Identity & Access Management](identity-management-best-practices.md)|<ul><li>[Synchronize your on-premises directory with your cloud directory using Azure AD](../../active-directory/hybrid/whatis-hybrid-identity.md).</li><li>Use [single sign-on](https://azure.microsoft.com/resources/videos/overview-of-single-sign-on/) to enable users to access their SaaS applications based on their organizational account in Azure AD.</li><li>Use the [Password Reset Registration Activity](../../active-directory/authentication/howto-sspr-reporting.md) report to monitor the users that are registering.</li><li>Enable [multi-factor authentication (MFA)](../../active-directory/authentication/concept-mfa-howitworks.md) for users.</li><li>Have developers use secure identity capabilities for apps like [Microsoft Security Development Lifecycle (SDL)](https://www.microsoft.com/download/details.aspx?id=12379).</li><li>Actively monitor for suspicious activities by using Azure AD Premium anomaly reports and [Azure AD identity protection capability](../../active-directory/identity-protection/overview-identity-protection.md).</li></ul> |
|[<br>Ongoing Security Monitoring](../../security-center/security-center-planning-and-operations-guide.md)|<ul><li>Use Malware Assessment Solution [Azure Monitor logs](../../azure-monitor/logs/log-query-overview.md) to report on the status of antimalware protection in your infrastructure.</li><li>Use [Update assessment](../../automation/update-management/overview.md) to determine the overall exposure to potential security problems, and whether or how critical these updates are for your environment.</li><li>The [Identity and Access](../../security-center/security-center-remediate-recommendations.md) provides you an overview of user </li><ul><li>user identity state,</li><li>number of failed attempts to sign in,</li><li> the user's accounts that were used during those attempts, accounts that were locked out</li> <li>accounts with changed or reset password </li><li>Current number of accounts that are logged in.</li></ul></ul> | | [<br>Microsoft Defender for Cloud detection capabilities](../../security-center/security-center-alerts-overview.md#detect-threats)|<ul><li>Use [detection capabilities](../../security-center/security-center-alerts-overview.md#detect-threats), to identify active threats targeting your Microsoft Azure resources.</li><li>Use [integrated threat intelligence](/archive/blogs/azuresecurity/get-threat-intelligence-reports-with-azure-security-center) that looks for known bad actors by leveraging global threat intelligence from Microsoft products and services, the [Microsoft Digital Crimes Unit (DCU)](https://www.microsoft.com/trustcenter/security/cybercrime), the [Microsoft Security Response Center (MSRC)](https://www.microsoft.com/msrc?rtc=1), and external feeds.</li><li>Use [Behavioral analytics](https://blogs.technet.microsoft.com/enterprisemobility/2016/06/30/ata-behavior-analysis-monitoring/) that applies known patterns to discover malicious behavior. </li><li>Use [Anomaly detection](/azure/machine-learning/studio-module-reference/anomaly-detection) that uses statistical profiling to build a historical baseline.</li></ul> | | [<br>Developer Operations (DevOps)](/azure/architecture/checklist/dev-ops)|<ul><li>[Infrastructure as Code (IaC)](../../azure-resource-manager/templates/syntax.md) is a practice, which enables the automation and validation of creation and teardown of networks and virtual machines to help with delivering secure, stable application hosting platforms.</li><li>[Continuous Integration and Deployment](/visualstudio/containers/overview#continuous-delivery-and-continuous-integration-cicd) drive the ongoing merging and testing of code, which leads to finding defects early. </li><li>[Release Management](/azure/devops/pipelines/overview?viewFallbackFrom=azure-devops) Manage automated deployments through each stage of your pipeline.</li><li>[App Performance Monitoring](../../azure-monitor/app/asp-net.md) of running applications including production environments for application health and customer usage help organizations form a hypothesis and quickly validate or disprove strategies.</li><li>Using [Load Testing & Auto-Scale](https://www.visualstudio.com/docs/test/performance-testing/getting-started/getting-started-with-performance-testing) we can find performance problems in our app to improve deployment quality and to make sure our app is always up or available to cater to the business needs.</li></ul> |
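The role-assignment sketch referenced in the checklist above, with placeholder principal, role, and resource names (any built-in role could be substituted):

```powershell
# Grant one user blob read access on a single storage account through Azure RBAC,
# instead of sharing account keys. All names are placeholders.
$rg      = 'myResourceGroup'
$account = Get-AzStorageAccount -ResourceGroupName $rg -Name 'mystorageaccount'
$user    = Get-AzADUser -UserPrincipalName 'alice@contoso.com'

New-AzRoleAssignment -ObjectId $user.Id `
    -RoleDefinitionName 'Storage Blob Data Reader' `
    -Scope $account.Id
```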
security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/overview.md
For many organizations, data encryption at rest is a mandatory step towards data
- [Client-side Encryption](../../storage/common/storage-client-side-encryption.md) also provides the feature of encryption at rest. -- [Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md) allows you to encrypt the OS disks and data disks used by an IaaS virtual machine.
+- [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md) allow you to encrypt the OS disks and data disks used by an IaaS virtual machine.
### Storage Analytics
Azure Firewall is offered in two SKUs: Standard and Premium. [Azure Firewall Sta
The ability to control routing behavior on your Azure Virtual Networks is a critical network security and access control capability. For example, if you want to make sure that all traffic to and from your Azure Virtual Network goes through that virtual security appliance, you need to be able to control and customize routing behavior. You can do this by configuring User-Defined Routes in Azure.
-[User-Defined Routes](../../virtual-network/virtual-networks-udr-overview.md#custom-routes) allow you to customize inbound and outbound paths for traffic moving into and out of individual virtual machines or subnets to insure the most secure route possible. [Forced tunneling](../../vpn-gateway/vpn-gateway-forced-tunneling-rm.md) is a mechanism you can use to ensure that your services are not allowed to initiate a connection to devices on the Internet.
+[User-Defined Routes](../../virtual-network/virtual-networks-udr-overview.md#custom-routes) allow you to customize inbound and outbound paths for traffic moving into and out of individual virtual machines or subnets to ensure the most secure route possible. [Forced tunneling](../../vpn-gateway/vpn-gateway-forced-tunneling-rm.md) is a mechanism you can use to ensure that your services are not allowed to initiate a connection to devices on the Internet.
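A minimal sketch of such a route, with placeholder names, address prefixes, and appliance IP address:

```powershell
# Send all traffic leaving the 'workload' subnet through a virtual appliance at 10.0.2.4.
$rg = 'myResourceGroup'   # placeholder

$route = New-AzRouteConfig -Name 'DefaultViaNva' -AddressPrefix '0.0.0.0/0' `
    -NextHopType 'VirtualAppliance' -NextHopIpAddress '10.0.2.4'

$table = New-AzRouteTable -Name 'myRouteTable' -ResourceGroupName $rg `
    -Location 'eastus' -Route $route

# Associate the route table with the subnet that should be forced through the appliance.
$vnet = Get-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName $rg
Set-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name 'workload' `
    -AddressPrefix '10.0.1.0/24' -RouteTable $table | Set-AzVirtualNetwork
```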
This is different from being able to accept incoming connections and then responding to them. Front-end web servers need to respond to requests from Internet hosts, and so Internet-sourced traffic is allowed inbound to these web servers and the web servers can respond.
The Azure Key Vault (AKV) service is designed to improve the security and manage
If you are running SQL Server with on-premises machines, there are steps you can follow to access Azure Key Vault from your on-premises SQL Server instance. But for SQL Server in Azure VMs, you can save time by using the Azure Key Vault Integration feature. With a few Azure PowerShell cmdlets to enable this feature, you can automate the configuration necessary for a SQL VM to access your key vault.
### VM Disk Encryption
-[Azure Disk Encryption](./azure-disk-encryption-vms-vmss.md) is a new capability that helps you encrypt your Windows and Linux IaaS virtual machine disks. It applies the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to help you control and manage the disk-encryption keys and secrets in your Key Vault subscription. The solution also ensures that all data on the virtual machine disks are encrypted at rest in your Azure storage.
+[Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md) help you encrypt your IaaS virtual machine disks. They apply the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to help you control and manage the disk-encryption keys and secrets in your Key Vault subscription. The solution also ensures that all data on the virtual machine disks is encrypted at rest in your Azure storage.
### Virtual networking
-Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be connected to an Azure Virtual Network. An Azure Virtual Network is a logical construct built on top of the physical Azure network fabric. Each logical [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) is isolated from all other Azure Virtual Networks. This isolation helps insure that network traffic in your deployments is not accessible to other Microsoft Azure customers.
+Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be connected to an Azure Virtual Network. An Azure Virtual Network is a logical construct built on top of the physical Azure network fabric. Each logical [Azure Virtual Network](../../virtual-network/virtual-networks-overview.md) is isolated from all other Azure Virtual Networks. This isolation helps ensure that network traffic in your deployments is not accessible to other Microsoft Azure customers.
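Illustrative only (names and address prefixes are placeholders), a virtual network and subnet created with Azure PowerShell:

```powershell
# Create an isolated virtual network with a single subnet for the VMs.
$subnet = New-AzVirtualNetworkSubnetConfig -Name 'workload' -AddressPrefix '10.0.1.0/24'

New-AzVirtualNetwork -Name 'myVNet' -ResourceGroupName 'myResourceGroup' `
    -Location 'eastus' -AddressPrefix '10.0.0.0/16' -Subnet $subnet
```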
### Patch Updates
Patch Updates provide the basis for finding and fixing potential problems and simplify the software update management process, both by reducing the number of software updates you must deploy in your enterprise and by increasing your ability to monitor compliance.
Microsoft uses multiple security practices and technologies across its products
| Free / Common Features | Basic Features | Premium P1 Features | Premium P2 Features | Azure Active Directory Join – Windows 10 only related features |
| :- | :- | :- | :- | :- |
-| [Directory Objects](../../active-directory/fundamentals/active-directory-whatis.md), [User/Group Management (add/update/delete)/ User-based provisioning, Device registration](../../active-directory/fundamentals/active-directory-whatis.md), [Single Sign-On (SSO)](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Change for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Connect (Sync engine that extends on-premises directories to Azure Active Directory)](../../active-directory/fundamentals/active-directory-whatis.md), [Security / Usage Reports](../../active-directory/fundamentals/active-directory-whatis.md) | [Group-based access management / provisioning](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Company Branding (Logon Pages/Access Panel customization)](../../active-directory/fundamentals/active-directory-whatis.md), [Application Proxy](../../active-directory/fundamentals/active-directory-whatis.md), [SLA 99.9%](../../active-directory/fundamentals/active-directory-whatis.md) | [Self-Service Group and app Management/Self-Service application additions/Dynamic Groups](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset/Change/Unlock with on-premises write-back](../../active-directory/fundamentals/active-directory-whatis.md), [Multi-Factor Authentication (Cloud and On-premises (MFA Server))](../../active-directory/fundamentals/active-directory-whatis.md), [MIM CAL + MIM Server](../../active-directory/fundamentals/active-directory-whatis.md), [Cloud App Discovery](../../active-directory/fundamentals/active-directory-whatis.md), [Connect Health](../../active-directory/fundamentals/active-directory-whatis.md), [Automatic password rollover for group accounts](../../active-directory/fundamentals/active-directory-whatis.md)| [Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md), [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md)| [Join a device to Azure AD, Desktop SSO, Microsoft Passport for Azure AD, Administrator BitLocker recovery](../../active-directory/fundamentals/active-directory-whatis.md), [MDM auto-enrollment, Self-Service BitLocker recovery, Additional local administrators to Windows 10 devices via Azure AD Join](../../active-directory/fundamentals/active-directory-whatis.md)|
+| [Directory Objects](../../active-directory/fundamentals/active-directory-whatis.md), [User/Group Management (add/update/delete)/ User-based provisioning, Device registration](../../active-directory/fundamentals/active-directory-whatis.md), [single sign-on (SSO)](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Change for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Connect (Sync engine that extends on-premises directories to Azure Active Directory)](../../active-directory/fundamentals/active-directory-whatis.md), [Security / Usage Reports](../../active-directory/fundamentals/active-directory-whatis.md) | [Group-based access management / provisioning](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset for cloud users](../../active-directory/fundamentals/active-directory-whatis.md), [Company Branding (Logon Pages/Access Panel customization)](../../active-directory/fundamentals/active-directory-whatis.md), [Application Proxy](../../active-directory/fundamentals/active-directory-whatis.md), [SLA 99.9%](../../active-directory/fundamentals/active-directory-whatis.md) | [Self-Service Group and app Management/Self-Service application additions/Dynamic Groups](../../active-directory/fundamentals/active-directory-whatis.md), [Self-Service Password Reset/Change/Unlock with on-premises write-back](../../active-directory/fundamentals/active-directory-whatis.md), [Multi-Factor Authentication (Cloud and On-premises (MFA Server))](../../active-directory/fundamentals/active-directory-whatis.md), [MIM CAL + MIM Server](../../active-directory/fundamentals/active-directory-whatis.md), [Cloud App Discovery](../../active-directory/fundamentals/active-directory-whatis.md), [Connect Health](../../active-directory/fundamentals/active-directory-whatis.md), [Automatic password rollover for group accounts](../../active-directory/fundamentals/active-directory-whatis.md)| [Identity Protection](../../active-directory/identity-protection/overview-identity-protection.md), [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md)| [Join a device to Azure AD, Desktop SSO, Microsoft Passport for Azure AD, Administrator BitLocker recovery](../../active-directory/fundamentals/active-directory-whatis.md), [MDM auto-enrollment, Self-Service BitLocker recovery, Additional local administrators to Windows 10 devices via Azure AD Join](../../active-directory/fundamentals/active-directory-whatis.md)|
- [Cloud App Discovery](/cloud-app-security/set-up-cloud-discovery) is a premium feature of Azure Active Directory that enables you to identify cloud applications that are used by the employees in your organization.
security Steps Secure Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/steps-secure-identity.md
Previously updated : 05/01/2022 Last updated : 08/17/2022
security Virtual Machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/virtual-machines-overview.md
The solution is integrated with Azure Key Vault to help you control and manage t
Learn more:
-* [Azure Disk Encryption for IaaS VMs](./azure-disk-encryption-vms-vmss.md)
+* [Azure Disk Encryption for Linux VMs](../../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption for Windows VMs](../../virtual-machines/windows/disk-encryption-overview.md)
* [Quickstart: Encrypt a Windows IaaS VM with Azure PowerShell](../../virtual-machines/windows/disk-encryption-powershell-quickstart.md)
## Virtual machine backup
Site Recovery:
* **Simplifies your BCDR strategy**: Site Recovery makes it easy to handle replication, failover, and recovery of multiple business workloads and apps from a single location. Site Recovery orchestrates replication and failover but doesn't intercept your application data or have any information about it. * **Provides flexible replication**: By using Site Recovery, you can replicate workloads running on Hyper-V virtual machines, VMware virtual machines, and Windows/Linux physical servers.
-* **Supports failover and recovery**: Site Recovery provides test failovers to support disaster recovery drills without affecting production environments. You can also run planned failovers with a zero-data loss for expected outages, or unplanned failovers with minimal data loss (depending on replication frequency) for unexpected disasters. After failover, you can fail back to your primary sites. Site Recovery provides recovery plans that can include scripts and Azure automation workbooks so that you can customize failover and recovery of multi-tier applications.
+* **Supports failover and recovery**: Site Recovery provides test failovers to support disaster recovery drills without affecting production environments. You can also run planned failovers with a zero-data loss for expected outages, or unplanned failovers with minimal data loss (depending on replication frequency) for unexpected disasters. After failover, you can fail back to your primary sites. Site Recovery provides recovery plans that can include scripts and Azure Automation workbooks so that you can customize failover and recovery of multi-tier applications.
* **Eliminates secondary datacenters**: You can replicate to a secondary on-premises site, or to Azure. Using Azure as a destination for disaster recovery eliminates the cost and complexity of maintaining a secondary site. Replicated data is stored in Azure Storage. * **Integrates with existing BCDR technologies**: Site Recovery partners with other applications' BCDR features. For example, you can use Site Recovery to help protect the SQL Server back end of corporate workloads. This includes native support for SQL Server Always On to manage the failover of availability groups.
Learn more:
Virtual machines need network connectivity. To support that requirement, Azure requires virtual machines to be connected to an Azure virtual network.
-An Azure virtual network is a logical construct built on top of the physical Azure network fabric. Each logical Azure virtual network is isolated from all other Azure virtual networks. This isolation helps insure that network traffic in your deployments is not accessible to other Microsoft Azure customers.
+An Azure virtual network is a logical construct built on top of the physical Azure network fabric. Each logical Azure virtual network is isolated from all other Azure virtual networks. This isolation helps ensure that network traffic in your deployments is not accessible to other Microsoft Azure customers.
Learn more:
service-fabric How To Managed Cluster Dedicated Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-dedicated-hosts.md
Before you begin:
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) * Retrieve a managed cluster ARM template. Sample Resource Manager templates are available in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template. This guide shows how to deploy a Standard SKU cluster with two node types and 12 nodes.
-* The user needs to have Microsoft.Authorization/roleAssignments/write permissions to the host group such as [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) or [Owner](/azure/role-based-access-control/built-in-roles#owner) to do role assignments in a host group. For more information, see [Assign Azure roles using the Azure portal - Azure RBAC](/azure/role-based-access-control/role-assignments-portal?tabs=current#prerequisites).
+* The user needs to have Microsoft.Authorization/roleAssignments/write permissions to the host group such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner) to do role assignments in a host group. For more information, see [Assign Azure roles using the Azure portal - Azure RBAC](../role-based-access-control/role-assignments-portal.md?tabs=current#prerequisites).
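A hedged Azure PowerShell sketch of the host-group side of these prerequisites; resource names are placeholders, and the object ID of the Service Fabric Resource Provider principal must be looked up in your own tenant:

```powershell
# Create a dedicated host group with the five fault domains that Service Fabric
# managed clusters require, then grant an identity Contributor on it.
$rg = 'myResourceGroup'   # placeholder

$hostGroup = New-AzHostGroup -Name 'myHostGroup' -ResourceGroupName $rg `
    -Location 'eastus' -PlatformFaultDomain 5

$sfrpObjectId = '<object ID of the Service Fabric Resource Provider principal>'   # placeholder
New-AzRoleAssignment -ObjectId $sfrpObjectId `
    -RoleDefinitionName 'Contributor' `
    -Scope $hostGroup.Id
```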
## Review the template
The template used in this guide is from [Azure Samples - Service Fabric cluster
## Create a client certificate
Service Fabric managed clusters use a client certificate as a key for access control. If you already have a client certificate that you would like to use for access control to your cluster, you can skip this step.
-If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
+If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](../key-vault/certificates/quick-create-portal.md). Note the certificate thumbprint as it will be required to deploy the template in the next step.
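If you prefer PowerShell over the portal steps, a minimal sketch (vault, certificate, and subject names are placeholders) of creating a self-signed certificate in Key Vault and reading its thumbprint:

```powershell
# Create a self-signed client certificate in Key Vault and print its thumbprint.
$vault = 'mySfKeyVault'   # placeholder
$cert  = 'sfmc-client'    # placeholder

$policy = New-AzKeyVaultCertificatePolicy -SubjectName 'CN=sfmc-client' `
    -IssuerName 'Self' -ValidityInMonths 12 -SecretContentType 'application/x-pkcs12'

Add-AzKeyVaultCertificate -VaultName $vault -Name $cert -CertificatePolicy $policy

# Creation is asynchronous; read the thumbprint once the operation has completed.
(Get-AzKeyVaultCertificate -VaultName $vault -Name $cert).Thumbprint
```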
## Deploy Dedicated Host resources and configure access to Service Fabric Resource Provider
Create a dedicated host group and add a role assignment to the host group with t
> * Each fault domain needs a dedicated host to be placed in it and Service Fabric managed clusters require five fault domains. Therefore, at least five dedicated hosts should be present in each dedicated host group.
-3. The [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH) used in the previous step also adds a role assignment to the host group with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](/azure/role-based-access-control/built-in-roles#all). This role assignment is defined in the resources section of template with Principal ID determined from the first step and a role definition ID.
+3. The [sample ARM deployment template for Dedicated Host Group](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-ADH) used in the previous step also adds a role assignment to the host group with contributor access. For more information on Azure roles, see [Azure built-in roles - Azure RBAC](../role-based-access-control/built-in-roles.md#all). This role assignment is defined in the resources section of the template, with the principal ID determined in the first step and a role definition ID.
```JSON "variables": {
Create an Azure Service Fabric managed cluster with node type(s) configured to r
* Cluster Name: Enter a unique name for your cluster, such as mysfcluster. * Admin Username: Enter a name for the admin to be used for RDP on the underlying VMs in the cluster. * Admin Password: Enter a password for the admin to be used for RDP on the underlying VMs in the cluster.
- * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
+ * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](../key-vault/certificates/quick-create-portal.md) to create a self-signed certificate.
* Node Type Name: Enter a unique name for your node type, such as nt1. 3. Deploy an ARM template through one of the methods below:
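One of those methods, sketched with Azure PowerShell; the file name and parameter names are placeholders that mirror the prompts above and may differ from the sample template's exact parameter names:

```powershell
# Deploy the managed cluster template with the values gathered above.
New-AzResourceGroup -Name 'sfmc-rg' -Location 'eastus'

New-AzResourceGroupDeployment -ResourceGroupName 'sfmc-rg' `
    -TemplateFile '.\AzureDeploy.json' `
    -TemplateParameterObject @{
        clusterName                 = 'mysfcluster'
        adminUserName               = 'sfadmin'
        adminPassword               = (Read-Host -AsSecureString -Prompt 'Admin password')
        clientCertificateThumbprint = '<client certificate thumbprint>'
        nodeTypeName                = 'nt1'
    }
```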
Create an Azure Service Fabric managed cluster with node type(s) configured to r
``` ## Next steps > [!div class="nextstepaction"]
-> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
+> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
service-fabric How To Managed Cluster Ephemeral Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-ephemeral-os-disks.md
Ephemeral OS disks work well where applications are tolerant of individual VM fa
This article describes how to create Service Fabric managed cluster node types with Ephemeral OS disks by using an Azure Resource Manager template (ARM template).
## Prerequisites
-This guide builds upon the managed cluster quick start guide: [Deploy a Service Fabric managed cluster using Azure Resource Manager](/azure/service-fabric/quickstart-managed-cluster-template)
+This guide builds upon the managed cluster quick start guide: [Deploy a Service Fabric managed cluster using Azure Resource Manager](./quickstart-managed-cluster-template.md)
Before you begin: * If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free) * Retrieve a managed cluster ARM template. Sample Resource Manager templates are available in the [Azure samples on GitHub](https://github.com/Azure-Samples/service-fabric-cluster-templates). These templates can be used as a starting point for your cluster template. * Ephemeral OS disks are supported both for primary and secondary node type. This guide shows how to deploy a Standard SKU cluster with two node types - a primary and a secondary node type, which uses Ephemeral OS disk.
-* Ephemeral OS disks aren't supported for every SKU. VM sizes such as DSv1, DSv2, DSv3, Esv3, Fs, FsV2, GS, M, Mdsv2, Bs, Dav4, Eav4 supports Ephemeral OS disks. Ensure the SKU with which you want to deploy supports Ephemeral OS disk. For more information on individual SKU, see [supported VM SKU](/azure/virtual-machines/dv3-dsv3-series) and navigate to the desired SKU on left side pane.
+* Ephemeral OS disks aren't supported for every SKU. VM sizes such as DSv1, DSv2, DSv3, Esv3, Fs, FsV2, GS, M, Mdsv2, Bs, Dav4, and Eav4 support Ephemeral OS disks. Ensure that the SKU you want to deploy with supports Ephemeral OS disks. For more information on individual SKUs, see [supported VM SKU](../virtual-machines/dv3-dsv3-series.md) and navigate to the desired SKU in the left pane.
* Ephemeral OS disks in Service Fabric are placed in the space for temporary disks for the VM SKU. Ensure the VM SKU you're using has more than 127 GiB of temporary disk space to place Ephemeral OS disk.
## Review the template
The template used in this guide is from [Azure Samples - Service Fabric cluster
## Create a client certificate
Service Fabric managed clusters use a client certificate as a key for access control. If you already have a client certificate that you would like to use for access control to your cluster, you can skip this step.
-If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](/azure/key-vault/certificates/quick-create-portal). Note the certificate thumbprint as it will be required to deploy the template in the next step.
+If you need to create a new client certificate, follow the steps in [set and retrieve a certificate from Azure Key Vault](../key-vault/certificates/quick-create-portal.md). Note the certificate thumbprint as it will be required to deploy the template in the next step.
## Deploy the template
If you need to create a new client certificate, follow the steps in [set and ret
* Cluster Name: Enter a unique name for your cluster, such as mysfcluster. * Admin Username: Enter a name for the admin to be used for RDP on the underlying VMs in the cluster. * Admin Password: Enter a password for the admin to be used for RDP on the underlying VMs in the cluster.
- * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](/azure/key-vault/certificates/quick-create-portal) to create a self-signed certificate.
+ * Client Certificate Thumbprint: Provide the thumbprint of the client certificate that you would like to use to access your cluster. If you don't have a certificate, follow [set and retrieve a certificate](../key-vault/certificates/quick-create-portal.md) to create a self-signed certificate.
* Node Type Name: Enter a unique name for your node type, such as nt1.
A node type can only be configured to use Ephemeral OS disk at the time of creat
## Next steps > [!div class="nextstepaction"]
-> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
+> [Read about Service Fabric managed cluster configuration options](how-to-managed-cluster-configuration.md)
service-fabric Service Fabric Sfctl Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-sfctl-node.md
This API allows removing all existing configuration overrides on the specified node.
## sfctl node remove-state
Notifies Service Fabric that the persisted state on a node has been permanently removed or lost.
-This implies that it is not possible to recover the persisted state of that node. This generally happens if a hard disk has been wiped clean, or if a hard disk crashes. The node has to be down for this operation to be successful. This operation lets Service Fabric know that the replicas on that node no longer exist, and that Service Fabric should stop waiting for those replicas to come back up. Do not run this cmdlet if the state on the node has not been removed and the node can come back up with its state intact. Starting from Service Fabric 6.5, in order to use this API for seed nodes, please change the seed nodes to regular (non-seed) nodes and then invoke this API to remove the node state. If the cluster is running on Azure, after the seed node goes down, Service Fabric will try to change it to a non-seed node automatically. To make this happen, make sure the number of non-seed nodes in the primary node type is no less than the number of Down seed nodes. If necessary, add more nodes to the primary node type to achieve this. For standalone cluster, if the Down seed node is not expected to come back up with its state intact, please remove the node from the cluster. For more information, see [Add or remove nodes to a standalone Service Fabric cluster running on Windows Server](/azure/service-fabric/service-fabric-cluster-windows-server-add-remove-nodes).
+This implies that it is not possible to recover the persisted state of that node. This generally happens if a hard disk has been wiped clean, or if a hard disk crashes. The node has to be down for this operation to be successful. This operation lets Service Fabric know that the replicas on that node no longer exist, and that Service Fabric should stop waiting for those replicas to come back up. Do not run this cmdlet if the state on the node has not been removed and the node can come back up with its state intact. Starting from Service Fabric 6.5, in order to use this API for seed nodes, please change the seed nodes to regular (non-seed) nodes and then invoke this API to remove the node state. If the cluster is running on Azure, after the seed node goes down, Service Fabric will try to change it to a non-seed node automatically. To make this happen, make sure the number of non-seed nodes in the primary node type is no less than the number of Down seed nodes. If necessary, add more nodes to the primary node type to achieve this. For standalone cluster, if the Down seed node is not expected to come back up with its state intact, please remove the node from the cluster. For more information, see [Add or remove nodes to a standalone Service Fabric cluster running on Windows Server](./service-fabric-cluster-windows-server-add-remove-nodes.md).
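For readers who use the Service Fabric PowerShell module rather than sfctl, a roughly equivalent sketch (cluster endpoint and node name are placeholders):

```powershell
# PowerShell equivalent of `sfctl node remove-state`: tell the cluster that the
# down node's persisted state is permanently gone.
Connect-ServiceFabricCluster -ConnectionEndpoint 'mycluster.eastus.cloudapp.azure.com:19000'

Remove-ServiceFabricNodeState -NodeName '_nt1_2'
```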
### Arguments
Gets the progress of an operation started with StartNodeTransition using the pro
## Next steps - [Setup](service-fabric-cli.md) the Service Fabric CLI.-- Learn how to use the Service Fabric CLI using the [sample scripts](./scripts/sfctl-upgrade-application.md).
+- Learn how to use the Service Fabric CLI using the [sample scripts](./scripts/sfctl-upgrade-application.md).
site-recovery How To Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/how-to-move-from-classic-to-modernized-vmware-disaster-recovery.md
Last updated 07/15/2022
# How to move from classic to modernized VMware disaster recovery
-This article provides information about how you can move/migrate your VMware replications from [classic](/azure/site-recovery/vmware-azure-architecture) to [modernized](/azure/site-recovery/vmware-azure-architecture-preview) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism which ensures that the complete initial replication is not performed again for non-critical replicated items, and only the differential data is transferred.
+This article provides information about how you can move/migrate your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-preview.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism which ensures that the complete initial replication is not performed again for non-critical replicated items, and only the differential data is transferred.
> [!Note] > - Movement of physical servers to modernized architecture is not yet supported.  
site-recovery Move From Classic To Modernized Vmware Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/move-from-classic-to-modernized-vmware-disaster-recovery.md
Last updated 07/15/2022
# Move from classic to modernized VMware disaster recovery  
-This article provides information about the architecture, necessary infrastructure, and FAQs about moving your VMware replications from [classic](/azure/site-recovery/vmware-azure-architecture) to [modernized](/azure/site-recovery/vmware-azure-architecture-preview) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism, which ensures that complete initial replication isn't performed again for non-critical replicated items, and only the differential data is transferred.
+This article provides information about the architecture, necessary infrastructure, and FAQs about moving your VMware replications from [classic](./vmware-azure-architecture.md) to [modernized](./vmware-azure-architecture-preview.md) protection architecture. With this capability to migrate, you can successfully transfer your replicated items from a configuration server to an Azure Site Recovery replication appliance. This migration is guided by a smart replication mechanism, which ensures that complete initial replication isn't performed again for non-critical replicated items, and only the differential data is transferred.
> [!Note] > - Movement of physical servers to modernized architecture is not yet supported.  
The components involved in the migration of replicated items of a VMware machine
Ensure the following for a successful movement of a replicated item: - A Recovery Services vault using the modernized experience. >[!Note]
- >Any new Recovery Services vault created will have the modernized experience switched on by default. You can [switch to the classic experience](/azure/site-recovery/vmware-azure-common-questions#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-preview-experience) but once done, you canΓÇÖt switch again. ΓÇ»
-- An [Azure Site Recovery replication appliance](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview), which has been successfully registered to the vault, and all its components are in a non-critical state.  
+ >Any new Recovery Services vault created will have the modernized experience switched on by default. You can [switch to the classic experience](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-preview-experience) but once done, you can't switch again.
+- An [Azure Site Recovery replication appliance](./deploy-vmware-azure-replication-appliance-preview.md), which has been successfully registered to the vault, and all its components are in a non-critical state.  
- The version of the appliance must be 9.50 or later. For a detailed version description, check [here](#architecture). - The vCenter server or vSphere host’s details, where the existing replicated machines reside, are added to the appliance for the on-premises discovery to be successful.  
Ensure the following for a successful movement of replicated item:
Ensure the following before you move from classic architecture to modernized architecture: -- [Create a Recovery Services vault](/azure/site-recovery/azure-to-azure-tutorial-enable-replication#create-a-recovery-services-vault) and ensure the experience has [not been switched to classic](/azure/site-recovery/vmware-azure-common-questions#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-preview-experience). -- [Deploy an Azure Site Recovery replication appliance](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview). -- [Add the on-premises machine’s vCenter Server details](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview) to the appliance, so that it successfully performs discovery.  
+- [Create a Recovery Services vault](./azure-to-azure-tutorial-enable-replication.md#create-a-recovery-services-vault) (see the sketch after this list) and ensure the experience has [not been switched to classic](./vmware-azure-common-questions.md#how-do-i-use-the-classic-experience-in-the-recovery-services-vault-rather-than-the-preview-experience).
+- [Deploy an Azure Site Recovery replication appliance](./deploy-vmware-azure-replication-appliance-preview.md).
+- [Add the on-premises machine’s vCenter Server details](./deploy-vmware-azure-replication-appliance-preview.md) to the appliance, so that it successfully performs discovery.  
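A minimal sketch of the first prerequisite in the list above (resource names and region are placeholders):

```powershell
# Create the Recovery Services vault that will use the modernized experience.
New-AzResourceGroup -Name 'asr-modernized-rg' -Location 'eastus'

New-AzRecoveryServicesVault -Name 'asr-modernized-vault' `
    -ResourceGroupName 'asr-modernized-rg' -Location 'eastus'
```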
### Prepare classic Recovery Services vault  
For the modernized architecture setup, ensure that:  
- The appliance and all its components are in a non-critical state and the appliance has a healthy heartbeat. - The vCenter Server version is supported by the modernized architecture. - The vCenter Server details of the source machine are added to the appliance. -- The Linux distro version is supported by the modernized architecture. [Learn more](/azure/site-recovery/vmware-physical-azure-support-matrix#for-linux). -- The Windows Server version is supported by the modernized architecture. [Learn more](/azure/site-recovery/vmware-physical-azure-support-matrix#for-windows).
+- The Linux distro version is supported by the modernized architecture. [Learn more](./vmware-physical-azure-support-matrix.md#for-linux).
+- The Windows Server version is supported by the modernized architecture. [Learn more](./vmware-physical-azure-support-matrix.md#for-windows).
## Calculate total time to move
The same formula will be used to calculate time for migration and is shown on th
## How to define required infrastructure
-When migrating machines from classic to modernized architecture, you will need to make sure that the required infrastructure has already been registered in the modernized Recovery Services vault. Refer to the replication applianceΓÇÖs [sizing and capacity details](/azure/site-recovery/deploy-vmware-azure-replication-appliance-preview#sizing-and-capacity) to help define the required infrastructure.
+When migrating machines from classic to modernized architecture, you will need to make sure that the required infrastructure has already been registered in the modernized Recovery Services vault. Refer to the replication appliance's [sizing and capacity details](./deploy-vmware-azure-replication-appliance-preview.md#sizing-and-capacity) to help define the required infrastructure.
As a rule, you should set up the same number of replication appliances as the number of process servers in your classic Recovery Services vault. In the classic vault, if there was one configuration server and four process servers, then you should set up four replication appliances in the modernized Recovery Services vault.
site-recovery Physical Manage Configuration Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/physical-manage-configuration-server.md
Upgrade the server as follows:
* Microsoft Azure Site Recovery Provider * Microsoft Azure Site Recovery Configuration Server/Process Server * Microsoft Azure Site Recovery Configuration Server Dependencies
- * MySQL Server 5.5
+ * MySQL Server 5.7
4. Run the following command from an admin command prompt. ``` reg delete "HKLM\Software\Microsoft\Azure Site Recovery\Registration"
site-recovery Site Recovery Runbook Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-runbook-automation.md
Aman Sharma's blog over at [Harvesting Clouds](http://harvestingclouds.com) has
## Before you start -- If you're new to Azure Automation, you can [sign up](https://azure.microsoft.com/services/automation/) and [download sample scripts](https://azure.microsoft.com/documentation/scripts/). For more information, see [Automation runbooks - known issues and limitations](/azure/automation/automation-runbook-types#powershell-runbooks).
+- If you're new to Azure Automation, you can [sign up](https://azure.microsoft.com/services/automation/) and [download sample scripts](https://azure.microsoft.com/documentation/scripts/). For more information, see [Automation runbooks - known issues and limitations](../automation/automation-runbook-types.md#powershell-runbooks).
- Ensure that the Automation account has the following modules: - AzureRM.profile - AzureRM.Resources
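A small sketch of verifying those modules with Azure PowerShell; the Automation account and resource group names are placeholders:

```powershell
# Confirm that the Automation account contains the modules listed above.
'AzureRM.profile', 'AzureRM.Resources' | ForEach-Object {
    Get-AzAutomationModule -ResourceGroupName 'automation-rg' `
        -AutomationAccountName 'asr-automation' -Name $_ |
        Select-Object Name, Version, ProvisioningState
}
```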
This video provides another example. It demonstrates how to recover a two-tier W
- Learn about an [Azure Automation Run As account](../automation/manage-runas-account.md) - Review [Azure Automation sample scripts](https://gallery.technet.microsoft.com/scriptcenter/site/search?f%5B0%5D.Type=User&f%5B0%5D.Value=SC%20Automation%20Product%20Team&f%5B0%5D.Text=SC%20Automation%20Product%20Team).-- [Learn more](site-recovery-failover.md) about running failovers.
+- [Learn more](site-recovery-failover.md) about running failovers.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Machine name | Ensure that the display name of machine does not fall into [Azure
### For Windows
+> [!NOTE]
+> Ensure that 500 MB of free space is available in the installation folder on both the on-premises machine and the Azure machine.
+ **Operating system** | **Details** | Windows Server 2022 | Supported from [Update rollup 59](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) (version 9.46 of the Mobility service) onwards.
Guest operating system architecture | 64-bit. | Check fails if unsupported.
Operating system disk size | Up to 2,048 GB for Generation 1 machines. <br> Up to 4,095 GB for Generation 2 machines. | Check fails if unsupported. Operating system disk count | 1 </br> boot and system partition on different disks is not supported | Check fails if unsupported. Data disk count | 64 or less. | Check fails if unsupported.
-Data disk size | Up to 32 TB when replicating to managed disk (9.41 version onwards)<br> Up to 4 TB when replicating to storage account </br> Each premium storage account can host up to 35 TB of data </br> Minimum disk size requirement - at least 1 GB </br> Preview architecture supports disks up to 8 TB. | Check fails if unsupported.
+Data disk size | Up to 32 TB when replicating to managed disk (9.41 version onwards)<br> Up to 4 TB when replicating to storage account </br> Each premium storage account can host up to 35 TB of data </br> Minimum disk size requirement - at least 1 GB
RAM | Site Recovery driver consumes 6% of RAM. Network adapters | Multiple adapters are supported. | Shared VHD | Not supported. | Check fails if unsupported.
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
Previously updated : 07/28/2022 Last updated : 08/18/2022
# Blob Storage feature support in Azure Storage accounts
-This article shows whether a feature is fully supported, supported at the preview level, or is not yet supported. Support levels are impacted by storage account type, and whether certain capabilities or protocols are enabled on the account.
+Feature support is impacted by the type of account that you create and the settings that you enable on that account. You can use the tables in this article to assess feature support based on these factors. The items that appear in these tables will change over time as support continues to expand.
-The items that appear in these tables will change over time as support continues to expand.
+## How to use these tables
+
+Each table uses the following icons to indicate support level:
+
+| Icon | Description |
+||-|
+| &#x2705; | Fully supported |
+| &#x1F7E6; | Supported at the preview level |
+| &nbsp;&#x2B24; | Not _yet_ supported |
+
+Each table describes the impact of **enabling** a capability and not the specific use of that capability. For example, if you enable the Network File System (NFS) 3.0 protocol but never use the NFS 3.0 protocol to upload a blob, a check mark in the **NFS** column indicates that feature support is not negatively impacted by merely enabling NFS 3.0 support.
+
+Even though a feature might not be negatively impacted, it might not be compatible when used with a specific capability. For example, enabling NFS 3.0 has no impact on Azure Active Directory (Azure AD) authorization. However, you can't use Azure AD to authorize an NFS 3.0 request. See any of these articles for information about known limitations:
+
+- [Known issues: Hierarchical namespace capability](data-lake-storage-known-issues.md)
+
+- [Known issues: Network File System (NFS) 3.0 protocol](network-file-system-protocol-known-issues.md)
+
+- [Known issues: SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-known-issues.md)
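As a hedged illustration of enabling these capabilities when a general-purpose v2 account is created (names are placeholders; NFS 3.0 additionally requires network rules that are omitted here, and `-EnableSftp` needs a recent Az.Storage module):

```powershell
# Create a general-purpose v2 account with a hierarchical namespace and SFTP enabled.
New-AzStorageAccount -ResourceGroupName 'myResourceGroup' -Name 'mydatalakeacct' `
    -Location 'eastus' -SkuName 'Standard_LRS' -Kind 'StorageV2' `
    -EnableHierarchicalNamespace $true `
    -EnableSftp $true
```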
## Standard general-purpose v2 accounts
-| Storage feature | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
+The following table describes whether a feature is supported in a standard general-purpose v2 account when you enable a hierarchical namespace (HNS), NFS 3.0 protocol, or SFTP.
+
+> [!IMPORTANT]
+> This table describes the impact of **enabling** HNS, NFS, or SFTP and not the specific use of those capabilities.
+
+| Storage feature | Default | HNS | NFS | SFTP |
||-|||--|
-| [Access tier - archive](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Access tier - cool](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) |
-| [Access tier - hot](access-tiers-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Anonymous public access](anonymous-read-access-configure.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)| ![Yes](../media/icons/yes-icon.png) |
-| [Azure Active Directory security](authorize-access-azure-active-directory.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blob inventory](blob-inventory.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Blob index tags](storage-manage-find-blobs.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blob snapshots](snapshots-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blob Storage APIs](reference.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) <sup>3</sup>| ![Yes](../media/icons/yes-icon.png) <sup>3</sup> |
-| [Blob Storage Azure CLI commands](storage-quickstart-blobs-cli.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Blob Storage events](storage-blob-event-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Blob Storage PowerShell commands](storage-quickstart-blobs-powershell.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Blob versioning](versioning-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blobfuse](storage-how-to-mount-container-linux.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Change feed](storage-blob-change-feed.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Custom domains](storage-custom-domain-name.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Encryption scopes](encryption-scope-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Immutable storage](immutable-storage-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) |![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>2</sup> Feature is supported at the preview level.
-
-<sup>3</sup> See [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md). These issues apply to all accounts that have the hierarchical namespace feature enabled.
+| [Access tier - archive](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Access tier - cool](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; |
+| [Access tier - hot](access-tiers-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Anonymous public access](anonymous-read-access-configure.md) | &#x2705; | &#x2705; | &#x2705;| &#x2705; |
+| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
+| [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Blob index tags](storage-manage-find-blobs.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Blob snapshots](snapshots-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Blob Storage APIs](reference.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Blob Storage Azure CLI commands](storage-quickstart-blobs-cli.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Blob Storage events](storage-blob-event-overview.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Blob Storage PowerShell commands](storage-quickstart-blobs-powershell.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Blob versioning](versioning-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; |
+| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
+| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+
+<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Azure Active Directory (AD) security.
+
+<sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
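Because every column in these tables hinges on which capabilities you turn on when you create the account, it can help to see where those switches live. The following PowerShell sketch is not from the article: it creates a general-purpose v2 account with a hierarchical namespace and SFTP enabled, assumes a recent Az.Storage module for the `-EnableHierarchicalNamespace` and `-EnableSftp` parameters, and uses placeholder resource names. Enabling NFS 3.0 has extra networking and secure-transfer requirements that aren't shown here.

```powershell
# Hedged sketch (placeholder names; parameter availability depends on your Az.Storage version).
Connect-AzAccount

# Create a standard general-purpose v2 account with a hierarchical namespace (HNS)
# and SFTP enabled. NFS 3.0 additionally requires secure transfer to be disabled and
# virtual network rules, which are not shown here.
New-AzStorageAccount `
    -ResourceGroupName "demo-rg" `
    -Name "demostoragehns01" `
    -Location "eastus" `
    -SkuName Standard_LRS `
    -Kind StorageV2 `
    -EnableHierarchicalNamespace $true `
    -EnableSftp $true
```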
## Premium block blob accounts
-| Storage feature | Blob Storage (default support) | Data Lake Storage Gen2 <sup>1</sup> | NFS 3.0 <sup>1</sup> | SFTP <sup>1</sup> |
+The following table describes whether a feature is supported in a premium block blob account when you enable a hierarchical namespace (HNS), NFS 3.0 protocol, or SFTP.
+
+> [!IMPORTANT]
+> This table describes the impact of **enabling** HNS, NFS, or SFTP and not the specific use of those capabilities.
+
+| Storage feature | Default | HNS | NFS | SFTP |
||-|||--|
-| [Access tier - archive](access-tiers-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Access tier - cool](access-tiers-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Access tier - hot](access-tiers-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Anonymous public access](anonymous-read-access-configure.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Azure Active Directory security](authorize-access-azure-active-directory.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blob inventory](blob-inventory.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Blob index tags](storage-manage-find-blobs.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blob snapshots](snapshots-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blob Storage APIs](reference.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) <sup>3</sup>| ![Yes](../media/icons/yes-icon.png) <sup>3</sup> |
-| [Blob Storage Azure CLI commands](storage-quickstart-blobs-cli.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Blob Storage events](storage-blob-event-overview.md) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) <sup>3</sup>| ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Blob Storage PowerShell commands](storage-quickstart-blobs-powershell.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Blob versioning](versioning-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Blobfuse](storage-how-to-mount-container-linux.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Change feed](storage-blob-change-feed.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Custom domains](storage-custom-domain-name.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Encryption scopes](encryption-scope-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Immutable storage](immutable-storage-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Logging in Azure Monitor](./monitor-blob-storage.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> |
-| [Object replication for block blobs](object-replication-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Page blobs](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs#about-page-blobs) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) | ![No](../media/icons/no-icon.png) |
-| [Soft delete for blobs](./soft-delete-blob-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>3</sup> | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Soft delete for containers](soft-delete-container-overview.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-| [Static websites](storage-blob-static-website.md) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> | ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) <sup>2</sup> <sup>3</sup> | ![No](../media/icons/no-icon.png)| ![Yes](../media/icons/yes-icon.png) |
-| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
-
-<sup>1</sup> Data Lake Storage Gen2, Network File System (NFS) 3.0 protocol, and SSH File Transfer Protocol (SFTP) support all require a storage account with a hierarchical namespace enabled.
-
-<sup>2</sup> Feature is supported at the preview level.
-
-<sup>3</sup> See [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md). These issues apply to all accounts that have the hierarchical namespace feature enabled.
+| [Access tier - archive](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Access tier - cool](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Access tier - hot](access-tiers-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Anonymous public access](anonymous-read-access-configure.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> |
+| [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Blob index tags](storage-manage-find-blobs.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Blob snapshots](snapshots-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Blob Storage APIs](reference.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Blob Storage Azure CLI commands](storage-quickstart-blobs-cli.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Blob Storage events](storage-blob-event-overview.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Blob Storage PowerShell commands](storage-quickstart-blobs-powershell.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Blob versioning](versioning-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Blobfuse](storage-how-to-mount-container-linux.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Change feed](storage-blob-change-feed.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Custom domains](storage-custom-domain-name.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Customer-managed account failover](../common/storage-disaster-recovery-guidance.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Customer-managed keys (encryption)](../common/customer-managed-keys-overview.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Customer-provided keys (encryption)](encryption-customer-provided-keys.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Data redundancy options](../common/storage-redundancy.md?toc=/azure/storage/blobs/toc.json) | &#x2705; | &#x2705; | &#x2705;<sup>2</sup> | &#x2705; |
+| [Encryption scopes](encryption-scope-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Immutable storage](immutable-storage-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Last access time tracking for lifecycle management](lifecycle-management-overview.md#move-data-based-on-last-accessed-time) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x2705; |
+| [Lifecycle management policies (delete blob)](./lifecycle-management-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Lifecycle management policies (tiering)](./lifecycle-management-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Logging in Azure Monitor](./monitor-blob-storage.md) | &#x2705; | &#x2705; | &nbsp;&#x2B24; | &#x1F7E6; |
+| [Metrics in Azure Monitor](./monitor-blob-storage.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
+| [Object replication for block blobs](object-replication-overview.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Point-in-time restore for block blobs](point-in-time-restore-overview.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Soft delete for blobs](./soft-delete-blob-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Soft delete for containers](soft-delete-container-overview.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+| [Static websites](storage-blob-static-website.md) | &#x2705; | &#x2705; | &#x1F7E6; | &#x2705; |
+| [Storage Analytics logs (classic)](../common/storage-analytics-logging.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24;| &#x2705; |
+| [Storage Analytics metrics (classic)](../common/storage-analytics-metrics.md?toc=%2fazure%2fstorage%2fblobs%2ftoc.json) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
+
+<sup>1</sup> Requests that clients make by using NFS 3.0 or SFTP can't be authorized by using Azure Active Directory (AD) security.
+
+<sup>2</sup> Only locally redundant storage (LRS) and zone-redundant storage (ZRS) are supported.
## See also
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
When planning for disaster recovery during a regional outage, you should create
### Enabling access to virtual networks in other regions (preview)
-To enable access from a virtual network that is located in another region, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. All the subnets in the subscription that has the _AllowedGlobalTagsForStorage_ feature enabled will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from these subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
-
> [!IMPORTANT]
> This capability is currently in PREVIEW.
>
> See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+To enable access from a virtual network that is located in another region over service endpoints, register the `AllowGlobalTagsForStorage` feature in the subscription of the virtual network. All the subnets in the subscription that has the _AllowedGlobalTagsForStorage_ feature enabled will no longer use a public IP address to communicate with any storage account. Instead, all the traffic from these subnets to storage accounts will use a private IP address as a source IP. As a result, any storage accounts that use IP network rules to permit traffic from those subnets will no longer have an effect.
+
+> [!NOTE]
+> To update existing service endpoints to access a storage account in another region, perform an [update subnet](https://docs.microsoft.com/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation on the subnet after registering the subscription with the `AllowGlobalTagsForStorage` feature. Similarly, to revert to the old configuration, perform an [update subnet](https://docs.microsoft.com/cli/azure/network/vnet/subnet?view=azure-cli-latest#az-network-vnet-subnet-update) operation after deregistering the subscription from the `AllowGlobalTagsForStorage` feature.
#### [Portal](#tab/azure-portal)

During the preview, you must use either PowerShell or the Azure CLI to enable this feature.
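As a rough illustration of the PowerShell route, the sketch below registers the `AllowGlobalTagsForStorage` feature and then re-applies a virtual network so its subnets pick up the change. The `Microsoft.Network` provider namespace and the resource names are assumptions, and the same flow can be done with `az feature register` followed by `az network vnet subnet update` in the Azure CLI.

```powershell
# Hedged sketch; provider namespace and resource names are assumptions.
Connect-AzAccount

# Register the preview feature in the subscription that contains the virtual network.
Register-AzProviderFeature -FeatureName AllowGlobalTagsForStorage -ProviderNamespace Microsoft.Network

# Registration can take some time; check until RegistrationState reports "Registered".
Get-AzProviderFeature -FeatureName AllowGlobalTagsForStorage -ProviderNamespace Microsoft.Network

# Re-apply the virtual network so existing service endpoints on its subnets are refreshed,
# mirroring the "update subnet" operation described in the note above.
Get-AzVirtualNetwork -ResourceGroupName "demo-rg" -Name "demo-vnet" | Set-AzVirtualNetwork
```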
storage Storage Files Identity Auth Active Directory Domain Service Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-domain-service-enable.md
description: Learn how to enable identity-based authentication over Server Messa
Previously updated : 08/16/2022 Last updated : 08/17/2022
az storage account update -n <storage-account-name> -g <resource-group-name> --e
## Recommended: Use AES-256 encryption
-By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these steps:
+By default, Azure AD DS authentication uses Kerberos RC4 encryption. We recommend configuring it to use Kerberos AES-256 encryption instead by following these instructions.
-As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), execute the following Azure PowerShell commands. If using Azure Cloud Shell, be sure to run the `Connect-AzureAD` cmdlet first.
+This action runs an operation against the Active Directory domain that's managed by Azure AD DS, reaching a domain controller to request a property change on the domain object. The cmdlets below are Windows Server Active Directory PowerShell cmdlets, not Azure PowerShell cmdlets, so they must be run from a machine that's domain-joined to the Azure AD DS domain.
-```azurepowershell
+> [!IMPORTANT]
+> Azure Cloud Shell won't work in this scenario.
+
+As an Azure AD DS user with the required permissions (typically, members of the **AAD DC Administrators** group will have the necessary permissions), execute the following PowerShell commands.
+
+```powershell
# 1. Find the service account in your managed domain that represents the storage account.

$storageAccountName = "<InsertStorageAccountNameHere>"
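# The remaining steps are shown here only as a hedged sketch, not necessarily the
# article's exact commands: locate the AD object that represents the storage account,
# then switch its Kerberos encryption type to AES-256 with the Windows Server
# Active Directory cmdlets. Object and variable names below are hypothetical.
$domainObject = Get-ADUser -Filter "Name -like '*$storageAccountName*'"

# 2. Set the Kerberos encryption type of the object to AES-256.
Set-ADUser -Identity $domainObject -KerberosEncryptionType AES256

# 3. Confirm the change.
Get-ADUser -Identity $domainObject -Properties KerberosEncryptionType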
stream-analytics Sql Db Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/sql-db-table.md
It's important to ensure that the output of your Stream Analytics job matches th
* **Type mismatch**: The query and target types aren't compatible. Rows won't be inserted in the destination. Use a [conversion function](/stream-analytics-query/data-types-azure-stream-analytics) such as TRY_CAST() to align types in the query. The alternate option is to alter the destination table in your SQL database.
* **Range**: The target type range is considerably smaller than the one used in the query. Rows with out-of-range values [may not be inserted](/stream-analytics-query/data-types-azure-stream-analytics) in the destination table, or truncated. Consider altering the destination column to a larger type range.
* **Implicit**: The query and target types are different but compatible. The data will be implicitly converted, but this could result in data loss or failures. Use a [conversion function](/stream-analytics-query/data-types-azure-stream-analytics) such as TRY_CAST() to align types in the query, or alter the destination table.
-* **Record**: This type isn't yet supported for this output. The value will be replaced by the string ‘record’. Consider [parsing](/stream-analytics-query/stream-analytics-parsing-json.md) the data, or using an UDF to [convert to string](/azure/stream-analytics/stream-analytics-javascript-user-defined-functions).
-* **Array**: This type isn't yet supported natively in Azure SQL Database. The value will be replaced by the string ‘record’. Consider [parsing](/stream-analytics-query/stream-analytics-parsing-json.md) the data, or using an UDF to [convert to string](/azure/stream-analytics/stream-analytics-javascript-user-defined-functions).
+* **Record**: This type isn't yet supported for this output. The value will be replaced by the string ‘record’. Consider [parsing](/azure/stream-analytics/stream-analytics-parsing-json) the data, or using an UDF to [convert to string](/azure/stream-analytics/stream-analytics-javascript-user-defined-functions).
+* **Array**: This type isn't yet supported natively in Azure SQL Database. The value will be replaced by the string ‘record’. Consider [parsing](/azure/stream-analytics/stream-analytics-parsing-json) the data, or using an UDF to [convert to string](/azure/stream-analytics/stream-analytics-javascript-user-defined-functions).
* **Column missing from destination table**: This column is missing from the destination table. The data won't be inserted. Add this column to your destination table if needed.

## Next steps
-* [Use SQL reference data as input source](/azure/stream-analytics/sql-reference-data)
+* [Use SQL reference data as input source](./sql-reference-data.md)
* [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference)
* [Query examples for common Stream Analytics usage patterns](stream-analytics-stream-analytics-query-patterns.md)
* [Understand inputs for Azure Stream Analytics](stream-analytics-add-inputs.md)
-* [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md)
+* [Understand outputs from Azure Stream Analytics](stream-analytics-define-outputs.md)
synapse-analytics Quick Start Create Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/quick-start-create-lake-database.md
A transaction consists of one or more discrete events.
The easiest way to find entities is by using the search box above the different business areas that contain the tables.
-![Database Template example](./media/quick-start-create-lake-database/model-example.png)
-
## Configure lake database

After you have created the database, make sure the storage account and the filepath are set to a location where you wish to store the data. The path will default to the primary storage account within Azure Synapse Analytics but can be changed to suit your needs.
-
- ![Lake database example](./media/quick-start-create-lake-database/lake-database-example.png)
+
+ :::image type="content" source="./media/quick-start-create-lake-database/lake-database-example.png" alt-text="Screenshot of an individual entity's properties in the Retail database template." lightbox="./media/quick-start-create-lake-database/lake-database-example.png":::
To save your layout and make it available within Azure Synapse, **Publish** all changes. This step completes the setup of the lake database and makes it available to all components within Azure Synapse Analytics and outside.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/netezza/1-design-performance-migration.md
Microsoft recommends moving your existing data model as-is to Azure and using th
You can automate and orchestrate the migration process by using the capabilities of the Azure environment. This approach minimizes the performance hit on the existing Netezza environment, which may already be running close to capacity.
-[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](/azure/hdinsight/hadoop/apache-hadoop-introduction), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](../../../hdinsight/hadoop/apache-hadoop-introduction.md), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
When you're planning to use Data Factory facilities to manage the migration process, create metadata that lists all the data tables to be migrated and their location.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/1-design-performance-migration.md
The [SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assis
SSMA for Oracle can help you migrate an Oracle data warehouse or data mart to Azure Synapse. SSMA is designed to automate the process of migrating tables, views, and data from an existing Oracle environment.
-[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](/azure/hdinsight/hadoop/apache-hadoop-introduction), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](../../../hdinsight/hadoop/apache-hadoop-introduction.md), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
Data Factory can be used to migrate data at source to Azure SQL target. This offline data movement helps to reduce migration downtime significantly.
The [workload management guide](../../sql-data-warehouse/analyze-your-workload.m
## Next steps
-To learn about ETL and load for Oracle migration, see the next article in this series: [Data migration, ETL, and load for Oracle migrations](2-etl-load-migration-considerations.md).
+To learn about ETL and load for Oracle migration, see the next article in this series: [Data migration, ETL, and load for Oracle migrations](2-etl-load-migration-considerations.md).
synapse-analytics 2 Etl Load Migration Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/2-etl-load-migration-considerations.md
You can use the Oracle connector in Data Factory to unload large Oracle tables i
:::image type="content" source="../media/2-etl-load-migration-considerations/azure-data-factory-source-tab.png" border="true" alt-text="Screenshot of Azure Data Factory Oracle partition options in the source tab.":::
-For information on how to configure the Oracle connector for parallel copy, see [Parallel copy from Oracle](/azure/data-factory/connector-oracle?tabs=data-factory#parallel-copy-from-oracle).
+For information on how to configure the Oracle connector for parallel copy, see [Parallel copy from Oracle](../../../data-factory/connector-oracle.md?tabs=data-factory#parallel-copy-from-oracle).
For more information on Data Factory copy activity performance and scalability, see [Copy activity performance and scalability guide](../../../data-factory/copy-activity-performance.md).
synapse-analytics 3 Security Access Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/3-security-access-operations.md
Azure Synapse automatically takes snapshots throughout the day and creates resto
Azure Synapse supports user-defined restore points, which are created from manually triggered snapshots. By creating restore points before and after large data warehouse modifications, you ensure that the restore points are logically consistent. The user-defined restore points augment data protection and reduce recovery time if there are workload interruptions or user errors.
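As a point of reference, creating a user-defined restore point for a dedicated SQL pool is a single PowerShell call. This is a hedged sketch rather than text from the article; the resource names are placeholders, and pools created inside a Synapse workspace have an equivalent path through the Az.Synapse module or the Azure portal.

```powershell
# Hedged sketch: create a user-defined restore point before a large data warehouse change.
# Resource names are placeholders.
New-AzSqlDatabaseRestorePoint `
    -ResourceGroupName "demo-rg" `
    -ServerName "demo-sqlserver" `
    -DatabaseName "demo-dedicated-pool" `
    -RestorePointLabel "before-etl-run"
```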
-In addition to snapshots, Azure Synapse performs a standard geo-backup once per day to a [paired data center](/azure/availability-zones/cross-region-replication-azure). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored if restore points in the primary region aren't available.
+In addition to snapshots, Azure Synapse performs a standard geo-backup once per day to a [paired data center](../../../availability-zones/cross-region-replication-azure.md). The RPO for a geo-restore is 24 hours. You can restore the geo-backup to a server in any region where Azure Synapse is supported. A geo-backup ensures that a data warehouse can be restored if restore points in the primary region aren't available.
>[!TIP]
>Microsoft Azure provides automatic backups to a separate geographical location to enable DR.
synapse-analytics 5 Minimize Sql Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/oracle/5-minimize-sql-issues.md
Although the SQL language is standardized, individual vendors sometimes implemen
You can automate and orchestrate the migration process by using the capabilities of the Azure environment. This approach minimizes the performance hit on the existing Oracle environment, which may already be running close to capacity.
-[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud to orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](/azure/hdinsight/hadoop/apache-hadoop-introduction), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud to orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](../../../hdinsight/hadoop/apache-hadoop-introduction.md), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
Azure also includes [Azure Database Migration Services](../../../dms/dms-overview.md) to help you plan and perform a migration from environments such as Oracle. [SQL Server Migration Assistant](/sql/ssma/oracle/sql-server-migration-assistant-for-oracle-oracletosql) (SSMA) for Oracle can automate migration of Oracle databases, including in some cases functions and procedural code.
synapse-analytics 1 Design Performance Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/migration-guides/teradata/1-design-performance-migration.md
When migrating from an on-premises Teradata environment, you can leverage cloud
You can automate and orchestrate the migration process by using the capabilities of the Azure environment. This approach minimizes the performance hit on the existing Netezza environment, which may already be running close to capacity.
-[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](/azure/hdinsight/hadoop/apache-hadoop-introduction), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
+[Azure Data Factory](../../../data-factory/introduction.md) is a cloud-based data integration service that supports creating data-driven workflows in the cloud that orchestrate and automate data movement and data transformation. You can use Data Factory to create and schedule data-driven workflows (pipelines) that ingest data from disparate data stores. Data Factory can process and transform data by using compute services such as [Azure HDInsight Hadoop](../../../hdinsight/hadoop/apache-hadoop-introduction.md), Spark, Azure Data Lake Analytics, and Azure Machine Learning.
When you're planning to use Data Factory facilities to manage the migration process, create metadata that lists all the data tables to be migrated and their location.
synapse-analytics Sql Data Warehouse Table Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/sql-data-warehouse-table-constraints.md
FOREIGN KEY constraint is not supported in dedicated SQL pool.
Having primary key and/or unique key allows dedicated SQL pool engine to generate an optimal execution plan for a query. All values in a primary key column or a unique constraint column should be unique.
-After creating a table with primary key or unique constraint in dedicated SQL pool, users need to make sure all values in those columns are unique. A violation of that may cause the query to return inaccurate result. This example shows how a query may return inaccurate result if the primary key or unique constraint column includes duplicate values.
+> [!IMPORTANT]
+> After creating a table with a primary key or unique constraint in dedicated SQL pool, users need to make sure all values in those columns are unique.
+> A violation may cause queries to return inaccurate results.
+
+This example shows how a query may return inaccurate results if the primary key or unique constraint column includes duplicate values.
```sql
-- Create table t1
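-- What follows is a hedged sketch of the idea, not necessarily the article's exact
-- script: create t1 with a NOT ENFORCED primary key, insert a duplicate key value,
-- and note that aggregations grouped on that key can then return inaccurate results.
CREATE TABLE t1 (a1 INT NOT NULL, b1 INT)
WITH (DISTRIBUTION = ROUND_ROBIN);

ALTER TABLE t1 ADD CONSTRAINT PK_t1 PRIMARY KEY NONCLUSTERED (a1) NOT ENFORCED;

INSERT INTO t1 VALUES (1, 100);
INSERT INTO t1 VALUES (1, 100);   -- duplicate key value; the constraint is not enforced

-- Because the engine assumes a1 is unique, this aggregation may be optimized in a way
-- that returns misleading per-key counts.
SELECT a1, COUNT(*) AS total FROM t1 GROUP BY a1;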
synapse-analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/whats-new.md
This article lists updates to Azure Synapse Analytics that are published in Apri
## General
-* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](/azure/cognitive-services/) models, AI models from partners, and bring-your-own-data models.
+* **Azure Orbital analytics with Synapse Analytics** - We now offer an [Azure Orbital analytics sample solution](https://github.com/Azure/Azure-Orbital-Analytics-Samples) showing an end-to-end implementation of extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with [Azure Synapse Analytics](overview-what-is.md). The sample solution also demonstrates how to integrate geospatial-specific [Azure Cognitive Services](../cognitive-services/index.yml) models, AI models from partners, and bring-your-own-data models.
* **Azure Synapse success by design** - Project success is no accident and requires careful planning and execution. The Synapse Analytics' Success by Design playbooks are now available. The [Azure Synapse proof of concept playbook](./guidance/proof-of-concept-playbook-overview.md) provides a guide to scope, design, execute, and evaluate a proof of concept for SQL or Spark workloads. These guides contain best practices from the most challenging and complex solution implementations incorporating Azure Synapse. To learn more about the Azure Synapse proof of concept playbook, read [Success by Design](./guidance/success-by-design-introduction.md). ## SQL
Now, Azure Synapse Analytics provides built-in support for deep learning infrast
To learn more about how to leverage these libraries within your Azure Synapse Analytics GPU-accelerated pools, read the [Deep learning tutorials](./machine-learning/concept-deep-learning.md). ## Next steps
-[Get started with Azure Synapse Analytics](get-started.md)
+[Get started with Azure Synapse Analytics](get-started.md)
update-center Manage Update Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/manage-update-settings.md
To configure update settings on your machines on a single VM, follow these steps
- **Periodic assessment** - enable periodic **Assessment** to run every 24 hours.

  >[!NOTE]
- > You must [register for the periodic assessement](/azure/update-center/enable-machines?branch=release-updatecenterv2-publicpreview&tabs=portal-periodic%2Cps-periodic-assessment%2Ccli-periodic-assessment%2Crest-periodic-assessment) in your Azure subscription to enable this feature.
+ > You must [register for the periodic assessment](./enable-machines.md?branch=release-updatecenterv2-publicpreview&tabs=portal-periodic%2cps-periodic-assessment%2ccli-periodic-assessment%2crest-periodic-assessment) in your Azure subscription to enable this feature.
- - **Hot patching** - for Azure VMs, you can enable [hot patching](/azure/automanage/automanage-hotpatch) on supported Windows Server Azure Edition Virtual Machines (VMs) don't require a reboot after installation. You can use update management center (preview) to install patches with other patch classifications or to schedule patch installation when you require immediate critical patch deployment.
+ - **Hot patching** - for Azure VMs, you can enable [hot patching](../automanage/automanage-hotpatch.md) on supported Windows Server Azure Edition Virtual Machines (VMs), which don't require a reboot after patch installation. You can use update management center (preview) to install patches with other patch classifications or to schedule patch installation when you require immediate critical patch deployment.
- **Patch orchestration** option provides the following:
  - **Automatic by operating system** - When the workload running on the VM doesn't have to meet availability targets, operating system updates are automatically downloaded and installed. Machines are rebooted as needed.
- - **Azure-orchestrated (preview)** - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM using [automatic VM guest patching](/azure/virtual-machines/automatic-vm-guest-patching). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
+ - **Azure-orchestrated (preview)** - Available *Critical* and *Security* patches are downloaded and applied automatically on the Azure VM using [automatic VM guest patching](../virtual-machines/automatic-vm-guest-patching.md). This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
  - **Manual updates** - Configures the Windows Update agent by setting [configure automatic updates](/windows-server/administration/windows-server-update-services/deploy/4-configure-group-policy-settings-for-automatic-updates#configure-automatic-updates).
  - **Image Default** - Only supported for Linux Virtual Machines, this mode honors the default patching configuration in the image used to create the VM.
A notification appears to confirm that the update settings are successfully chan
* [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
update-center Periodic Assessment At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/periodic-assessment-at-scale.md
This article describes how to enable Periodic Assessment for your machines at scale using Azure Policy. Periodic Assessment is a setting on your machine that enables you to see the latest updates available for your machines and removes the hassle of performing assessment manually every time you need to check the update status. Once you enable this setting, update management center (preview) fetches updates on your machine once every 24 hours.

>[!NOTE]
-> You must [register for the periodic assessement](/azure/update-center/enable-machines?branch=release-updatecenterv2-publicpreview&tabs=portal-periodic%2Cps-periodic-assessment%2Ccli-periodic-assessment%2Crest-periodic-assessment) in your Azure subscription to enable this feature.
> You must [register for the periodic assessment](./enable-machines.md?branch=release-updatecenterv2-publicpreview&tabs=portal-periodic%2cps-periodic-assessment%2ccli-periodic-assessment%2crest-periodic-assessment) in your Azure subscription to enable this feature.
## Enable Periodic assessment for your Azure machines using Policy

1. Go to **Policy** from the Azure portal and under **Authoring**, go to **Definitions**.
You can monitor compliance of resources under **Compliance** and remediation sta
* [View assessment compliance](view-updates.md) and [deploy updates](deploy-updates.md) for a selected Azure VM or Arc-enabled server, or across [multiple machines](manage-multiple-machines.md) in your subscription in the Azure portal.
* To view update assessment and deployment logs generated by update management center (preview), see [query logs](query-logs.md).
-* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
+* To troubleshoot issues, see the [Troubleshoot](troubleshoot.md) update management center (preview).
update-center Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-center/troubleshoot.md
To review the logs related to all actions performed by the extension, check for
### Arc-enabled servers
-For Arc-enabled servers, review the [troubleshoot VM extensions](/azure/azure-arc/servers/troubleshoot-vm-extensions) article for general troubleshooting steps.
+For Arc-enabled servers, review the [troubleshoot VM extensions](../azure-arc/servers/troubleshoot-vm-extensions.md) article for general troubleshooting steps.
To review the logs related to all actions performed by the extension, on Windows check for more details in `C:\ProgramData\GuestConfig\extension_Logs\Microsoft.SoftwareUpdateManagement\WindowsOsUpdateExtension`. It includes the following two log files of interest:
virtual-desktop Deploy Windows Server Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/deploy-windows-server-virtual-machine.md
description: How to deploy and configure Windows Server edition virtual machines
Previously updated : 03/18/2022 Last updated : 08/18/2022
-# Deploy Windows Server based virtual machines on Azure Virtual Desktop
+# Deploy Windows Server-based virtual machines on Azure Virtual Desktop
>[!IMPORTANT]
>This content applies to Azure Virtual Desktop with Azure Resource Manager Azure Virtual Desktop objects. If you're using Azure Virtual Desktop (classic) without Azure Resource Manager objects, see [this article](./virtual-desktop-fall-2019/deploy-windows-7-virtual-machine.md).
-The process to deploy Windows Server based Virtual Machines (VM) on Azure Virtual Desktop is slightly different than for VMs running other versions of Windows such as Windows 10 or Windows 11. This guide will walk you through the steps.
-
-Azure Virtual Desktop Host pool supports running Windows Server 2012 R2 and above editions.
+The process for deploying Windows Server-based virtual machines (VMs) on Azure Virtual Desktop is slightly different than the one for VMs running other versions of Windows, such as Windows 10 or Windows 11. This guide will walk you through the process.
> [!NOTE]
-> - Azure AD Join session host scenario is not supported with Windows Server editions.
+> Windows Server scenarios support the following versions of Azure Active Directory (AD)-joined session hosts:
+>
+> - Windows Server 2019
+> - Windows Server 2022
+>
+> However, Windows Server scenarios don't support the following versions of Azure AD-joined session hosts:
+>
+> - Windows Server 2012
+> - Windows Server 2016
## Prerequisites
-Running Windows Server based host virtual machines on Azure Virtual Desktop requires Remote Desktop Services (RDS) Licensing Server.
+Before you get started, you'll need to make sure you have the following things:
+
+- Azure Virtual Desktop host pools support running Windows Server 2012 R2 and later.
+
+- Running Windows Server-based host VMs on Azure Virtual Desktop requires a Remote Desktop Services (RDS) Licensing Server. This server should be a separate server or remote VM in your environment that you've assigned the Remote Desktop Licensing Server role to.
+
+ For more information about licensing, see the following articles:
+
+ - [Operating systems and licenses](prerequisites.md)
+ - [License your RDS deployment with client access licenses](/windows-server/remote/remote-desktop-services/rds-client-access-license)
-For more information, refer [Operating systems and licenses](prerequisites.md)
+ If you're already using Windows Server-based Remote Desktop Services, you probably already have a licensing server set up in your environment. If you do, you can continue using the same license server as long as the Azure Virtual Desktop hosts have line-of-sight with the server.
-Use the following information to learn about how licensing works in Remote Desktop Services and to deploy and manage your licenses.
+- Your Windows Server VM should already be assigned the Remote Desktop Session Host role. Without that role, the Azure Virtual Desktop Agent won't install and the deployment won't work.
-[License your RDS deployment with client access licenses](/windows-server/remote/remote-desktop-services/rds-client-access-license)
+## Configure Windows Server-based VMs
-If you're already using Windows Server based Remote Desktop Services, you'll likely have Licensing Server setup in your environment. You can continue using the same provided Azure Virtual Desktop hosts has line of sight to the Server.
+Now that you've fulfilled the requirements, you're ready to configure Windows Server-based VMs for deployment on Azure Virtual Desktop.
-## Configure Windows Server based Virtual Machines
+To configure your VM:
-Once you've done the prerequisites, you're ready to configure Windows Server based VMs for deployment on Azure Virtual Desktop.
+1. Follow the instructions from [Create a host pool using the Azure portal](create-host-pools-azure-marketplace.md) until you reach step 6 in [Virtual machine details](create-host-pools-azure-marketplace.md#virtual-machine-details). When it's time to select an image in the **Virtual machine details** field, either select a relevant Windows Server image or upload your own customized Windows Server image.
-1. Follow the instructions from [Create a host pool using the Azure portal](create-host-pools-azure-marketplace.md).
+2. For the **Domain to join** field, you can select either **Active Directory** or **Azure Active Directory**.
+
+ >[!NOTE]
+ >If you select **Azure Active Directory**, you should not select the **Enroll VM with Intune** option, as Intune doesn't support Windows Server.
-1. Select relevant Windows Server image or upload your own customized image based on Windows Server edition at **Step 6** under **Virtual machine details** section.
+3. Connect to the newly deployed VM using an account with local administrator privileges.
-1. Select **Active Directory** as an option under **Domain to Join** at **Step 12** of **Virtual machine details** section.
+4. Next, open the **Start** menu on your VM Desktop and enter **gpedit.msc** to open the Group Policy Editor.
-1. Connect to the newly deployed VM using an account with local administrator privileges.
-1. Open the Start menu and type "gpedit.msc" to open the Group Policy Editor.
-1. Navigate the tree to **Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Licensing**
-1. Select policy **Use the specified Remote Desktop license servers** and set the policy to point to the Remote Desktop Licensing Servers FQDN/IP Address.
-2. Select policy **Specify the licensing mode for the Remote Desktop Session Host server** and set the policy to Per Device or Per User, as appropriate for your licensing eligibility.
+5. Go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Licensing**.
+
+6. Once you're at **Licensing**, select **Use the specified Remote Desktop license servers**, then set the policy to point to the FQDN or IP address of your Remote Desktop licensing server.
+
+7. Finally, select **Specify the licensing mode for the Remote Desktop Session Host server** and set the policy to **Per device** or **Per user**, depending on your licensing eligibility.
> [!NOTE]
-> - You can also use and apply Domain based GPO and scope it to OU where Azure Virtual Desktop Hosts resides in Active Directory.
+> You can also use and apply a domain-based group policy object (GPO) and scope it to the Organizational Unit (OU) where the Azure Virtual Desktop hosts are located in your Active Directory.
## Next steps
-Now that you've deployed Windows Server based Host VMs, you can sign in to a supported Azure Virtual Desktop client to test it as part of a user session. If you want to learn how to connect to a session, check out these articles:
-- [Connect with the Windows Desktop client](user-documentation/connect-windows-7-10.md)-- [Connect with the web client](user-documentation/connect-web.md)
+Now that you've deployed Windows Server-based Host VMs, you can sign in to a supported Azure Virtual Desktop client to test it as part of a user session. If you want to learn how to connect to a session using Remote Desktop Services for Windows Server, check out our [list of available clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
+
+If you'd like to learn about other ways to create VMs for Azure Virtual Desktop, check out these articles:
+
+- To set up a VM automatically as part of the host pool setup process, see [Tutorial: Create a host pool](create-host-pools-azure-marketplace.md).
+- If you'd like to manually create VMs in the Azure portal after setting up a host pool, see [Expand an existing host pool with new session hosts](expand-existing-host-pool.md).
+- You can also manually create a VM with [Azure CLI, PowerShell](create-host-pools-powershell.md), or [REST API](/rest/api/desktopvirtualization/).
virtual-machines Copy Files To Vm Using Scp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/copy-files-to-vm-using-scp.md
+
+ Title: Use SCP to move files to and from a VM
+description: Securely move files to and from a Linux VM in Azure using SCP and an SSH key pair.
Last updated : 07/30/2022
+# Use SCP to move files to and from a VM
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+
+This article shows how to move files from your workstation up to an Azure VM, or from an Azure VM down to your workstation, using Secure Copy (SCP). Moving files between your workstation and a VM, quickly and securely, is critical for managing your Azure infrastructure.
+
+For this article, you need a VM deployed in Azure with SSH enabled. You also need an SCP client on your local computer. SCP is built on top of SSH and is included with the OpenSSH tools on most Linux, macOS, and recent Windows systems.
++
+## Quick commands
+
+Copy a file up to the VM
+
+```bash
+scp file azureuser@azurehost:directory/targetfile
+```
+
+Copy a file down from the VM
+
+```bash
+scp azureuser@azurehost:directory/file targetfile
+```
+
+## Detailed walkthrough
+
+As examples, we move an Azure configuration file up to a VM and pull down a log file directory, both using SCP.
+
+## SSH key pair authentication
+
+SCP uses SSH for the transport layer. SSH handles the authentication on the destination host, and it moves the file in an encrypted tunnel provided by default with SSH. For SSH authentication, usernames and passwords can be used. However, SSH public and private key authentication is recommended as a security best practice. Once SSH has authenticated the connection, SCP begins copying the file. Using a properly configured `~/.ssh/config` and SSH public and private keys, the SCP connection can be established by using just a server name (or IP address). If you only have one SSH key, SCP looks for it in the `~/.ssh/` directory and uses it by default to log in to the VM.
+
+For more information on configuring your `~/.ssh/config` and SSH public and private keys, see [Create SSH keys](linux/mac-create-ssh-keys.md).
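If your private key isn't in the default `~/.ssh/` location, or the VM listens on a non-default SSH port, you can pass both to SCP explicitly. A minimal sketch with placeholder values (the key path and port below are assumptions, not values from this article):

```bash
# -i selects a specific private key, -P (uppercase) selects the SSH port
scp -i ~/.ssh/azure_vm_key -P 2222 file azureuser@myserver.eastus.cloudapp.azure.com:directory/targetfile
```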
+
+## SCP a file to a VM
+
+For the first example, we copy an Azure configuration file up to a VM that is used to deploy automation. Because this file contains Azure API credentials, which include secrets, security is important. The encrypted tunnel provided by SSH protects the contents of the file.
+
+The following command copies the local *.azure/config* file to an Azure VM with FQDN *myserver.eastus.cloudapp.azure.com*. If you don't have an [FQDN set](create-fqdn.md), you can also use the IP address of the VM. The admin user name on the Azure VM is *azureuser*. The file is targeted to the */home/azureuser/* directory. Substitute your own values in this command.
+
+```bash
+scp ~/.azure/config azureuser@myserver.eastus.cloudapp.azure.com:/home/azureuser/config
+```
+
+## SCP a directory from a VM
+
+For this example, we copy a directory of log files from the VM down to your workstation. A log file may or may not contain sensitive or secret data, but using SCP ensures the contents of the log files are encrypted in transit. Using SCP to transfer the files is the easiest way to get the log directory and files down to your workstation while also being secure.
+
+The following command copies files in the */home/azureuser/logs/* directory on the Azure VM to the local /tmp directory:
+
+```bash
+scp -r azureuser@myserver.eastus.cloudapp.azure.com:/home/azureuser/logs/. /tmp/
+```
+
+The `-r` flag instructs SCP to recursively copy the files and subdirectories below the directory listed in the command. Also notice that the command-line syntax is similar to a `cp` copy command.
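For large log directories, you can also ask SCP to compress the data in transit. A variant of the command above, under the same assumptions about paths and host name:

```bash
# -C compresses data in flight; -r still copies the directory tree recursively
scp -C -r azureuser@myserver.eastus.cloudapp.azure.com:/home/azureuser/logs/. /tmp/
```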
+
+## Next steps
+
+* [Manage users, SSH, and check or repair disks on Azure Linux VMs using the VMAccess Extension](extensions/vmaccess.md?toc=/azure/virtual-machines/linux/toc.json)
virtual-machines Disk Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disk-encryption.md
To enable double encryption at rest for managed disks, see our articles covering
## Server-side encryption versus Azure disk encryption
-[Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) leverages either the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux or the [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows to encrypt managed disks with customer-managed keys within the guest VM. Server-side encryption with customer-managed keys improves on ADE by enabling you to use any OS types and images for your VMs by encrypting data in the Storage service.
+[Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) leverages either the [DM-Crypt](https://en.wikipedia.org/wiki/Dm-crypt) feature of Linux or the [BitLocker](/windows/security/information-protection/bitlocker/bitlocker-overview) feature of Windows to encrypt managed disks with customer-managed keys within the guest VM. Server-side encryption with customer-managed keys improves on ADE by enabling you to use any OS types and images for your VMs by encrypting data in the Storage service.
> [!IMPORTANT] > Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD). When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to another, the managed identity associated with managed disks is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see [Transferring a subscription between Azure AD directories](../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
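For context, here is a minimal Azure CLI sketch of the server-side encryption flow with customer-managed keys; the disk encryption set, key vault, key URL, and disk names are placeholders, and the step that grants the encryption set access to the key vault is omitted:

```bash
# Create a disk encryption set that references an existing Key Vault key
az disk-encryption-set create \
  --name myDiskEncryptionSet \
  --resource-group myResourceGroup \
  --key-url "https://mykeyvault.vault.azure.net/keys/mykey/<key-version>" \
  --source-vault myKeyVault

# Point an existing managed disk at the encryption set
az disk update \
  --name myDataDisk \
  --resource-group myResourceGroup \
  --disk-encryption-set myDiskEncryptionSet \
  --encryption-type EncryptionAtRestWithCustomerKey
```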
virtual-machines Expand Unmanaged Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/expand-unmanaged-disks.md
Title: Expand unmanaged disks in Azure
description: Expand the size of unmanaged virtual hard disks attached to a virtual machine by using Azure PowerShell in the Resource Manager deployment model. Last updated 11/17/2021
virtual-machines Azure Disk Enc Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/azure-disk-enc-linux.md
For a full list of prerequisites, see [Azure Disk Encryption for Linux VMs](../l
## Extension Schema There are two versions of extension schema for Azure Disk Encryption (ADE):-- v1.1 - A newer recommended schema that does not use Azure Active Directory (AAD) properties.-- v0.1 - An older schema that requires Azure Active Directory (AAD) properties.
+- v1.1 - A newer recommended schema that does not use Azure Active Directory (Azure AD) properties.
+- v0.1 - An older schema that requires Azure Active Directory (Azure AD) properties.
To select a target schema, the `typeHandlerVersion` property must be set to the version of the schema you want to use.
-### Schema v1.1: No AAD (recommended)
+### Schema v1.1: No Azure AD (recommended)
-The v1.1 schema is recommended and does not require Azure Active Directory (AAD) properties.
+The v1.1 schema is recommended and does not require Azure Active Directory (Azure AD) properties.
> [!NOTE]
-> The `DiskFormatQuery` parameter is deprecated. Its functionity has been replaced by the EncryptFormatAll option instead, which is the recommended way to format data disks at time of encryption.
+> The `DiskFormatQuery` parameter is deprecated. Its functionality has been replaced by the EncryptFormatAll option, which is the recommended way to format data disks at the time of encryption.
```json {
The v1.1 schema is recommended and does not require Azure Active Directory (AAD)
```
-### Schema v0.1: with AAD
+### Schema v0.1: with Azure AD
The 0.1 schema requires `AADClientID` and either `AADClientSecret` or `AADClientCertificate`.
Alternatively, you can file an Azure support incident. Go to [Azure support](htt
## Next steps * For more information about VM extensions, see [Virtual machine extensions and features for Linux](features-linux.md).
-* For more information about Azure Disk Encryption for Linux, see [Linux virtual machines](../../security/fundamentals/azure-disk-encryption-vms-vmss.md#linux-virtual-machines).
+* For more information about Azure Disk Encryption for Linux, see [Linux virtual machines](../../virtual-machines/linux/disk-encryption-overview.md).
virtual-machines Azure Disk Enc Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/azure-disk-enc-windows.md
Last updated 03/19/2020
## Overview
-Azure Disk Encryption leverages BitLocker to provide full disk encryption on Azure virtual machines running Windows. This solution is integrated with Azure Key Vault to manage disk encryption keys and secrets in your key vault subscription.
+Azure Disk Encryption uses BitLocker to provide full disk encryption on Azure virtual machines running Windows. This solution is integrated with Azure Key Vault to manage disk encryption keys and secrets in your key vault subscription.
## Prerequisites
For a full list of prerequisites, see [Azure Disk Encryption for Windows VMs](..
## Extension Schema There are two versions of extension schema for Azure Disk Encryption (ADE):-- v2.2 - A newer recommended schema that does not use Azure Active Directory (AAD) properties.-- v1.1 - An older schema that requires Azure Active Directory (AAD) properties.
+- v2.2 - A newer recommended schema that does not use Azure Active Directory (Azure AD) properties.
+- v1.1 - An older schema that requires Azure Active Directory (Azure AD) properties.
To select a target schema, the `typeHandlerVersion` property must be set to the version of the schema you want to use.
-### Schema v2.2: No AAD (recommended)
+### Schema v2.2: No Azure AD (recommended)
The v2.2 schema is recommended for all new VMs and does not require Azure Active Directory properties.
The v2.2 schema is recommended for all new VMs and does not require Azure Active
} ``` -
-### Schema v1.1: with AAD
+### Schema v1.1: with Azure AD
The 1.1 schema requires `aadClientID` and either `aadClientSecret` or `AADClientCertificate` and is not recommended for new VMs.
Note: All values are case sensitive.
## Template deployment
-For an example of template deployment based on schema v2.2, see Azure QuickStart Template [encrypt-running-windows-vm-without-aad](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad).
+For an example of template deployment based on schema v2.2, see Azure Quickstart Template [encrypt-running-windows-vm-without-aad](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm-without-aad).
-For an example of template deployment based on schema v1.1, see Azure QuickStart Template [encrypt-running-windows-vm](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm).
+For an example of template deployment based on schema v1.1, see Azure Quickstart Template [encrypt-running-windows-vm](https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.compute/encrypt-running-windows-vm).
>[!NOTE] > Also, if the `VolumeType` parameter is set to All, data disks will be encrypted only if they're properly formatted.
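If you'd rather not author a template, the same v2.2 (no Azure AD) flow can also be driven from the Azure CLI. A minimal sketch, assuming an existing key vault enabled for disk encryption in the same resource group (names are placeholders):

```bash
# Enable Azure Disk Encryption on a running Windows VM using the v2.2 (no Azure AD) flow
az vm encryption enable \
  --resource-group myResourceGroup \
  --name myWindowsVM \
  --disk-encryption-keyvault myKeyVault \
  --volume-type All
```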
Alternatively, you can file an Azure support incident. Go to [Azure support](htt
## Next steps * For more information about extensions, see [Virtual machine extensions and features for Windows](features-windows.md).
-* For more information about Azure Disk Encryption for Windows, see [Windows virtual machines](../../security/fundamentals/azure-disk-encryption-vms-vmss.md#windows-virtual-machines).
+* For more information about Azure Disk Encryption for Windows, see [Windows virtual machines](../../virtual-machines/windows/disk-encryption-overview.md).
virtual-machines Oms Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/oms-windows.md
The following JSON shows the schema for the Log Analytics agent extension. The e
Azure VM extensions can be deployed with Azure Resource Manager templates. The JSON schema detailed in the previous section can be used in an Azure Resource Manager template to run the Log Analytics agent extension during an Azure Resource Manager template deployment. A sample template that includes the Log Analytics agent VM extension can be found on the [Azure Quickstart Gallery](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/oms-extension-windows-vm). >[!NOTE]
->The template does not support specifying more than one workspace ID and workspace key when you want to configure the agent to report to multiple workspaces. To configure the agent to report to multiple workspaces, see [Adding or removing a workspace](../../azure-monitor/agents/agent-manage.md#adding-or-removing-a-workspace).
+>The template does not support specifying more than one workspace ID and workspace key when you want to configure the agent to report to multiple workspaces. To configure the agent to report to multiple workspaces, see [Add or remove a workspace](../../azure-monitor/agents/agent-manage.md#add-or-remove-a-workspace).
The JSON for a virtual machine extension can be nested inside the virtual machine resource, or placed at the root or top level of a Resource Manager JSON template. The placement of the JSON affects the value of the resource name and type. For more information, see [Set name and type for child resources](../../azure-resource-manager/templates/child-resource-name-type.md).
virtual-machines Generalize https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generalize.md
To generalize your Windows VM, follow these steps:
5. Then change the directory to %windir%\system32\sysprep, and then run: ```
- sysprep /generalize /shutdown /mode:vm
+ sysprep.exe /oobe /generalize /mode:vm /shutdown
``` 6. The VM will shut down when Sysprep is finished generalizing the VM. Do not restart the VM.
virtual-machines Generation 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/generation-2.md
For more information, see [Trusted launch](trusted-launch.md).
| Azure Site Recovery | :heavy_check_mark: | :heavy_check_mark: | | Backup/restore | :heavy_check_mark: | :heavy_check_mark: | | Azure Compute Gallery | :heavy_check_mark: | :heavy_check_mark: |
-| [Azure disk encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) | :heavy_check_mark: | :heavy_check_mark: |
+| [Azure disk encryption](../virtual-machines/disk-encryption-overview.md) | :heavy_check_mark: | :heavy_check_mark: |
| [Server-side encryption](disk-encryption.md) | :heavy_check_mark: | :heavy_check_mark: |
virtual-machines Linux Vm Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux-vm-connect.md
Once the above prerequisites are met, you are ready to connect to your VM. Open
## Next steps
-Learn how to transfer files to an existing Linux VM, see [Use SCP to move files to and from a Linux VM](./linux/copy-files-to-linux-vm-using-scp.md).
+To learn how to transfer files to an existing VM, see [Use SCP to move files to and from a VM](./copy-files-to-vm-using-scp.md).
virtual-machines Attach Disk Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/attach-disk-portal.md
Title: Attach a data disk to a Linux VM description: Use the portal to attach a new or existing data disk to a Linux VM. Last updated 08/13/2021
virtual-machines Azure To Guest Disk Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/azure-to-guest-disk-mapping.md
Title: How to map Azure Disks to Linux VM guest disks description: How to determine the Azure Disks that underlay a Linux VM's guest disks.
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disk-encryption-overview.md
Linux server distributions that are not endorsed by Azure do not support Azure D
| Canonical | Ubuntu 16.04 | 16.04-DAILY-LTS | Canonical:UbuntuServer:16.04-DAILY-LTS:latest | OS and data disk | | Canonical | Ubuntu 14.04.5</br>[with Azure tuned kernel updated to 4.15 or later](disk-encryption-troubleshooting.md) | 14.04.5-LTS | Canonical:UbuntuServer:14.04.5-LTS:latest | OS and data disk | | Canonical | Ubuntu 14.04.5</br>[with Azure tuned kernel updated to 4.15 or later](disk-encryption-troubleshooting.md) | 14.04.5-DAILY-LTS | Canonical:UbuntuServer:14.04.5-DAILY-LTS:latest | OS and data disk |
+| Oracle | Oracle Linux 8.5 (Public Preview) | 8.5 | Oracle:Oracle-Linux:ol85-lvm:latest | OS and data disk (see note below) |
+| Oracle | Oracle Linux 8.5 Gen 2 (Public Preview) | 8.5 | Oracle:Oracle-Linux:ol85-lvm-gen2:latest | OS and data disk (see note below) |
+| RedHat | RHEL 8.6 | 8.6 | RedHat:RHEL:8_6:latest | OS and data disk (see note below) |
+| RedHat | RHEL 8.6 Gen 2 | 8.6 | RedHat:RHEL:86-gen2:latest | OS and data disk (see note below) |
+| RedHat | RHEL 8.5 | 8.5 | RedHat:RHEL:8_5:latest | OS and data disk (see note below) |
+| RedHat | RHEL 8.5 Gen 2 | 8.5 | RedHat:RHEL:85-gen2:latest | OS and data disk (see note below) |
| RedHat | RHEL 8.4 | 8.4 | RedHat:RHEL:8.4:latest | OS and data disk (see note below) | | RedHat | RHEL 8.3 | 8.3 | RedHat:RHEL:8.3:latest | OS and data disk (see note below) | | RedHat | RHEL 8-LVM | 8-LVM | RedHat:RHEL:8-LVM:8.2.20200905 | OS and data disk (see note below) |
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/migrate-to-premium-storage-using-azure-site-recovery.md
Title: Migrate your Linux VMs to Azure Premium Storage with Azure Site Recovery description: Migrate your existing virtual machines to Azure Premium Storage by using Site Recovery. Premium Storage offers high-performance, low-latency disk support for I/O-intensive workloads running on Azure Virtual Machines. Last updated 08/15/2017
virtual-machines Os Disk Swap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/os-disk-swap.md
Title: Swap between OS disks using the Azure CLI description: Change the operating system disk used by an Azure virtual machine using the Azure CLI. Last updated 04/24/2018 # Change the OS disk used by an Azure VM using the Azure CLI
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/overview.md
- Title: Overview of Linux VMs in Azure
-description: Overview of Linux virtual machines in Azure.
----- Previously updated : 11/14/2019----
-# Linux virtual machines in Azure
-
-**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-
-Azure Virtual Machines (VM) is one of several types of [on-demand, scalable computing resources](/azure/architecture/guide/technology-choices/compute-decision-tree) that Azure offers. Typically, you choose a VM when you need more control over the computing environment than the other choices offer. This article gives you information about what you should consider before you create a VM, how you create it, and how you manage it.
-
-An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and installing the software that runs on it.
-
-Azure virtual machines can be used in various ways. Some examples are:
-
-* **Development and test** ΓÇô Azure VMs offer a quick and easy way to create a computer with specific configurations required to code and test an application.
-* **Applications in the cloud** ΓÇô Because demand for your application can fluctuate, it might make economic sense to run it on a VM in Azure. You pay for extra VMs when you need them and shut them down when you donΓÇÖt.
-* **Extended datacenter** ΓÇô Virtual machines in an Azure virtual network can easily be connected to your organizationΓÇÖs network.
-
-The number of VMs that your application uses can scale up and out to whatever is required to meet your needs.
-
-## What do I need to think about before creating a VM?
-There are always a multitude of [design considerations](/azure/architecture/reference-architectures/n-tier/linux-vm) when you build out an application infrastructure in Azure. These aspects of a VM are important to think about before you start:
-
-* The names of your application resources
-* The location where the resources are stored
-* The size of the VM
-* The maximum number of VMs that can be created
-* The operating system that the VM runs
-* The configuration of the VM after it starts
-* The related resources that the VM needs
-
-### Locations
-There are multiple [geographical regions](https://azure.microsoft.com/regions/) around the world where you can create Azure resources. Usually, the region is called **location** when you create a VM. For a VM, the location specifies where the virtual hard disks will be stored.
-
-This table shows some of the ways you can get a list of available locations.
-
-| Method | Description |
-| | |
-| Azure portal |Select a location from the list when you create a VM. |
-| Azure PowerShell |Use the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) command. |
-| REST API |Use the [List locations](/rest/api/resources/subscriptions) operation. |
-| Azure CLI |Use the [az account list-locations](/cli/azure/account) operation. |
-
-## Availability
-Azure announced an industry leading single instance virtual machine Service Level Agreement of 99.9% provided you deploy the VM with premium storage for all disks. In order for your deployment to qualify for the standard 99.95% VM Service Level Agreement, you still need to deploy two or more VMs running your workload inside of an availability set. An availability set ensures that your VMs are distributed across multiple fault domains in the Azure data centers as well as deployed onto hosts with different maintenance windows. The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/) explains the guaranteed availability of Azure as a whole.
-
-## VM Size
-The [size](../sizes.md) of the VM that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, storage capacity, and network bandwidth. Azure offers a wide variety of sizes to support many types of uses.
-
-Azure charges an [hourly price](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) based on the VMΓÇÖs size and operating system. For partial hours, Azure charges only for the minutes used. Storage is priced and charged separately.
-
-## VM Limits
-Your subscription has default [quota limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) in place that could impact the deployment of many VMs for your project. The current limit on a per subscription basis is 20 VMs per region. Limits can be raised by [filing a support ticket requesting an increase](../../azure-portal/supportability/regional-quota-requests.md)
-
-## Managed Disks
-
-Managed Disks handles Azure Storage account creation and management in the background for you, and ensures that you do not have to worry about the scalability limits of the storage account. You specify the disk size and the performance tier (Standard or Premium), and Azure creates and manages the disk. As you add disks or scale the VM up and down, you don't have to worry about the storage being used. If you're creating new VMs, [use the Azure CLI](quick-create-cli.md) or the Azure portal to create VMs with Managed OS and data disks. If you have VMs with unmanaged disks, you can [convert your VMs to be backed with Managed Disks](convert-unmanaged-to-managed-disks.md).
-
-You can also manage your custom images in one storage account per Azure region, and use them to create hundreds of VMs in the same subscription. For more information about Managed Disks, see the [Managed Disks Overview](../managed-disks-overview.md).
-
-## Distributions
-Microsoft Azure supports running a number of popular Linux distributions provided and maintained by a number of partners. You can find available distributions in the Azure Marketplace. Microsoft actively works with various Linux communities to add even more flavors to the [Azure endorsed Linux Distros](endorsed-distros.md) list.
-
-If your preferred Linux distro of choice is not currently present in the gallery, you can "Bring your own Linux" VM by [creating and uploading a Linux VHD in Azure](create-upload-generic.md).
-
-Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure runtime. For more information on Azure partner offers, see the following links:
-
-* Linux on Azure - [Endorsed Distributions](endorsed-distros.md)
-* SUSE - [Azure Marketplace - SUSE Linux Enterprise Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=suse)
-* Red Hat - [Azure Marketplace - Red Hat Enterprise Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Red%20Hat%20Enterprise%20Linux)
-* Canonical - [Azure Marketplace - Ubuntu Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&filters=partners&search=canonical)
-* Debian - [Azure Marketplace - Debian](https://azuremarketplace.microsoft.com/marketplace/apps?search=Debian&page=1)
-* FreeBSD - [Azure Marketplace - FreeBSD](https://azuremarketplace.microsoft.com/marketplace/apps?search=freebsd&page=1)
-* Flatcar - [Azure Marketplace - Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Flatcar&page=1)
-* RancherOS - [Azure Marketplace - RancherOS](https://azuremarketplace.microsoft.com/marketplace/apps/rancher.rancheros)
-* Bitnami - [Bitnami Library for Azure](https://azure.bitnami.com/)
-* Mesosphere - [Azure Marketplace - Mesosphere DC/OS on Azure](https://azure.microsoft.com/services/kubernetes-service/mesosphere/)
-* Docker - [Azure Marketplace - Docker images](https://azuremarketplace.microsoft.com/marketplace/apps?search=docker&page=1&filters=virtual-machine-images)
-* Jenkins - [Azure Marketplace - CloudBees Jenkins Platform](https://azuremarketplace.microsoft.com/marketplace/apps/cloudbees.cloudbees-core-contact)
--
-## Cloud-init
-
-To achieve a proper DevOps culture, all infrastructure must be code. When all the infrastructure lives in code it can easily be recreated. Azure works with all the major automation tooling like Ansible, Chef, SaltStack, and Puppet. Azure also has its own tooling for automation:
-
-* [Azure Templates](create-ssh-secured-vm-from-template.md)
-* [Azure `VMaccess`](../extensions/vmaccess.md)
-
-Azure supports for [cloud-init](https://cloud-init.io/) across most Linux Distros that support it. We are actively working with our endorsed Linux distro partners in order to have cloud-init enabled images available in the Azure marketplace. These images will make your cloud-init deployments and configurations work seamlessly with VMs and virtual machine scale sets.
-
-* [Using cloud-init on Azure Linux VMs](using-cloud-init.md)
-
-## Storage
-* [Introduction to Microsoft Azure Storage](../../storage/common/storage-introduction.md)
-* [Add a disk to a Linux VM using the azure-cli](add-disk.md)
-* [How to attach a data disk to a Linux VM in the Azure portal](attach-disk-portal.md)
-
-## Networking
-* [Virtual Network Overview](../../virtual-network/virtual-networks-overview.md)
-* [IP addresses in Azure](../../virtual-network/ip-services/public-ip-addresses.md)
-* [Opening ports to a Linux VM in Azure](nsg-quickstart.md)
-* [Create a Fully Qualified Domain Name in the Azure portal](../create-fqdn.md)
--
-## Data residency
-
-In Azure, the feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo. For more information, see [Trust Center](https://azure.microsoft.com/global-infrastructure/data-residency/).
--
-## Next steps
-
-Create your first VM!
--- [Portal](quick-create-portal.md)-- [Azure CLI](quick-create-cli.md)-- [PowerShell](quick-create-powershell.md)
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
With this scope, you can manage platform updates that do not require a reboot on
### OS image Using this scope with maintenance configurations lets you decide when to apply upgrades to OS disks in your *virtual machine scale sets* through an easier and more predictable experience. An upgrade works by replacing the OS disk of a VM with a new disk created using the latest image version. Any configured extensions and custom data scripts are run on the OS disk, while data disks are retained. Some features and limitations unique to this scope are: -- Scale sets need to have [automatic OS upgrades](/azure/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade) enabled in order to use maintenance configurations.
+- Scale sets need to have [automatic OS upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) enabled in order to use maintenance configurations.
- Schedule recurrence is defaulted to daily - A minimum of 5 hours is required for the maintenance window
For an Azure Functions sample, see [Scheduling Maintenance Updates with Maintena
## Next steps
-To learn more, see [Maintenance and updates](maintenance-and-updates.md).
+To learn more, see [Maintenance and updates](maintenance-and-updates.md).
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/overview.md
+
+ Title: Overview of virtual machines in Azure
+description: Overview of virtual machines in Azure.
Last updated : 11/14/2019
+# Virtual machines in Azure
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
+
+Azure virtual machines are one of several types of [on-demand, scalable computing resources](/azure/architecture/guide/technology-choices/compute-decision-tree) that Azure offers. Typically, you choose a virtual machine when you need more control over the computing environment than the other choices offer. This article gives you information about what you should consider before you create a virtual machine, how you create it, and how you manage it.
+
+An Azure virtual machine gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the virtual machine by performing tasks, such as configuring, patching, and installing the software that runs on it.
+
+Azure virtual machines can be used in various ways. Some examples are:
+
+* **Development and test** – Azure virtual machines offer a quick and easy way to create a computer with specific configurations required to code and test an application.
+* **Applications in the cloud** – Because demand for your application can fluctuate, it might make economic sense to run it on a virtual machine in Azure. You pay for extra virtual machines when you need them and shut them down when you don't.
+* **Extended datacenter** – Virtual machines in an Azure virtual network can easily be connected to your organization's network.
+
+The number of virtual machines that your application uses can scale up and out to whatever is required to meet your needs.
+
+## What do I need to think about before creating a virtual machine?
+There is always a multitude of [design considerations](/azure/architecture/reference-architectures/n-tier/linux-vm) when you build out an application infrastructure in Azure. These aspects of a virtual machine are important to think about before you start:
+
+* The names of your application resources
+* The location where the resources are stored
+* The size of the virtual machine
+* The maximum number of virtual machines that can be created
+* The operating system that the virtual machine runs
+* The configuration of the virtual machine after it starts
+* The related resources that the virtual machine needs
+
+### Locations
+There are multiple [geographical regions](https://azure.microsoft.com/regions/) around the world where you can create Azure resources. Usually, the region is called **location** when you create a virtual machine. For a virtual machine, the location specifies where the virtual hard disks will be stored.
+
+This table shows some of the ways you can get a list of available locations.
+
+| Method | Description |
+| | |
+| Azure portal |Select a location from the list when you create a virtual machine. |
+| Azure PowerShell |Use the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) command. |
+| REST API |Use the [List locations](/rest/api/resources/subscriptions) operation. |
+| Azure CLI |Use the [az account list-locations](/cli/azure/account) operation. |
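For example, the CLI option from the table can be trimmed down to just the fields you usually need:

```bash
# List Azure regions as a compact table of name and display name
az account list-locations --query "[].{Region:name, DisplayName:displayName}" --output table
```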
+
+## Availability
+There are multiple options to manage the availability of your virtual machines in Azure.
+- **[Availability Zones](../availability-zones/az-overview.md)** are physically separated zones within an Azure region. Availability zones guarantee connectivity to at least one virtual machine instance at least 99.99% of the time when you have two or more instances deployed across two or more availability zones in the same Azure region.
+- **[Virtual machine scale sets](../virtual-machine-scale-sets/overview.md)** let you create and manage a group of load balanced virtual machines. The number of virtual machine instances can automatically increase or decrease in response to demand or a defined schedule. Scale sets provide high availability to your applications, and allow you to centrally manage, configure, and update many virtual machines. Virtual machines in a scale set can also be deployed into multiple availability zones, a single availability zone, or regionally.
+- **[Proximity Placement Groups](co-location.md)** are a grouping construct used to ensure Azure compute resources are physically located close to each other. Proximity placement groups are useful for workloads where low latency is a requirement.
+
+For more information, see [Availability options for Azure virtual machines](availability.md) and [SLA for Azure virtual machines](https://azure.microsoft.com/support/legal/sla/virtual-machines/v1_9/).
+
+## Virtual machine size
+The [size](sizes.md) of the virtual machine that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, storage capacity, and network bandwidth. Azure offers a wide variety of sizes to support many types of uses.
+
+Azure charges an [hourly price](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) based on the virtual machine's size and operating system. For partial hours, Azure charges only for the minutes used. Storage is priced and charged separately.
+
+## Virtual machine limits
+Your subscription has default [quota limits](../azure-resource-manager/management/azure-subscription-service-limits.md) in place that could impact the deployment of many virtual machines for your project. The current limit on a per subscription basis is 20 virtual machines per region. Limits can be raised by [filing a support ticket requesting an increase](../azure-portal/supportability/regional-quota-requests.md).
+
+## Managed Disks
+
+Managed Disks handles Azure Storage account creation and management in the background for you, and ensures that you do not have to worry about the scalability limits of the storage account. You specify the disk size and the performance tier (Standard or Premium), and Azure creates and manages the disk. As you add disks or scale the virtual machine up and down, you don't have to worry about the storage being used. If you're creating new virtual machines, [use the Azure CLI](linux/quick-create-cli.md) or the Azure portal to create virtual machines with Managed OS and data disks. If you have virtual machines with unmanaged disks, you can [convert your virtual machines to be backed with Managed Disks](linux/convert-unmanaged-to-managed-disks.md).
+
+You can also manage your custom images in one storage account per Azure region, and use them to create hundreds of virtual machines in the same subscription. For more information about Managed Disks, see the [Managed Disks Overview](managed-disks-overview.md).
+
+## Distributions
+Microsoft Azure supports a variety of Linux and Windows distributions. You can find available distributions in the [Azure Marketplace](https://azuremarketplace.microsoft.com), in the Azure portal, or by querying for them with the CLI, PowerShell, and the REST APIs.
+
+This table shows some ways that you can find the information for an image.
+
+| Method | Description |
+| | |
+| Azure portal |The values are automatically specified for you when you select an image to use. |
+| Azure PowerShell |[Get-AzVMImagePublisher](/powershell/module/az.compute/get-azvmimagepublisher) -Location *location*<BR>[Get-AzVMImageOffer](/powershell/module/az.compute/get-azvmimageoffer) -Location *location* -Publisher *publisherName*<BR>[Get-AzVMImageSku](/powershell/module/az.compute/get-azvmimagesku) -Location *location* -Publisher *publisherName* -Offer *offerName* |
+| REST APIs |[List image publishers](/rest/api/compute/platformimages/platformimages-list-publishers)<BR>[List image offers](/rest/api/compute/platformimages/platformimages-list-publisher-offers)<BR>[List image skus](/rest/api/compute/platformimages/platformimages-list-publisher-offer-skus) |
+| Azure CLI |[az vm image list-publishers](/cli/azure/vm/image) --location *location*<BR>[az vm image list-offers](/cli/azure/vm/image) --location *location* --publisher *publisherName*<BR>[az vm image list-skus](/cli/azure/vm) --location *location* --publisher *publisherName* --offer *offerName*|
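As a concrete example of the CLI row above, the following lists images for a given publisher and offer; the Canonical/UbuntuServer values are illustrative, so substitute the publisher and offer you care about:

```bash
# --all queries the live image catalog instead of the cached offline list
az vm image list --publisher Canonical --offer UbuntuServer --all --output table
```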
+
+Microsoft works closely with partners to ensure the images available are updated and optimized for an Azure runtime. For more information on Azure partner offers, see the following links:
+
+* Linux on Azure - [Endorsed Distributions](linux/endorsed-distros.md)
+* SUSE - [Azure Marketplace - SUSE Linux Enterprise Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=suse)
+* Red Hat - [Azure Marketplace - Red Hat Enterprise Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Red%20Hat%20Enterprise%20Linux)
+* Canonical - [Azure Marketplace - Ubuntu Server](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&filters=partners&search=canonical)
+* Debian - [Azure Marketplace - Debian](https://azuremarketplace.microsoft.com/marketplace/apps?search=Debian&page=1)
+* FreeBSD - [Azure Marketplace - FreeBSD](https://azuremarketplace.microsoft.com/marketplace/apps?search=freebsd&page=1)
+* Flatcar - [Azure Marketplace - Flatcar Container Linux](https://azuremarketplace.microsoft.com/marketplace/apps?search=Flatcar&page=1)
+* RancherOS - [Azure Marketplace - RancherOS](https://azuremarketplace.microsoft.com/marketplace/apps/rancher.rancheros)
+* Bitnami - [Bitnami Library for Azure](https://azure.bitnami.com/)
+* Mesosphere - [Azure Marketplace - Mesosphere DC/OS on Azure](https://azure.microsoft.com/services/kubernetes-service/mesosphere/)
+* Docker - [Azure Marketplace - Docker images](https://azuremarketplace.microsoft.com/marketplace/apps?search=docker&page=1&filters=virtual-machine-images)
+* Jenkins - [Azure Marketplace - CloudBees Jenkins Platform](https://azuremarketplace.microsoft.com/marketplace/apps/cloudbees.cloudbees-core-contact)
++
+## Cloud-init
+
+To achieve a proper DevOps culture, all infrastructure must be code. When all the infrastructure lives in code, it can easily be recreated. Azure works with all the major automation tooling like Ansible, Chef, SaltStack, and Puppet. Azure also has its own tooling for automation:
+
+* [Azure Templates](linux/create-ssh-secured-vm-from-template.md)
+* [Azure `VMaccess`](extensions/vmaccess.md)
+
+Azure supports [cloud-init](https://cloud-init.io/) across most Linux distributions that support it. We're actively working with our endorsed Linux distro partners to make cloud-init enabled images available in the Azure Marketplace. These images make your cloud-init deployments and configurations work seamlessly with virtual machines and virtual machine scale sets.
+
+* [Using cloud-init on Azure Linux virtual machines](linux/using-cloud-init.md)
+
+## Storage
+* [Introduction to Microsoft Azure Storage](../storage/common/storage-introduction.md)
+* [Add a disk to a Linux virtual machine using the azure-cli](linux/add-disk.md)
+* [How to attach a data disk to a Linux virtual machine in the Azure portal](linux/attach-disk-portal.md)
+
+## Networking
+* [Virtual Network Overview](../virtual-network/virtual-networks-overview.md)
+* [IP addresses in Azure](../virtual-network/ip-services/public-ip-addresses.md)
+* [Opening ports to a Linux virtual machine in Azure](linux/nsg-quickstart.md)
+* [Create a Fully Qualified Domain Name in the Azure portal](create-fqdn.md)
++
+## Data residency
+
+In Azure, the feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo. For more information, see [Trust Center](https://azure.microsoft.com/global-infrastructure/data-residency/).
++
+## Next steps
+
+Create your first virtual machine!
+
+- [Portal](linux/quick-create-portal.md)
+- [Azure CLI](linux/quick-create-cli.md)
+- [PowerShell](linux/quick-create-powershell.md)
virtual-machines Copy Managed Disks To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-to-same-or-different-subscription.md
description: Azure CLI Script Sample - Copy (or move) managed disks to the same
documentationcenter: storage ms.devlang: azurecli
virtual-machines Copy Managed Disks Vhd To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-vhd-to-storage-account.md
description: Azure CLI sample - Export or copy a managed disk to a storage accou
documentationcenter: storage
virtual-machines Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-same-or-different-subscription.md
description: Azure CLI Script Sample - Copy (or move) snapshot of a managed disk
documentationcenter: storage
virtual-machines Copy Snapshot To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-storage-account.md
description: Azure CLI Script Sample - Export/Copy snapshot as VHD to a storage
documentationcenter: storage ms.devlang: azurecli
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
tags: azure-service-management ms.assetid: ms.devlang: azurecli vm-linux
virtual-machines Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-vhd.md
description: Azure CLI Script Sample - Create a managed disk from a VHD file in
documentationcenter: storage ms.devlang: azurecli
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
editor: ramankum
tags: azure-service-management ms.assetid: ms.devlang: azurecli vm-linux
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
editor: ramankum
tags: azure-service-management ms.assetid: ms.devlang: azurecli vm-linux
virtual-machines Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-recommendations.md
For general information about Microsoft Defender for Cloud, see [What is Microso
| Recommendation | Comments | Defender for Cloud | |-|-|--|
-| Encrypt operating system disks. | [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) helps you encrypt your Windows and Linux IaaS VM disks. Without the necessary keys, the contents of encrypted disks are unreadable. Disk encryption protects stored data from unauthorized access that would otherwise be possible if the disk were copied.| [Yes](../security-center/asset-inventory.md) |
-| Encrypt data disks. | [Azure Disk Encryption](../security/fundamentals/azure-disk-encryption-vms-vmss.md) helps you encrypt your Windows and Linux IaaS VM disks. Without the necessary keys, the contents of encrypted disks are unreadable. Disk encryption protects stored data from unauthorized access that would otherwise be possible if the disk were copied.| - |
+| Encrypt operating system disks. | [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) helps you encrypt your Windows and Linux IaaS VM disks. Without the necessary keys, the contents of encrypted disks are unreadable. Disk encryption protects stored data from unauthorized access that would otherwise be possible if the disk were copied.| [Yes](../security-center/asset-inventory.md) |
+| Encrypt data disks. | [Azure Disk Encryption](../virtual-machines/disk-encryption-overview.md) helps you encrypt your Windows and Linux IaaS VM disks. Without the necessary keys, the contents of encrypted disks are unreadable. Disk encryption protects stored data from unauthorized access that would otherwise be possible if the disk were copied.| - |
| Limit installed software. | Limit installed software to what is required to successfully apply your solution. This guideline helps reduce your solution's attack surface. | - | | Use antivirus or antimalware. | In Azure, you can use antimalware software from security vendors such as Microsoft, Symantec, Trend Micro, and Kaspersky. This software helps protect your VMs from malicious files, adware, and other threats. You can deploy Microsoft Antimalware based on your application workloads. Microsoft Antimalware is available for Windows machines only. Use either basic secure-by-default or advanced custom configuration. For more information, see [Microsoft Antimalware for Azure Cloud Services and Virtual Machines](../security/fundamentals/antimalware.md). | - | | Securely store keys and secrets. | Simplify the management of your secrets and keys by providing your application owners with a secure, centrally managed option. This management reduces the risk of an accidental compromise or leak. Azure Key Vault can securely store your keys in hardware security modules (HSMs) that are certified to FIPS 140-2 Level 2. If you need to use FIPs 140.2 Level 3 to store your keys and secrets, you can use [Azure Dedicated HSM](../dedicated-hsm/overview.md). | - |
virtual-machines Virtual Machines Create Restore Points Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-cli.md
az disk create --resource-group "ExampleRg" --name "ExampleDataDisk1" --
az disk create --resource-group "ExampleRg" --name "ExampleDataDisk2" --sku Premium_LRS --size-gb 128 --source $dataDisk2RestorePoint ```
-Once you have created the disks, [create a new VM](/azure/virtual-machines/scripts/create-vm-from-managed-os-disks) and [attach these restored disks](/azure/virtual-machines/linux/add-disk#attach-an-existing-disk) to the newly created VM.
+Once you have created the disks, [create a new VM](./scripts/create-vm-from-managed-os-disks.md) and [attach these restored disks](./linux/add-disk.md#attach-an-existing-disk) to the newly created VM.
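As a rough sketch of that last step with the Azure CLI, assuming you also restored an OS disk named ExampleOsDisk (all names here are hypothetical):

```bash
# Create the VM from the restored OS disk
az vm create \
  --resource-group ExampleRg \
  --name ExampleRestoredVm \
  --attach-os-disk ExampleOsDisk \
  --os-type linux

# Attach the restored data disk to the new VM
az vm disk attach \
  --resource-group ExampleRg \
  --vm-name ExampleRestoredVm \
  --name ExampleDataDisk1
```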
## Next steps
-[Learn more](/azure/virtual-machines/backup-recovery) about Backup and restore options for virtual machines in Azure.
+[Learn more](./backup-recovery.md) about Backup and restore options for virtual machines in Azure.
virtual-machines Virtual Machines Create Restore Points Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-portal.md
To restore a VM from a VM restore point, first restore individual disks from eac
:::image type="content" source="./media/virtual-machines-create-restore-points-portal/create-restore-points-create-disk.png" alt-text="Screenshot of progress of disk creation."::: 2. Enter the details in the **Create a managed disk** dialog to create disks from the restore points.
-Once the disks are created, [create a new VM](/azure/virtual-machines/windows/create-vm-specialized-portal#create-a-vm-from-a-disk.md) and [attach these restored disks](/azure/virtual-machines/windows/attach-managed-disk-portal) to the newly created VM.
+Once the disks are created, [create a new VM](./windows/create-vm-specialized-portal.md#create-a-vm-from-a-disk) and [attach these restored disks](./windows/attach-managed-disk-portal.md) to the newly created VM.
:::image type="content" source="./media/virtual-machines-create-restore-points-portal/create-restore-points-manage-disk.png" alt-text="Screenshot of progress of Create a managed disk screen.":::
virtual-machines Virtual Machines Create Restore Points Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-powershell.md
New-AzDisk -DiskName "ExampleDataDisk1" (New-AzDiskConfig -Location eastus
New-AzDisk -DiskName "ExampleDataDisk2" (New-AzDiskConfig -Location eastus -CreateOption Restore -SourceResourceId $dataDisk2RestorePoint) -ResourceGroupName ExampleRg ```
-After you create the disks, [create a new VM](/azure/virtual-machines/windows/create-vm-specialized-portal) and [attach these restored disks](/azure/virtual-machines/windows/attach-disk-ps#using-managed-disks) to the newly created VM.
+After you create the disks, [create a new VM](./windows/create-vm-specialized-portal.md) and [attach these restored disks](./windows/attach-disk-ps.md#using-managed-disks) to the newly created VM.
## Next steps
-[Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
+[Learn more](backup-recovery.md) about Backup and restore options for virtual machines in Azure.
virtual-machines Vm Applications How To https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications-how-to.md
Title: Create and deploy VM application packages description: Learn how to create and deploy VM Applications using an Azure Compute Gallery.+
virtual-machines Vm Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/vm-applications.md
Title: Overview of VM Applications in the Azure Compute Gallery description: Learn more about VM application packages in an Azure Compute Gallery.-+
virtual-machines Azure To Guest Disk Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/azure-to-guest-disk-mapping.md
Title: How to map Azure Disks to Windows VM guest disks description: How to determine the Azure Disks that underlay a Windows VM's guest disks. -+
virtual-machines Connect Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/connect-ssh.md
az ssh vm -g $myResourceGroup -n $myVM --local-user $myUsername -- -L 3389:loca
## Next steps
-Learn how to transfer files to an existing VM, see [Use SCP to move files to and from a Linux VM](../linux/copy-files-to-linux-vm-using-scp.md). The same steps will also work for Windows machines.
+To learn how to transfer files to an existing VM, see [Use SCP to move files to and from a VM](../copy-files-to-vm-using-scp.md).
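For a quick, hedged illustration of that workflow, assuming the VM exposes SSH with the same key pair shown above, a file copy over SCP might look like this. The file name and address are placeholders.

```bash
# Sketch: copy a local file to the VM's home directory over SCP.
# Replace <vm-public-ip> with your VM's address; "app-config.json" is a hypothetical file.
scp -i ~/.ssh/mySSHkey.pem ./app-config.json azureuser@<vm-public-ip>:~/
```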
virtual-machines Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-overview.md
Azure Disk Encryption is not available on [Basic, A-series VMs](https://azure.mi
> > Windows Server 2012 R2 Core and Windows Server 2016 Core require the bdehdcfg component to be installed on the VM for encryption. - ## Networking requirements To enable Azure Disk Encryption, the VMs must meet the following network endpoint configuration requirements: - To get a token to connect to your key vault, the Windows VM must be able to connect to an Azure Active Directory endpoint, \[login.microsoftonline.com\].
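Once those prerequisites are met, enabling encryption on the OS volume can be sketched with the Azure CLI as below. The resource group, VM, and key vault names are placeholders, not values from the article.

```bash
# Sketch: enable Azure Disk Encryption on a Windows VM's OS volume.
# "MyResourceGroup", "MyWindowsVM", and "MyKeyVault" are hypothetical names.
az vm encryption enable \
  --resource-group MyResourceGroup \
  --name MyWindowsVM \
  --disk-encryption-keyvault MyKeyVault \
  --volume-type OS
```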
virtual-machines Expand Os Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/expand-os-disk.md
Title: Expand virtual hard disks attached to a Windows VM in an Azure
description: Expand the size of the virtual hard disks attached to a virtual machine using Azure PowerShell in the Resource Manager deployment model. -+ Last updated 08/02/2022
virtual-machines Migrate To Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/migrate-to-managed-disks.md
Title: Migrate Azure VMs to Managed Disks description: Migrate Azure virtual machines created using unmanaged disks in storage accounts to use Managed Disks. -+ Last updated 05/30/2019
virtual-machines Migrate To Premium Storage Using Azure Site Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/migrate-to-premium-storage-using-azure-site-recovery.md
Title: Migrate your Windows VMs to Azure Premium Storage with Azure Site Recovery description: Learn how to migrate your VM disks from a standard storage account to a premium storage account by using Azure Site Recovery. -+ Last updated 08/15/2017
virtual-machines Os Disk Swap https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/os-disk-swap.md
Title: Swap OS disk for an Azure VM with PowerShell '
+ Title: Swap OS disk for an Azure VM with PowerShell
description: Change the operating system disk used by an Azure virtual machine using PowerShell.--++ Last updated 04/24/2018-+ - # Change the OS disk used by an Azure VM using PowerShell
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/overview.md
- Title: Overview of Windows VMs in Azure
-description: Overview of Windows virtual machines in Azure.
----- Previously updated : 11/14/2019----
-# Windows virtual machines in Azure
-
-**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets
-
-Azure Virtual Machines (VM) is one of several types of [on-demand, scalable computing resources](/azure/architecture/guide/technology-choices/compute-decision-tree) that Azure offers. Typically, you choose a VM when you need more control over the computing environment than the other choices offer. This article gives you information about what you should consider before you create a VM, how you create it, and how you manage it.
-
-An Azure VM gives you the flexibility of virtualization without having to buy and maintain the physical hardware that runs it. However, you still need to maintain the VM by performing tasks, such as configuring, patching, and installing the software that runs on it.
-
-Azure virtual machines can be used in various ways. Some examples are:
-
-* **Development and test** – Azure VMs offer a quick and easy way to create a computer with specific configurations required to code and test an application.
-* **Applications in the cloud** – Because demand for your application can fluctuate, it might make economic sense to run it on a VM in Azure. You pay for extra VMs when you need them and shut them down when you don't.
-* **Extended datacenter** – Virtual machines in an Azure virtual network can easily be connected to your organization's network.
-
-The number of VMs that your application uses can scale up and out to whatever is required to meet your needs.
-
-## What do I need to think about before creating a VM?
-There are always a multitude of [design considerations](/azure/architecture/reference-architectures/n-tier/windows-vm) when you build out an application infrastructure in Azure. These aspects of a VM are important to think about before you start:
-
-* The names of your application resources
-* The location where the resources are stored
-* The size of the VM
-* The maximum number of VMs that can be created
-* The operating system that the VM runs
-* The configuration of the VM after it starts
-* The related resources that the VM needs
-
-### Locations
-All resources created in Azure are distributed across multiple [geographical regions](https://azure.microsoft.com/regions/) around the world. Usually, the region is called **location** when you create a VM. For a VM, the location specifies where the virtual hard disks are stored.
-
-This table shows some of the ways you can get a list of available locations.
-
-| Method | Description |
-| | |
-| Azure portal |Select a location from the list when you create a VM. |
-| Azure PowerShell |Use the [Get-AzLocation](/powershell/module/az.resources/get-azlocation) command. |
-| REST API |Use the [List locations](/rest/api/resources/subscriptions/listlocations) operation. |
-| Azure CLI |Use the [az account list-locations](/cli/azure/account) operation. |
-
-## Availability
-Azure announced an industry leading single instance virtual machine Service Level Agreement of 99.9% provided you deploy the VM with premium storage for all disks. In order for your deployment to qualify for the standard 99.95% VM Service Level Agreement, you still need to deploy two or more VMs running your workload inside of an availability set. An availability set ensures that your VMs are distributed across multiple fault domains in the Azure data centers as well as deployed onto hosts with different maintenance windows. The full [Azure SLA](https://azure.microsoft.com/support/legal/sla/virtual-machines/) explains the guaranteed availability of Azure as a whole.
--
-## VM size
-The [size](../sizes.md) of the VM that you use is determined by the workload that you want to run. The size that you choose then determines factors such as processing power, memory, storage capacity, and network bandwidth. Azure offers a wide variety of sizes to support many types of uses.
-
-Azure charges an [hourly price](https://azure.microsoft.com/pricing/details/virtual-machines/windows/) based on the VM's size and operating system. For partial hours, Azure charges only for the minutes used. Storage is priced and charged separately.
-
-## VM Limits
-Your subscription has default [quota limits](../../azure-resource-manager/management/azure-subscription-service-limits.md) in place that could impact the deployment of many VMs for your project. The current limit on a per subscription basis is 20 VMs per region. Limits can be raised by [filing a support ticket requesting an increase](../../azure-portal/supportability/regional-quota-requests.md)
-
-### Operating system disks and images
-Virtual machines use [virtual hard disks (VHDs)](../managed-disks-overview.md) to store their operating system (OS) and data. VHDs are also used for the images you can choose from to install an OS.
-
-Azure provides many [marketplace images](https://azuremarketplace.microsoft.com/marketplace/apps?filters=virtual-machine-images%3Bwindows&page=1) to use with various versions and types of Windows Server operating systems. Marketplace images are identified by image publisher, offer, sku, and version (typically version is specified as latest). Only 64-bit operating systems are supported. For more information on the supported guest operating systems, roles, and features, see [Microsoft server software support for Microsoft Azure virtual machines](https://support.microsoft.com/help/2721672/microsoft-server-software-support-for-microsoft-azure-virtual-machines).
-
-This table shows some ways that you can find the information for an image.
-
-| Method | Description |
-| | |
-| Azure portal |The values are automatically specified for you when you select an image to use. |
-| Azure PowerShell |[Get-AzVMImagePublisher](/powershell/module/az.compute/get-azvmimagepublisher) -Location *location*<BR>[Get-AzVMImageOffer](/powershell/module/az.compute/get-azvmimageoffer) -Location *location* -Publisher *publisherName*<BR>[Get-AzVMImageSku](/powershell/module/az.compute/get-azvmimagesku) -Location *location* -Publisher *publisherName* -Offer *offerName* |
-| REST APIs |[List image publishers](/rest/api/compute/platformimages/platformimages-list-publishers)<BR>[List image offers](/rest/api/compute/platformimages/platformimages-list-publisher-offers)<BR>[List image skus](/rest/api/compute/platformimages/platformimages-list-publisher-offer-skus) |
-| Azure CLI |[az vm image list-publishers](/cli/azure/vm/image) --location *location*<BR>[az vm image list-offers](/cli/azure/vm/image) --location *location* --publisher *publisherName*<BR>[az vm image list-skus](/cli/azure/vm) --location *location* --publisher *publisherName* --offer *offerName*|
-
-You can choose to [upload and use your own image](upload-generalized-managed.md) and when you do, the publisher name, offer, and sku aren't used.
-
-### Extensions
-VM [extensions](../extensions/features-windows.md?toc=/azure/virtual-machines/windows/toc.json) give your VM additional capabilities through post deployment configuration and automated tasks.
-
-These common tasks can be accomplished using extensions:
-
-* **Run custom scripts** – The [Custom Script Extension](../extensions/custom-script-windows.md?toc=/azure/virtual-machines/windows/toc.json) helps you configure workloads on the VM by running your script when the VM is provisioned.
-* **Deploy and manage configurations** – The [PowerShell Desired State Configuration (DSC) Extension](../extensions/dsc-overview.md?toc=/azure/virtual-machines/windows/toc.json) helps you set up DSC on a VM to manage configurations and environments.
-* **Collect diagnostics data** – The [Azure Diagnostics Extension](../extensions/diagnostics-template.md?toc=/azure/virtual-machines/windows/toc.json) helps you configure the VM to collect diagnostics data that can be used to monitor the health of your application.
-
-### Related resources
-The resources in this table are used by the VM and need to exist or be created when the VM is created.
-
-| Resource | Required | Description |
-| | | |
-| [Resource group](../../azure-resource-manager/management/overview.md) |Yes |The VM must be contained in a resource group. |
-| [OS disk](../managed-disks-overview.md) |Yes |The VM needs a disk to store the OS in most cases. |
-| [Virtual network](../../virtual-network/virtual-networks-overview.md) |Yes |The VM must be a member of a virtual network. |
-| [Public IP address](../../virtual-network/ip-services/public-ip-addresses.md) |No |The VM can have a public IP address assigned to it to remotely access it. |
-| [Network interface](../../virtual-network/virtual-network-network-interface.md) |Yes |The VM needs the network interface to communicate in the network. |
-| [Data disks](attach-managed-disk-portal.md) |No |The VM can include data disks to expand storage capabilities. |
--
-## Data residency
-
-In Azure, the feature to enable storing customer data in a single region is currently only available in the Southeast Asia Region (Singapore) of the Asia Pacific Geo and Brazil South (Sao Paulo State) Region of Brazil Geo. For all other regions, customer data is stored in Geo. For more information, see [Trust Center](https://azure.microsoft.com/global-infrastructure/data-residency/).
--
-## Next steps
-
-Create your first VM!
--- [Portal](quick-create-portal.md)-- [PowerShell](quick-create-powershell.md)-- [Azure CLI](quick-create-cli.md)
virtual-machines Automation Configure Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-devops.md
az ad app update --id $TF_VAR_app_registration_app_id --web-home-page-url https:
After updating the reply-urls, run the pipeline.
-By default there will be no inbound public internet access to the web app apart from the deployer virtual network. To allow additional access to the web app, navigate to the Azure portal. In the deployer resource group, navigate to the app service resource. Then under settings on the left hand side, click on networking. From here, click Access restriction. Add any allow or deny rules you would like. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](/azure/app-service/app-service-ip-restrictions).
+By default, there is no inbound public internet access to the web app apart from the deployer virtual network. To allow additional access to the web app, go to the Azure portal. In the deployer resource group, navigate to the App Service resource. Under **Settings**, select **Networking**, and then select **Access restriction**. Add any allow or deny rules you would like. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](../../../app-service/app-service-ip-restrictions.md).
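If you prefer scripting this step, a hedged Azure CLI equivalent is sketched below. The resource group, app name, rule name, and IP range are hypothetical placeholders.

```bash
# Sketch: allow one additional IP range to reach the web app.
# <resource-group> and <app-name> come from your deployer resource group; the range is an example.
az webapp config access-restriction add \
  --resource-group <resource-group> \
  --name <app-name> \
  --rule-name AllowCorpRange \
  --action Allow \
  --ip-address 203.0.113.0/24 \
  --priority 300
```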
You will also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service resource. On the left hand side, click "Identity". In the "system assigned" tab, click on "Azure role assignments" > "Add role assignment". Select "subscription" as the scope, and "reader" as the role. Then click save. Without this step, the web app dropdown functionality won't work.
You should now be able to visit the web app, and use it to deploy SAP workload z
## Next step > [!div class="nextstepaction"]
-> [DevOps hands on lab](automation-devops-tutorial.md)
+> [DevOps hands on lab](automation-devops-tutorial.md)
virtual-machines Automation Deploy Control Plane https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-deploy-control-plane.md
IS_PIPELINE_DEPLOYMENT=false
## Accessing the web app
-By default there will be no inbound public internet access to the web app apart from the deployer virtual network. To allow additional access to the web app, navigate to the Azure portal. In the deployer resource group, find the web app. Then under settings on the left hand side, click on networking. From here, click Access restriction. Add any allow or deny rules you would like. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](/azure/app-service/app-service-ip-restrictions).
+By default, there is no inbound public internet access to the web app apart from the deployer virtual network. To allow additional access to the web app, go to the Azure portal. In the deployer resource group, find the web app. Under **Settings**, select **Networking**, and then select **Access restriction**. Add any allow or deny rules you would like. For more information on configuring access restrictions, see [Set up Azure App Service access restrictions](../../../app-service/app-service-ip-restrictions.md).
You will also need to grant reader permissions to the app service system-assigned managed identity. Navigate to the app service resource. On the left hand side, click "Identity". In the "system assigned" tab, click on "Azure role assignments" > "Add role assignment". Select "subscription" as the scope, and "reader" as the role. Then click save. Without this step, the web app dropdown functionality will not work.
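A hedged command-line sketch of the same role assignment is shown below. The resource group, app name, and subscription ID are placeholders for your own values.

```bash
# Sketch: grant the web app's system-assigned managed identity Reader rights on the subscription.
# <resource-group>, <app-name>, and <subscription-id> are placeholders.
principalId=$(az webapp identity show \
  --resource-group <resource-group> \
  --name <app-name> \
  --query principalId --output tsv)

az role assignment create \
  --assignee "$principalId" \
  --role Reader \
  --scope /subscriptions/<subscription-id>
```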
You can log in and visit the web app by following the URL from earlier or clicki
## Next step > [!div class="nextstepaction"]
-> [Configure SAP Workload Zone](automation-configure-workload-zone.md)
+> [Configure SAP Workload Zone](automation-configure-workload-zone.md)
virtual-machines Expose Sap Process Orchestration On Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/expose-sap-process-orchestration-on-azure.md
Integration architecture needs differ depending on the interface used. SAP-propr
This article focuses on modern APIs and http (that includes integration scenarios like [AS2](https://wikipedia.org/wiki/AS2)). [FTP](https://wikipedia.org/wiki/File_Transfer_Protocol) will serve as an example to handle `non-http` integration needs. For more information about the different Microsoft load balancing solutions, see [this article](/azure/architecture/guide/technology-choices/load-balancing-overview). > [!NOTE]
-> SAP publishes dedicated [connectors](https://support.sap.com/en/product/connectors.html) for their proprietary interfaces. Check SAP's documentation for [Java](https://support.sap.com/en/product/connectors/jco.html), and [.NET](https://support.sap.com/en/product/connectors/msnet.html) for example. They are supported by [Microsoft Gateways](/azure/data-factory/connector-sap-table?tabs=data-factory#prerequisites) too. Be aware that iDocs can also be posted via [http](https://blogs.sap.com/2012/01/14/post-idoc-to-sap-erp-over-http-from-any-application/).
+> SAP publishes dedicated [connectors](https://support.sap.com/en/product/connectors.html) for their proprietary interfaces. Check SAP's documentation for [Java](https://support.sap.com/en/product/connectors/jco.html), and [.NET](https://support.sap.com/en/product/connectors/msnet.html) for example. They are supported by [Microsoft Gateways](../../../data-factory/connector-sap-table.md?tabs=data-factory#prerequisites) too. Be aware that iDocs can also be posted via [http](https://blogs.sap.com/2012/01/14/post-idoc-to-sap-erp-over-http-from-any-application/).
Security concerns require the use of [Firewalls](../../../firewall/features.md) for lower-level protocols and [Web Application Firewalls](../../../web-application-firewall/overview.md) (WAF) to address http-based traffic with [Transport Layer Security](https://wikipedia.org/wiki/Transport_Layer_Security) (TLS). To be effective, TLS sessions need to be terminated at the WAF level. To support zero-trust approaches, it's advisable to [re-encrypt](../../../application-gateway/ssl-overview.md) traffic afterwards to ensure end-to-end encryption.
Which integration flavor described in this article fits your requirements best,
- [High-availability](../../../virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios.md) and [disaster recovery](/azure/cloud-adoption-framework/scenarios/sap/eslz-business-continuity-and-disaster-recovery) for the VM-based SAP integration workloads -- Modern [authentication mechanisms like OAuth2](/azure/api-management/sap-api?#production-considerations) where applicable
+- Modern [authentication mechanisms like OAuth2](../../../api-management/sap-api.md#production-considerations) where applicable
- Utilize a managed key store like [Azure Key Vault](../../../key-vault/general/overview.md) for all involved credentials, certificates, and keys
The integration scenarios covered by SAP Process Orchestration can be addressed
[Protect APIs with Application Gateway and API Management](/azure/architecture/reference-architectures/apis/protect-apis)
-[Integrate API Management in an internal virtual network with Application Gateway](/azure/api-management/api-management-howto-integrate-internal-vnet-appgateway)
+[Integrate API Management in an internal virtual network with Application Gateway](../../../api-management/api-management-howto-integrate-internal-vnet-appgateway.md)
[Deploy the Application Gateway WAF triage workbook to better understand SAP related WAF alerts](https://github.com/Azure/Azure-Network-Security/tree/master/Azure%20WAF/Workbook%20-%20AppGw%20WAF%20Triage%20Workbook)
The integration scenarios covered by SAP Process Orchestration can be addressed
[Understand implication of combining Azure Firewall and Azure Application Gateway](/azure/architecture/example-scenario/gateway/firewall-application-gateway#application-gateway-before-firewall)
-[Work with SAP OData APIs in Azure API Management](/azure/api-management/sap-api)
+[Work with SAP OData APIs in Azure API Management](../../../api-management/sap-api.md)
virtual-machines Sap Deployment Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-deployment-checklist.md
We recommend that you set up and validate a full HADR solution and security desi
1. Test the validity of your Azure role-based access control (Azure RBAC) architecture. The goal is to separate and limit the access and permissions of different teams. For example, SAP Basis team members should be able to deploy VMs and assign disks from Azure Storage into a given Azure virtual network. But the SAP Basis team shouldn't be able to create its own virtual networks or change the settings of existing virtual networks. Members of the network team shouldn't be able to deploy VMs into virtual networks in which SAP application and DBMS VMs are running. Nor should members of this team be able to change attributes of VMs or even delete VMs or disks. 1. Verify that [network security group and ASC](../../../virtual-network/network-security-groups-overview.md) rules work as expected and shield the protected resources. 1. Make sure that all resources that need to be encrypted are encrypted. Define and implement processes to back up certificates, store and access those certificates, and restore the encrypted entities.
- 1. Use [Azure Disk Encryption](../../../security/fundamentals/azure-disk-encryption-vms-vmss.md) for OS disks where possible from an OS-support point of view.
+ 1. Use [Azure Disk Encryption](../../../virtual-machines/disk-encryption-overview.md) for OS disks where possible from an OS-support point of view.
1. Be sure that you're not using too many layers of encryption. In some cases, it does make sense to use Azure Disk Encryption together with one of the DBMS Transparent Data Encryption methods to protect different disks or components on the same server. For example, on an SAP DBMS server, the Azure Disk Encryption (ADE) can be enabled on the operating system boot disk (if the OS supports ADE) and those data disk(s) not used by the DBMS data persistence files. An example is to use ADE on the disk holding the DBMS TDE encryption keys. 1. Performance testing. In SAP, based on SAP tracing and measurements, make these comparisons: - Where applicable, compare the top 10 online reports to your current implementation.
virtual-machines Sap Planning Supported Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/sap-planning-supported-configurations.md
For simplification, we did not distinguish between SAP Central Services and SAP
## High Availability protection for the SAP DBMS layer As you look to deploy SAP production systems, you need to consider hot standby type of high availability configurations. Especially with SAP HANA, where data needs to be loaded into memory before being able to get the full performance and scalability back, Azure service healing is not an ideal measure for high availability.
-In general, Microsoft supports only high availability configurations and software packages that are described in the [SAP workload scenarios](/azure/virtual-machines/workloads/sap/get-started). You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other high availability third-party software frameworks that are not documented by Microsoft with SAP workload. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration who needs to be engaged by you as a customer into the support process. Exceptions are going to be mentioned in this article.
+In general, Microsoft supports only the high availability configurations and software packages that are described in the [SAP workload scenarios](./get-started.md). You can read the same statement in SAP note [#1928533](https://launchpad.support.sap.com/#/notes/1928533). Microsoft will not provide support for other third-party high availability frameworks that are not documented by Microsoft for SAP workloads. In such cases, the third-party supplier of the high availability framework is the supporting party for the high availability configuration, and you, as the customer, need to engage them in the support process. Exceptions are mentioned in this article.
In general Microsoft supports a limited set of high availability configurations on Azure VMs or HANA Large Instances units. For the supported scenarios of HANA Large Instances, read the document [Supported scenarios for HANA Large Instances](./hana-supported-scenario.md).
Scenario(s) that we did not test and therefore have no experience with list like
## Next Steps
-Read next steps in the [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
+Read next steps in the [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)
virtual-network-manager Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/faq.md
Azure SQL Managed Instance has some network requirements. These are enforced thr
* Virtual networks can't be added to a network group when the Azure Virtual Network Manager custom policy `enforcementMode` element is set to `Disabled`.
-* Azure Virtual Network Manager policies don't support the standard policy compliance evaluation cycle. For more information, see [Evaluation triggers](/azure/governance/policy/how-to/get-compliance-data#evaluation-triggers).
+* Azure Virtual Network Manager policies don't support the standard policy compliance evaluation cycle. For more information, see [Evaluation triggers](../governance/policy/how-to/get-compliance-data.md#evaluation-triggers).
## Next steps
-Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
+Create an [Azure Virtual Network Manager](create-virtual-network-manager-portal.md) instance using the Azure portal.
virtual-network-manager How To Block High Risk Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-block-high-risk-ports.md
For this how-to, you'll need a virtual network environment that includes virtual
* Place all virtual networks in the same subscription, region, and resource group
-Not sure how to build a virtual network? Learn more in [Quickstart: Create a virtual network using the Azure portal](/azure/virtual-network/quick-create-portal).
+Not sure how to build a virtual network? Learn more in [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
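If you'd rather script the prerequisite networks, a minimal Azure CLI sketch follows. The resource group, region, names, and address ranges are hypothetical choices, assuming all virtual networks share the same subscription, region, and resource group as noted above.

```bash
# Sketch: create one of the prerequisite virtual networks.
# "myAVNMResourceGroup", "VNet-A", and the address space are hypothetical; repeat with different names and prefixes as needed.
az network vnet create \
  --resource-group myAVNMResourceGroup \
  --location eastus \
  --name VNet-A \
  --address-prefixes 10.1.0.0/16 \
  --subnet-name default \
  --subnet-prefixes 10.1.0.0/24
```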
## Create a Virtual Network Manager
To apply the new rule collection, you'll redeploy your security admin configurat
- Learn how to [create a mesh network topology with Azure Virtual Network Manager using the Azure portal](how-to-create-mesh-network.md) -- Check out the [Azure Virtual Network Manager FAQ](faq.md)-
+- Check out the [Azure Virtual Network Manager FAQ](faq.md)
virtual-network-manager Tutorial Create Secured Hub And Spoke https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/tutorial-create-secured-hub-and-spoke.md
Deploy a virtual network gateway into the hub virtual network. This virtual netw
1. Select **Review + create** and then select **Create** after validation has passed. The deployment of a virtual network gateway can take about 30 minutes. You can move on to the next section while waiting for this deployment to complete.
-## Create a network group
+## Create a dynamic network group
1. Go to your Azure Virtual Network Manager instance. This tutorial assumes you've created one using the [quickstart](create-virtual-network-manager-portal.md) guide.
Deploy a virtual network gateway into the hub virtual network. This virtual netw
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-page.png" alt-text="Screenshot of the network groups page.":::
-1. On the **Get started** tab, select **Add** under *Define dynamic membership*.
+1. On the **Overview** page, select **Create Azure Policy** under *Create policy to dynamically add members*.
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/define-dynamic-membership.png" alt-text="Screenshot of the define dynamic membership button.":::
-1. On the **Define dynamic membership** page, select or enter the following information:
+1. On the **Create Azure Policy** page, select or enter the following information:
:::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/network-group-conditional.png" alt-text="Screenshot of create a network group conditional statements tab."::: | Setting | Value | | - | -- |
+ | Policy name | Enter **VNetAZPolicy** in the text box. |
+ | Scope | Select **Select Scopes** and choose your current subscription. |
+ | Criteria | |
| Parameter | Select **Name** from the drop-down.| | Operator | Select **Contains** from the drop-down.|
- | Condition | Enter **VNet-** to add the three previously created virtual networks into this network group. |
-
-1. Select **Preview resources** to verify the virtual networks selected by the conditional statement, and select **Close**. Then select **Save** to deploy the group membership.
-
- :::image type="content" source="./media/tutorial-create-secured-hub-and-spoke/evaluate-vnet.png" alt-text="Screenshot of effective virtual networks page.":::
+ | Condition | Enter **VNet-** to dynamically add the three previously created virtual networks into this network group. |
+1. Select **Save** to deploy the group membership.
+1. Under **Settings**, select **Group Members** to view the membership of the group based on the conditions defined in Azure Policy.
## Create a hub and spoke connectivity configuration 1. Select **Configuration** under *Settings*, then select **+ Add a configuration**. Select **Connectivity** from the drop-down menu.
virtual-network Create Vm Dual Stack Ipv6 Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal.md
You'll create two public IP addresses in this section, IPv4 and IPv6.
4. Select **Create**.
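As a hedged alternative to the portal steps, the Azure CLI sketch below creates the two public IP addresses. It assumes the **myResourceGroup** resource group and **East US 2** region used elsewhere in this article.

```bash
# Sketch: create Standard-SKU IPv4 and IPv6 public IP addresses for the dual-stack VM.
az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP-IPv4 \
  --location eastus2 \
  --sku Standard \
  --version IPv4

az network public-ip create \
  --resource-group myResourceGroup \
  --name myPublicIP-IPv6 \
  --location eastus2 \
  --sku Standard \
  --version IPv6
```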
-## Create network security group
-
-You'll create a network security group to allow SSH connections to the virtual machine.
-
-1. In the search box at the top of the portal, enter **Network security group**. Select **Network security groups** in the search results.
-
-2. Select **+Create**.
-
-3. Enter or select the following information in the **Basics** tab.
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myNSG**. |
- | Region | Select **East US 2**. |
-
-4. Select **Review + create**.
-
-5. Select **Create**.
-
-### Create network security group rules
-
-In this section, you'll create the inbound rule.
-
-1. In the search box at the top of the portal, enter **Network security group**. Select **Network security groups** in the search results.
-
-2. In **Network security groups**, select **myNSG**.
-
-3. In **Settings**, select **Inbound security rules**.
-
-4. Select **+ Add**.
-
-5. In **Add inbound security rule**, enter or select the following information.
-
- | Setting | Value |
- | - | -- |
- | Source | Leave the default of **Any**. |
- | Source port ranges | Leave the default of *. |
- | Destination | Leave the default of **Any**. |
- | Service | Select **SSH**. |
- | Action | Leave the default of **Allow**. |
- | Priority | Enter **200**. |
- | Name | Enter **myNSGRuleSSH**. |
-
-6. Select **Add**.
-
-## Create virtual machine
-
-In this section, you'll create the virtual machine and its supporting resources.
-
-### Create network interface
-
-You'll create a network interface and attach the public IP addresses you created previously.
-
-1. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
-
-2. Select **+ Create**.
-
-3. In the **Basics** tab of **Create network interface, enter or select the following information.
-
- | Setting | Value |
- | - | -- |
- | **Project details** | |
- | Subscription | Select your subscription. |
- | Resource group | Select **myResourceGroup**. |
- | **Instance details** | |
- | Name | Enter **myNIC1**. |
- | Region | Select **East US 2**. |
- | Virtual network | Select **myVNet**. |
- | Subnet | Select **myBackendSubnet (10.1.0.0/24,2404:f800:8000:122:/64)**. |
- | Network security group | Select **myNSG**. |
- | Private IP address (IPv6) | Select the box. |
- | IPv6 name | Enter **Ipv6config**. |
-
-4. Select **Review + create**.
-
-5. Select **Create**.
-
-### Associate public IP addresses
-
-You'll associate the IPv4 and IPv6 addresses you created previously to the network interface.
-
-1. In the search box at the top of the portal, enter **Network interface**. Select **Network interfaces** in the search results.
-
-2. Select **myNIC1**.
-
-3. Select **IP configurations** in **Settings**.
-
-4. In **IP configurations**, select **Ipv4config**.
-
-5. In **Ipv4config**, select **Associate** in **Public IP address**.
-
-6. Select **myPublicIP-IPv4** in **Public IP address**.
-
-7. Select **Save**.
-
-8. Close **Ipv4config**.
-
-9. In **IP configurations**, select **ipconfig-ipv6**.
-
-10. In **Ipv6config**, select **Associate** in **Public IP address**.
-
-11. Select **myPublicIP-IPv6** in **Public IP address**.
-
-12. Select **Save**.
- ### Create virtual machine 1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
You'll associate the IPv4 and IPv6 addresses you created previously to the netwo
| **Network interface** | | | Virtual network | Select **myVNet**. | | Subnet | Select **myBackendSubnet (10.1.0.0/24,2404:f800:8000:122:/64)**. |
- | Public IP | Select **None**. |
- | NIC network security group | Select **None**. |
+ | Public IP | Select **myPublicIP-IPv4**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **Create new**. </br> Enter **myNSG** in Name. </br> Select **OK**. |
6. Select **Review + create**.
You'll associate the IPv4 and IPv6 addresses you created previously to the netwo
### Network interface configuration
-A network interface is automatically created and attached to the chosen virtual network during creation. In this section, you'll remove this default network interface and attach the network interface you created previously.
+A network interface is automatically created and attached to the chosen virtual network during creation. In this section, you'll add the IPv6 configuration to the existing network interface.
1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results. 2. Select **myVM**.
-3. Select **Networking** in **Settings**.
+3. Select **Stop**, to stop the virtual machine. Wait for the machine to shut down.
-4. Select **Attach network interface**.
+4. Select **Networking** in **Settings**.
-5. Select **myNIC1** that you created previously.
+5. The name of your default network interface will be **myvmxx**, where xx is a random number. In this example, it's **myvm281**. Select **myvm281** next to **Network Interface:**.
-6. Select **OK**.
+6. In the properties of the network interface, select **IP configurations** in **Settings**.
-7. Select **Detach network interface**.
+7. In **IP configurations**, select **+ Add**.
-8. The name of your default network interface will be **myvmxx**, with xx a random number. In this example, it's **myvm281**. Select **myvm281** in **Detach network interface**.
+8. In **Add IP configuration**, enter or select the following information.
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **Ipv6config**. |
+ | IP version | Select **IPv6**. |
+ | **Private IP address settings** | |
+ | Allocation | Leave the default of **Dynamic**. |
+ | Public IP address | Select **Associate**. |
+ | Public IP address | Select **myPublicIP-IPv6**. |
9. Select **OK**. 10. Return to the **Overview** of **myVM** and start the virtual machine.
-11. The default network interface can be safely deleted.
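For completeness, a hedged Azure CLI sketch of the same IPv6 configuration step is shown below. It assumes the example NIC name **myvm281** from the portal walkthrough and a stopped VM, as in the steps above.

```bash
# Sketch: add an IPv6 IP configuration to the VM's existing NIC and attach the IPv6 public IP.
# "myvm281" is the example NIC name from this article; substitute your own NIC name.
az network nic ip-config create \
  --resource-group myResourceGroup \
  --nic-name myvm281 \
  --name Ipv6config \
  --private-ip-address-version IPv6 \
  --public-ip-address myPublicIP-IPv6
```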
- ## Test SSH connection You'll connect to the virtual machine with SSH to test the IPv4 public IP address.
You'll connect to the virtual machine with SSH to test the IPv4 public IP addres
4. Open an SSH connection to the virtual machine by using the following command. Replace the IP address with the IP address of your virtual machine. Replace **`azureuser`** with the username you chose during virtual machine creation. The **`-i`** is the path to the private key that you downloaded earlier. In this example, it's **~/.ssh/mySSHKey.pem**.
-```bash
-ssh -i ~/.ssh/mySSHkey.pem azureuser@20.22.46.19
-```
+ ```bash
+ ssh -i ~/.ssh/mySSHkey.pem azureuser@20.22.46.19
+ ```
## Clean up resources
virtual-network Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-overview.md
Virtual appliance UDR / ExpressRoute >> NAT gateway >> Instance-level public IP
### NAT gateway timers
-* NAT gateway holds on to SNAT ports after a connection closes before it is available to reuse to connect to the same destination endpoint over the internet. SNAT port reuse timer durations vary depending on how the connection closes. To learn more, see [Port Reuse Timers](/azure/virtual-network/nat-gateway/nat-gateway-resource#port-reuse-timers).
+* NAT gateway holds on to SNAT ports after a connection closes before they're available for reuse to connect to the same destination endpoint over the internet. SNAT port reuse timer durations vary depending on how the connection closes. To learn more, see [Port Reuse Timers](./nat-gateway-resource.md#port-reuse-timers).
-* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives. To learn more, see [Idle Timeout Timers](/azure/virtual-network/nat-gateway/nat-gateway-resource#idle-timeout-timers).
+* A default TCP idle timeout of 4 minutes is used and can be increased to up to 120 minutes. Any activity on a flow can also reset the idle timer, including TCP keepalives. To learn more, see [Idle Timeout Timers](./nat-gateway-resource.md#idle-timeout-timers).
* UDP traffic has an idle timeout timer of 4 minutes that cannot be changed.
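To illustrate adjusting the configurable TCP idle timeout mentioned above, a hedged Azure CLI sketch follows. The resource group and NAT gateway names are placeholders.

```bash
# Sketch: raise the TCP idle timeout on an existing NAT gateway to 10 minutes.
# <resource-group> and <nat-gateway-name> are placeholders for your own resources.
az network nat gateway update \
  --resource-group <resource-group> \
  --name <nat-gateway-name> \
  --idle-timeout 10
```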
For information on the SLA, see [SLA for Virtual Network NAT](https://azure.micr
* Learn about the [NAT gateway resource](./nat-gateway-resource.md).
-* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).
+* [Learn module: Introduction to Azure Virtual Network NAT](/learn/modules/intro-to-azure-virtual-network-nat).