Updates from: 07/01/2022 22:57:40
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
There are three ways to check whether an application is in quarantine:
## Why is my application in quarantine?
+The following are common reasons your application might go into quarantine:
+|Description|Recommended Action|
+|---|---|
+|**SCIM compliance issue:** An HTTP/404 Not Found response was returned rather than the expected HTTP/200 OK response. In this case, the Azure AD provisioning service has made a request to the target application and received an unexpected response.|Check the admin credentials section. See if the application requires specifying the tenant URL and that the URL is correct. If you don't see an issue, contact the application developer to ensure that their service is SCIM-compliant. See [RFC 7644, section 3.4.2](https://tools.ietf.org/html/rfc7644#section-3.4.2).|
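
As a quick sanity check (a minimal sketch; the endpoint URL, bearer token, and filter value below are placeholders, not values from this article), you can probe the application's SCIM endpoint directly and confirm it answers a basic filtered query with HTTP/200 rather than HTTP/404:

```bash
# Hypothetical SCIM endpoint and token -- substitute your application's values.
SCIM_BASE_URL="https://scim.example.com/scim/v2"
BEARER_TOKEN="<secret-token>"

# A SCIM-compliant service should answer a filtered Users query (RFC 7644, section 3.4.2)
# with HTTP/200 and a ListResponse body (even when totalResults is 0), not HTTP/404.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer ${BEARER_TOKEN}" \
  -H "Accept: application/scim+json" \
  "${SCIM_BASE_URL}/Users?filter=userName%20eq%20%22test.user%40example.com%22"
```
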
active-directory All Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/all-reports.md
# View a list and description of system reports
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some of the information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- Permissions Management has various types of system reports that capture specific sets of data. These reports allow management, auditors, and administrators to: - Make timely decisions.
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
# Frequently asked questions (FAQs)
-> [!IMPORTANT]
-> Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-
-> [!NOTE]
-> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
-- This article answers frequently asked questions (FAQs) about Permissions Management. ## What's Permissions Management?
Yes, non-Azure customers can use our solution. Permissions Management is a multi
## Is Permissions Management available for tenants hosted in the European Union (EU)?
-No, the Permissions Management Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
+No, the Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does Permissions Management provide?
Permissions Management currently doesn't support hybrid environments.
Permissions Management supports user identities (for example, employees, customers, external partners) and workload identities (for example, virtual machines, containers, web apps, serverless functions).
-<!## Is Permissions Management General Data Protection Regulation (GDPR) compliant?
-
-Permissions Management is currently not GDPR compliant.>
- ## Is Permissions Management available in Government Cloud? No, Permissions Management is currently not available in Government clouds.
It depends on each customer and how many AWS accounts, GCP projects, and Azure s
## Once Permissions Management is deployed, how fast can I get permissions insights?
-Once fully onboarded with data collection set up, customers can access permissions usage insights within hours. Our machine-learning engine refreshes the Permission Creep Index every hour so that customers can start their risk assessment right away.
+Once fully onboarded with data collection setup, customers can access permissions usage insights within hours. Our machine-learning engine refreshes the Permission Creep Index every hour so that customers can start their risk assessment right away.
## Is Permissions Management collecting and storing sensitive personal data?
No, Permissions Management doesn't have access to sensitive personal data.
You can read our blog and visit our web page. You can also get in touch with your Microsoft point of contact to schedule a demo.
+## What is the data destruction/decommission process?
+
+If a customer initiates a free Permissions Management 90-day trial but does not convert to a paid license within 90 days of the free trial expiration, we will delete all collected data on or just before the 90-day mark.
+
+If a customer decides to discontinue licensing the service, we will also delete all previously collected data within 90 days of license termination.
+
+We also have the ability to remove, export, or modify specific data should the Global Admin using the Entra Permissions Management service file an official Data Subject Request. This can be initiated by opening a ticket in the Azure portal ([New support request - Microsoft Entra admin center](https://entra.microsoft.com/#blade/Microsoft_Azure_Support/NewSupportRequestV3Blade/callerName/ActiveDirectory/issueType/technical)) or, alternatively, by contacting your local Microsoft representative.
+
+## Do I require a license to use Entra Permissions Management?
+
+Yes, as of July 1st, 2022, new customers must acquire a free 90-day trial license or a paid license to use the service. You can enable a trial or purchase licenses here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement)
+
+## What do I do if I'm using the Public Preview version of Entra Permissions Management?
+
+If you are using the Public Preview version of Entra Permissions Management, your current deployment(s) will continue to work through October 1st.
+
+After October 1st, you will need to move to the newly released version of the service and enable a 90-day trial or purchase licenses to continue using the service.
+
+## What do I do if I'm using the legacy version of the CloudKnox service?
+
+We are currently developing a migration plan to help customers on the original CloudKnox service move to the new Entra Permissions Management service later in 2022.
+
+## Can I use Entra Permissions Management in the EU?
+
+Yes, the product is compliant.
+
+## How do I enable one of the 18 new languages supported in the GA release?
+
+We are now localized in 18 languages. We respect your browser language setting, or you can manually select your language of choice by adding a query string suffix to your Entra Permissions Management URL:
+
+`?lang=xx-XX`
+
+Where `xx-XX` is one of the following available language parameters: 'cs-CZ', 'de-DE', 'en-US', 'es-ES', 'fr-FR', 'hu-HU', 'id-ID', 'it-IT', 'ja-JP', 'ko-KR', 'nl-NL', 'pl-PL', 'pt-BR', 'pt-PT', 'ru-RU', 'sv-SE', 'tr-TR', 'zh-CN', or 'zh-TW'.
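
For example, to force the UI to French, append the suffix to your own deployment URL (the base address below is a hypothetical placeholder for your Entra Permissions Management URL):

```
https://<your-permissions-management-url>/?lang=fr-FR
```
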
+## Resources
+
+- [Public Preview announcement blog](https://www.aka.ms/CloudKnox-Public-Preview-Blog)
+- [Permissions Management web page](https://microsoft.com/security/business/identity-access-management/permissions-management)
+- For more information about Microsoft's privacy and security terms, see [Commercial Licensing Terms](https://www.microsoft.com/licensing/terms/product/ForallOnlineServices/all).
+- For more information about Microsoft's data processing and security terms when you subscribe to a product, see [Microsoft Products and Services Data Protection Addendum (DPA)](https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA).
+- For more information about Microsoft's policy and practices for Data Subject Requests under GDPR and CCPA, see [https://docs.microsoft.com/compliance/regulatory/gdpr-dsr-azure](https://docs.microsoft.com/compliance/regulatory/gdpr-dsr-azure).
## Next steps -- For an overview of Permissions Management, see [What's Permissions Management Permissions Management?](overview.md).
+- For an overview of Permissions Management, see [What's Permissions Management?](overview.md).
- For information on how to onboard Permissions Management in your organization, see [Enable Permissions Management in your organization](onboard-enable-tenant.md).
active-directory How To Add Remove Role Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-add-remove-role-task.md
# Add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities -
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management (Entra) is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can add and remove roles and tasks for Microsoft Azure and Google Cloud Platform (GCP) identities using the **Remediation** dashboard. > [!NOTE]
active-directory How To Attach Detach Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-attach-detach-permissions.md
# Attach and detach policies for Amazon Web Services (AWS) identities -
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can attach and detach permissions for users, roles, and groups for Amazon Web Services (AWS) identities using the **Remediation** dashboard. > [!NOTE]
active-directory How To Audit Trail Results https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-audit-trail-results.md
# Generate an on-demand report from a query
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can generate an on-demand report from a query in the **Audit** dashboard in Permissions Management. You can: - Run a report on-demand.
active-directory How To Clone Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-clone-role-policy.md
# Clone a role/policy in the Remediation dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can use the **Remediation** dashboard in Permissions Management to clone roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems. > [!NOTE]
active-directory How To Create Alert Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-alert-trigger.md
# Create and view activity alerts and alert triggers
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can create and view activity alerts and alert triggers in Permissions Management. ## Create an activity alert trigger
active-directory How To Create Approve Privilege Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-approve-privilege-request.md
# Create or approve a request for permissions
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to create or approve a request for permissions in the **Remediation** dashboard in Permissions Management. You can create and approve requests for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems. The **Remediation** dashboard has two privilege-on-demand (POD) workflows you can use:
active-directory How To Create Custom Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-custom-queries.md
# Create a custom query
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can use the **Audit** dashboard in Permissions Management to create custom queries that you can modify, save, and run as often as you want. ## Open the Audit dashboard
active-directory How To Create Group Based Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-group-based-permissions.md
# Select group-based permissions settings
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can create and manage group-based permissions in Permissions Management with the User management dashboard. [!NOTE] The Permissions Management Administrator for all authorization systems will be able to create the new group-based permissions.
active-directory How To Create Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-role-policy.md
# Create a role/policy in the Remediation dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can use the **Remediation** dashboard in Permissions Management to create roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems. > [!NOTE]
active-directory How To Create Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-create-rule.md
# Create a rule in the Autopilot dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to create a rule in the Permissions Management **Autopilot** dashboard. > [!NOTE]
active-directory How To Delete Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-delete-role-policy.md
# Delete a role/policy in the Remediation dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can use the **Remediation** dashboard in Permissions Management to delete roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems. > [!NOTE]
active-directory How To Modify Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-modify-role-policy.md
# Modify a role/policy in the Remediation dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can use the **Remediation** dashboard in Permissions Management to modify roles/policies for the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems. > [!NOTE]
active-directory How To Notifications Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-notifications-rule.md
# View notification settings for a rule in the Autopilot dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to view notification settings for a rule in the Permissions Management **Autopilot** dashboard. > [!NOTE]
active-directory How To Recommendations Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-recommendations-rule.md
# Generate, view, and apply rule recommendations in the Autopilot dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to generate and view rule recommendations in the Permissions Management **Autopilot** dashboard. > [!NOTE]
active-directory How To Revoke Task Readonly Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-revoke-task-readonly-status.md
# Revoke access to high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities -
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can revoke high-risk and unused tasks or assign read-only status for Microsoft Azure and Google Cloud Platform (GCP) identities using the **Remediation** dashboard. > [!NOTE]
active-directory How To View Role Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/how-to-view-role-policy.md
# View information about roles/policies in the Remediation dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Remediation** dashboard in Permissions Management enables system administrators to view, adjust, and remediate excessive permissions based on a user's activity data. You can use the **Roles/Policies** subtab in the dashboard to view information about roles and policies in the Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) authorization systems. > [!NOTE]
active-directory Integration Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/integration-api.md
# Set and view configuration settings
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This topic describes how to view configuration settings, create and delete a service account, and create a role in Permissions Management. ## View configuration settings
active-directory Multi Cloud Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/multi-cloud-glossary.md
# The Permissions Management glossary
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This glossary provides a list of some of the commonly used cloud terms in Permissions Management. These terms will help Permissions Management users navigate through cloud-specific terms and cloud-generic terms. ## Commonly-used acronyms and terms
active-directory Onboard Add Account After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-add-account-after-onboarding.md
# Add an account/subscription/project after onboarding is complete
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to add an Amazon Web Services (AWS) account, Microsoft Azure subscription, or Google Cloud Platform (GCP) project in Microsoft Permissions Management after you've completed the onboarding process. ## Add an AWS account after onboarding is complete
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
# Onboard an Amazon Web Services (AWS) account
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-
-> [!NOTE]
-> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
-- This article describes how to onboard an Amazon Web Services (AWS) account on Permissions Management. > [!NOTE]
To view a video on how to configure and onboard AWS accounts in Permissions Mana
### 5. Set up an AWS member account
+Select the **Enable AWS SSO** checkbox if the AWS account access is configured through AWS SSO.
+
+Choose from three options to manage AWS accounts.
+
+#### Option 1: Automatically manage
+
+Choose this option to automatically detect accounts and add them to the monitored account list, without additional configuration. Steps to detect the list of accounts and onboard them for collection:
+
+- Deploy the Master account CFT (CloudFormation template), which creates an organization account role that grants the OIDC role created earlier permission to list accounts, OUs, and SCPs.
+- If AWS SSO is enabled, the organization account CFT also adds the policy needed to collect AWS SSO configuration details.
+- Deploy the Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. This creates a cross-account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection.
+
+Any current or future accounts found are onboarded automatically.
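
The CFT deployment itself is a standard CloudFormation operation. As a minimal sketch (the stack name, template path, CLI profile, and role name below are hypothetical placeholders, not values from this article; use the templates provided on the onboarding page), deploying the member account CFT with the AWS CLI could look like this:

```bash
# Deploy the member account CFT downloaded from the onboarding page.
# Stack name, template path, and profile are placeholders -- substitute your own.
aws cloudformation deploy \
  --profile member-account-profile \
  --stack-name entra-permissions-mgmt-member \
  --template-file member-account-cft.yaml \
  --capabilities CAPABILITY_NAMED_IAM

# Confirm the cross-account role exists and has the SecurityAudit policy attached.
aws iam list-attached-role-policies \
  --profile member-account-profile \
  --role-name <member-account-role-name>
```
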
+
+To view the status of onboarding after saving the configuration:
+
+- Navigate to the **Data Collectors** tab.
+- Click the status of the data collector.
+- View the accounts on the **In Progress** page.
+
+#### Option 2: Enter authorization systems
1. In the **Permissions Management Onboarding - AWS Member Account Details** page, enter the **Member Account Role** and the **Member Account IDs**. You can enter up to 10 account IDs. Click the plus icon next to the text box to add more account IDs.
To view a video on how to configure and onboard AWS accounts in Permissions Mana
1. Return to Permissions Management, and in the **Permissions Management Onboarding - AWS Member Account Details** page, select **Next**. This step completes the sequence of required connections from Azure AD STS to the OIDC connection account and the AWS member account.
+
+#### Option 3: Select authorization systems
+
+This option detects all AWS accounts that are accessible through the OIDC role created earlier.
+
+- Deploy the Master account CFT (CloudFormation template), which creates an organization account role that grants the OIDC role created earlier permission to list accounts, OUs, and SCPs.
+- If AWS SSO is enabled, the organization account CFT also adds the policy needed to collect AWS SSO configuration details.
+- Deploy the Member account CFT in all the accounts that need to be monitored by Entra Permissions Management. This creates a cross-account role that trusts the OIDC role created earlier. The SecurityAudit policy is attached to the role created for data collection.
+- Click **Verify and Save**.
+- Navigate to the newly created data collector row under AWS data collectors.
+- Click the **Status** column when the row has a "Pending" status.
+- To onboard and start collection, choose specific accounts from the detected list and consent to collection.
### 6. Review and save
active-directory Onboard Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-azure.md
# Onboard a Microsoft Azure subscription
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management (Permissions Management) is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-
-> [!NOTE]
-> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
- This article describes how to onboard a Microsoft Azure subscription or subscriptions on Permissions Management (Permissions Management). Onboarding a subscription creates a new authorization system to represent the Azure subscription in Permissions Management. > [!NOTE]
To view a video on how to enable Permissions Management in your Azure AD tenant,
### 1. Add Azure subscription details
-1. On the **Permissions Management Onboarding - Azure Subscription Details** page, enter the **Subscription IDs** that you want to onboard.
+Choose from three options to manage Azure subscriptions.
+
+#### Option 1: Automatically manage
+
+This option allows subscriptions to be automatically detected and monitored without additional configuration. Steps to detect the list of subscriptions and onboard them for collection:
+
+- Grant the Reader role to the Cloud Infrastructure Entitlement Management application at the management group or subscription scope.
+
+Any current or future subscriptions found are onboarded automatically.
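
A minimal sketch of that role assignment with the Azure CLI follows. It assumes the Permissions Management service principal is registered under the display name "Cloud Infrastructure Entitlement Management"; the management group name and subscription ID are placeholders:

```bash
# Look up the service principal for the Cloud Infrastructure Entitlement Management app.
# On older Azure CLI versions, query "[0].objectId" instead of "[0].id".
SP_ID=$(az ad sp list --display-name "Cloud Infrastructure Entitlement Management" \
  --query "[0].id" --output tsv)

# Grant Reader at management group scope so current and future subscriptions are detected.
az role assignment create \
  --assignee "$SP_ID" \
  --role "Reader" \
  --scope "/providers/Microsoft.Management/managementGroups/<management-group-name>"

# Or grant Reader on a single subscription instead.
az role assignment create \
  --assignee "$SP_ID" \
  --role "Reader" \
  --scope "/subscriptions/<subscription-id>"
```
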
+
+To view the status of onboarding after saving the configuration:
+
+1. In the MEPM portal, click the cog on the top right-hand side.
+1. Navigate to the **Data Collectors** tab.
+1. Click **Create Configuration**.
+1. For the onboarding mode, select **Automatically Manage**.
+1. Click **Verify Now & Save**.
+1. Collectors will now be listed and move through the status types. For each collector listed with a status of **Collected Inventory**, click that status to view further information.
+1. You can then view subscriptions on the **In Progress** page.
- > [!NOTE]
- > To locate the Azure subscription IDs, open the **Subscriptions** page in Azure.
- > You can enter up to 10 subscriptions IDs. Select the plus sign **(+)** icon next to the text box to enter more subscriptions.
+#### Option 2: Enter authorization systems
-1. From the **Scope** dropdown, select **Subscription** or **Management Group**. The script box displays the role assignment script.
+You can specify which subscriptions to manage and monitor with MEPM (up to 10 per collector). Follow the steps below to configure these subscriptions for monitoring:
- > [!NOTE]
- > Select **Subscription** if you want to assign permissions separately for each individual subscription. The generated script has to be executed once per subscription.
- > Select **Management Group** if all of your subscriptions are under one management group. The generated script must be executed once for the management group.
+1. For each subscription you wish to manage, ensure that the **Reader** role has been granted to the Cloud Infrastructure Entitlement Management application for that subscription.
+1. In the MEPM portal, click the cog on the top right-hand side.
+1. Navigate to the **Data Collectors** tab.
+1. Click **Create Configuration**.
+1. Select **Enter Authorization Systems**.
+1. Under the **Subscription IDs** section, enter a subscription ID into the input box. Click **+** up to 9 additional times, entering a single subscription ID into each respective input box.
+1. Once you have entered all of the desired subscriptions, click **Next**.
+1. Click **Verify Now & Save**.
+1. Once the access to read and collect data is verified, collection will begin.
-1. To give this role assignment to the service principal, copy the script to a file on your system where Azure CLI is installed and execute it.
+To view the status of onboarding after saving the configuration:
- You can execute the script once for each subscription, or once for all the subscriptions in the management group.
+1. Navigate to the **Data Collectors** tab.
+1. Click the status of the data collector.
+1. View the subscriptions on the **In Progress** page.
-1. From the **Enable Controller** dropdown, select:
+#### Option 3: Select authorization systems
- - **True**, if you want the controller to provide Permissions Management with read and write access so that any remediation you want to do from the Permissions Management platform can be done automatically.
- - **False**, if you want the controller to provide Permissions Management with read-only access.
+This option detects all subscriptions that are accessible by the Cloud Infrastructure Entitlement Management application.
-1. Return to **Permissions Management Onboarding - Azure Subscription Details** page and select **Next**.
+1. Grant the Reader role to the Cloud Infrastructure Entitlement Management application at the management group or subscription scope.
+1. Click **Verify and Save**.
+1. Navigate to the newly created data collector row under Azure data collectors.
+1. Click the **Status** column when the row has a "Pending" status.
+1. To onboard and start collection, choose specific subscriptions from the detected list and consent to collection.
### 2. Review and save.
To view a video on how to enable Permissions Management in your Azure AD tenant,
- For information on how to enable or disable the controller after onboarding is complete, see [Enable or disable the controller](onboard-enable-controller-after-onboarding.md). - For information on how to add an account/subscription/project after onboarding is complete, see [Add an account/subscription/project after onboarding is complete](onboard-add-account-after-onboarding.md). - For an overview on Permissions Management, see [What's Permissions Management?](overview.md).-- For information on how to start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
+- For information on how to start viewing information about your authorization system in Permissions Management, see [View key statistics and data about your authorization system](ui-dashboard.md).
active-directory Onboard Enable Controller After Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-controller-after-onboarding.md
# Enable or disable the controller after onboarding is complete
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to enable or disable the controller in Microsoft Azure and Google Cloud Platform (GCP) after onboarding is complete. This article also describes how to enable the controller in Amazon Web Services (AWS) if you disabled it during onboarding. You can only enable the controller in AWS at this time; you can't disable it.
active-directory Onboard Enable Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-enable-tenant.md
# Enable Permissions Management in your organization
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
--
-> [!NOTE]
-> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
--- This article describes how to enable Permissions Management in your organization. Once you've enabled Permissions Management, you can connect it to your Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP) platforms. > [!NOTE]
To enable Permissions Management in your organization:
## How to enable Permissions Management on your Azure AD tenant 1. In your browser:
- 1. Go to [Azure services](https://portal.azure.com) and use your credentials to sign in to [Azure Active Directory](https://ms.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
+ 1. Go to [Entra services](https://entra.microsoft.com) and use your credentials to sign in to [Azure Active Directory](https://ms.portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/Overview).
1. If you aren't already authenticated, sign in as a global administrator user. 1. If needed, activate the global administrator role in your Azure AD tenant.
- 1. In the Azure AD portal, select **Features highlights**, and then select **Permissions Management**.
-
- 1. If you're prompted to select a sign in account, sign in as a global administrator for a specified tenant.
-
- The **Welcome to Permissions Management** screen appears, displaying information on how to enable Permissions Management on your tenant.
-
-1. To provide access to the Permissions Management application, create a service principal.
-
- An Azure service principal is a security identity used by user-created apps, services, and automation tools to access specific Azure resources.
-
- > [!NOTE]
- > To complete this step, you must have Azure CLI or Azure PowerShell on your system, or an Azure subscription where you can run Cloud Shell.
+ 1. In the Azure AD portal, select **Permissions Management**, and then select the link to purchase a license or begin a trial.
- - To create a service principal that points to the Permissions Management application via Cloud Shell:
-
- 1. Copy the script on the **Welcome** screen:
-
- `az ad sp create --id b46c3ac5-9da6-418f-a849-0a07a10b3c6c`
-
- 1. If you have an Azure subscription, return to the Azure AD portal and select **Cloud Shell** on the navigation bar.
- If you don't have an Azure subscription, open a command prompt on a Windows Server.
- 1. If you have an Azure subscription, paste the script into Cloud Shell and press **Enter**.
-
- - For information on how to create a service principal through the Azure portal, see [Create an Azure service principal with the Azure CLI](/cli/azure/create-an-azure-service-principal-azure-cli).
-
- - For information on the **az** command and how to sign in with the no subscriptions flag, see [az login](/cli/azure/reference-index?view=azure-cli-latest#az-login&preserve-view=true).
-
- - For information on how to create a service principal via Azure PowerShell, see [Create an Azure service principal with Azure PowerShell](/powershell/azure/create-azure-service-principal-azureps?view=azps-7.1.0&preserve-view=true).
-
- 1. After the script runs successfully, the service principal attributes for Permissions Management display. Confirm the attributes.
-
- The **Cloud Infrastructure Entitlement Management** application displays in the Azure AD portal under **Enterprise applications**.
-
-1. Return to the **Welcome to Permissions Management** screen and select **Enable Permissions Management**.
+> [!NOTE]
+> There are two ways to enable a trial or a full product license: self-service and volume licensing.
+> For self-service, navigate to the M365 portal at [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) and purchase licenses or sign up for a free trial. The second way is through Volume Licensing or Enterprise agreements. If your organization falls under a volume license or enterprise agreement scenario, please contact your Microsoft representative.
- You have now completed enabling Permissions Management on your tenant. Permissions Management launches with the **Data Collectors** dashboard.
+Permissions Management launches with the **Data Collectors** dashboard.
## Configure data collection settings
active-directory Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-gcp.md
# Onboard a Google Cloud Platform (GCP) project
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
--
-> [!NOTE]
-> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
-- This article describes how to onboard a Google Cloud Platform (GCP) project on Permissions Management. > [!NOTE]
To view a video on how to configure and onboard GCP accounts in Permissions Mana
### 2. Set up a GCP OIDC project.
+Choose from three options to manage GCP projects.
+
+#### Option 1: Automatically manage
+
+This option allows projects to be automatically detected and monitored without additional configuration. Steps to detect the list of projects and onboard them for collection:
+
+- Grant the Viewer and Security Reviewer roles to the service account created in the previous step, at the organization, folder, or project scope.
+
+Any current or future projects found are onboarded automatically.
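
A minimal sketch of those grants with the gcloud CLI follows. The project ID and service account email are hypothetical placeholders; substitute the service account created in the previous step, and use `gcloud organizations add-iam-policy-binding` (or the folder equivalent) for a wider scope:

```bash
# Placeholders -- substitute your own project ID and the service account
# created in the previous step.
PROJECT_ID="my-gcp-project"
SA_EMAIL="permissions-mgmt-collector@my-oidc-project.iam.gserviceaccount.com"

# Grant Viewer at project scope.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/viewer"

# Grant Security Reviewer at project scope.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:${SA_EMAIL}" \
  --role="roles/iam.securityReviewer"
```
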
+
+To view the status of onboarding after saving the configuration:
+
+- Navigate to the **Data Collectors** tab.
+- Click the status of the data collector.
+- View the projects on the **In Progress** page.
+
+#### Option 2: Enter authorization systems
+ 1. In the **Permissions Management Onboarding - GCP OIDC Account Details & IDP Access** page, enter the **OIDC Project ID** and **OIDC Project Number** of the GCP project in which the OIDC provider and pool will be created. You can change the role name to your requirements. > [!NOTE]
To view a video on how to configure and onboard GCP accounts in Permissions Mana
You can either download and run the script at this point or you can do it in the Google Cloud Shell, as described [later in this article](onboard-gcp.md#4-run-scripts-in-cloud-shell-optional-if-not-already-executed). 1. Select **Next**.
+#### Option 3: Select authorization systems
+
+This option detects all projects that are accessible by the Cloud Infrastructure Entitlement Management application.
+
+- Grant the Viewer and Security Reviewer roles to the service account created in the previous step, at the organization, folder, or project scope.
+- Click **Verify and Save**.
+- Navigate to the newly created data collector row under GCP data collectors.
+- Click the **Status** column when the row has a "Pending" status.
+- To onboard and start collection, choose specific projects from the detected list and consent to collection.
+ ### 3. Set up GCP member projects. 1. In the **Permissions Management Onboarding - GCP Project Ids** page, enter the **Project IDs**.
active-directory Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/overview.md
# What's Permissions Management? -
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-
-> [!NOTE]
-> The Permissions Management PREVIEW is currently not available for tenants hosted in the European Union (EU).
- ## Overview Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP).
active-directory Product Account Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-explorer.md
# View roles and identities that can access account information from an external account
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- You can view information about users, groups, and resources that can access account information from an external account in Permissions Management. ## Display information about users, groups, or tasks
active-directory Product Account Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-account-settings.md
# View personal and organization information
-> [!IMPORTANT]
-> Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Account settings** dashboard in Permissions Management allows you to view personal information, passwords, and account preferences. This information can't be modified because the user information is pulled from Azure AD. Only **User Session Time(min)**
active-directory Product Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-audit-trail.md
# Filter and query user activity
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Audit** dashboard in Permissions Management details all user activity performed in your authorization system. It captures all high risk activity in a centralized location, and allows system administrators to query the logs. The **Audit** dashboard enables you to: - Create and save new queries so you can access key data points easily.
active-directory Product Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-dashboard.md
Last updated 02/23/2022
-- # View data about the activity in your authorization system
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The Permissions Management **Dashboard** provides an overview of the authorization system and account activity being monitored. You can use this dashboard to view data collected from your Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) authorization systems. ## View data about your authorization system
active-directory Product Data Inventory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-inventory.md
# Display an inventory of created resources and licenses for your authorization system
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- You can use the **Inventory** dashboard in Permissions Management to display an inventory of created resources and licensing information for your authorization system and its associated accounts. ## View resources created for your authorization system
active-directory Product Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-data-sources.md
# View and configure settings for data collection
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
-- You can use the **Data Collectors** dashboard in Permissions Management to view and configure settings for collecting data from your authorization systems. It also provides information about the status of the data collection. ## Access and view data sources
active-directory Product Define Permission Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-define-permission-levels.md
# Define and manage users, roles, and access levels
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- In Permissions Management, a key component of the interface is the User management dashboard. This topic describes how system administrators can define and manage users, their roles, and their access levels in the system. ## The User management dashboard
active-directory Product Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-integrations.md
# View integration information about an authorization system
-> [!IMPORTANT]
-> Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Integrations** dashboard in Permissions Management allows you to view all your authorization systems in one place, and to ensure all applications are functioning as one. This information helps improve quality and performance as a whole. ## Display integration information about an authorization system
active-directory Product Permission Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permission-analytics.md
# Create and view permission analytics triggers
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how you can create and view permission analytics triggers in Permissions Management. ## View permission analytics triggers
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
# Generate and download the Permissions analytics report
-> [!IMPORTANT]
-> Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to generate and download the **Permissions analytics report** in Permissions Management. > [!NOTE]
active-directory Product Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-reports.md
# View system reports in the Reports dashboard
-> [!IMPORTANT]
-> Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- Permissions Management has various system report types available that capture specific sets of data. These reports allow management to: - Make timely decisions.
active-directory Product Rule Based Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-rule-based-anomalies.md
# Create and view rule-based anomaly alerts and anomaly triggers
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- Rule-based anomalies identify recent activity in Permissions Management that is determined to be unusual based on explicit rules defined in the activity trigger. The goal of rule-based anomaly is high precision detection. ## View rule-based anomaly alerts
active-directory Product Statistical Anomalies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-statistical-anomalies.md
# Create and view statistical anomalies and anomaly triggers
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- Statistical anomalies can detect outliers in an identity's behavior if recent activity is determined to be unusual based on models defined in an activity trigger. The goal of this anomaly trigger is a high recall rate. ## View statistical anomalies in an identity's behavior
active-directory Report Create Custom Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-create-custom-report.md
# Create, view, and share a custom report
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to create, view, and share a custom report in Permissions Management. ## Create a custom report
active-directory Report View System Report https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/report-view-system-report.md
# Generate and view a system report
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to generate and view a system report in Permissions Management. ## Generate a system report
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/troubleshoot.md
# Troubleshoot issues with Permissions Management
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This section helps you troubleshoot issues with Permissions Management. ## One time passcode (OTP) email
active-directory Ui Audit Trail https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-audit-trail.md
# Use queries to see how users access information
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Audit** dashboard in Permissions Management provides an overview of queries a Permissions Management user has created to review how users access their authorization systems and accounts. This article provides an overview of the components of the **Audit** dashboard.
active-directory Ui Autopilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-autopilot.md
# View rules in the Autopilot dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Autopilot** dashboard in Permissions Management provides a table of information about **Autopilot rules** for administrators.
active-directory Ui Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-dashboard.md
# View key statistics and data about your authorization system
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- Permissions Management provides a summary of key statistics and data about your authorization system regularly. This information is available for Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). ## View metrics related to avoidable risk
active-directory Ui Remediation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-remediation.md
# View roles/policies and requests for permission in the Remediation dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Remediation** dashboard in Permissions Management provides an overview of roles/policies, permissions, a list of existing requests for permissions, and requests for permissions you have made. This article provides an overview of the components of the **Remediation** dashboard.
active-directory Ui Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-tasks.md
# View information about active and completed tasks
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes the usage of the **Permissions Management Tasks** pane in Permissions Management. ## Display active and completed tasks
active-directory Ui Triggers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-triggers.md
# View information about activity triggers
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to use the **Activity triggers** dashboard in Permissions Management to view information about activity alerts and triggers. ## Display the Activity triggers dashboard
active-directory Ui User Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/ui-user-management.md
# Manage users and groups with the User management dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article describes how to use the Permissions Management **User management** dashboard to view and manage users and groups. **To display the User management dashboard**:
active-directory Usage Analytics Access Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-access-keys.md
# View analytic information about access keys
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Analytics** dashboard in Permissions Management provides details about identities, resources, and tasks that you can use to make informed decisions about granting permissions and reducing risk on unused permissions. - **Users**: Tracks assigned permissions and usage of various identities.
active-directory Usage Analytics Active Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-resources.md
# View analytic information about active resources
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Analytics** dashboard in Permissions Management collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for: - **Users**: Tracks assigned permissions and usage of various identities.
active-directory Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-tasks.md
# View analytic information about active tasks
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Analytics** dashboard in Permissions Management collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for: - **Users**: Tracks assigned permissions and usage of various identities.
active-directory Usage Analytics Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-groups.md
# View analytic information about groups
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Analytics** dashboard in Permissions Management collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for: - **Users**: Tracks assigned permissions and usage of various identities.
active-directory Usage Analytics Home https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-home.md
# View analytic information with the Analytics dashboard
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- This article provides a brief overview of the Analytics dashboard in Permissions Management, and the type of analytic information it provides for Amazon Web Services (AWS), Microsoft Azure, or Google Cloud Platform (GCP). ## Display the Analytics dashboard
active-directory Usage Analytics Serverless Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-serverless-functions.md
# View analytic information about serverless functions
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Analytics** dashboard in Permissions Management collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for: - **Users**: Tracks assigned permissions and usage of various identities.
active-directory Usage Analytics Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-users.md
# View analytic information about users
-> [!IMPORTANT]
-> Microsoft Entra Permissions Management is currently in PREVIEW.
-> Some information relates to a prerelease product that may be substantially modified before it's released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
- The **Analytics** dashboard in Permissions Management collects detailed information, analyzes, reports on, and visualizes data about all identity types. System administrators can use the information to make informed decisions about granting permissions and reducing risk on unused permissions for: - **Users**: Tracks assigned permissions and usage of various identities.
active-directory How To Gmsa Cmdlets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-gmsa-cmdlets.md
Previously updated : 11/16/2020 Last updated : 07/01/2022
# Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets
-The purpose of this document is to describe the Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets. These cmdlets allow you to have more granularity on the permissions that are applied on the service account (gMSA). By default, Azure AD Connect cloud sync applies all permissions similar to Azure AD Connect on the default gMSA or a custom gMSA.
+This document describes the Azure AD Connect cloud provisioning agent gMSA PowerShell cmdlets. These cmdlets give you more granular control over the permissions applied to the service account (gMSA). By default, Azure AD Connect cloud sync applies all permissions, similar to Azure AD Connect, to the default gMSA or a custom gMSA during cloud provisioning agent installation.
This document will cover the following cmdlets:
-`Set-AADCloudSyncRestrictedPermissions`
- `Set-AADCloudSyncPermissions`
+`Set-AADCloudSyncRestrictedPermissions`
+ ## How to use the cmdlets: The following prerequisites are required to use these cmdlets.
The following prerequisites are required to use these cmdlets.
2. Import Provisioning Agent PS module into a PowerShell session. ```powershell
- Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll"
+ Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll"
```
-3. Remove existing permissions. To remove all existing permissions on the service account, except SELF use: `Set-AADCloudSyncRestrictedPermission`.
-
- This cmdlet requires a parameter called `Credential` which can be passed, or it will prompt if called without it.
+3. These cmdlets require a `Credential` parameter, which can be passed on the command line; if it isn't provided, the cmdlet prompts for it. Depending on the cmdlet syntax used, these credentials must be an enterprise admin account or, at a minimum, a domain administrator of the target domain where you're setting the permissions.
- To create a variable, use:
+4. To create a variable for credentials, use:
`$credential = Get-Credential`
+
+5. To set Active Directory permissions for the cloud provisioning agent, you can use the following cmdlet. This grants permissions at the root of the domain, allowing the service account to manage on-premises Active Directory objects. See [Using Set-AADCloudSyncPermissions](#using-set-aadcloudsyncpermissions) below for examples of setting the permissions.
- This will prompt the user to enter username and password. The credentials must be at a minimum domain administrator(of the domain where agent is installed), could be enterprise admin as well.
-
-4. Then you can call the cmdlet to remove extra permissions:
+ `Set-AADCloudSyncPermissions -EACredential $credential`
- ```powershell
- Set-AADCloudSyncRestrictedPermissions -Credential $credential
- ```
-
-5. Or you can simply call:
+6. To restrict the Active Directory permissions set by default on the cloud provisioning agent account, you can use the following cmdlet. This increases the security of the service account by disabling permission inheritance and removing all existing permissions, except SELF and Full Control for administrators. See [Using Set-AADCloudSyncRestrictedPermissions](#using-set-aadcloudsyncrestrictedpermissions) below for examples of restricting the permissions.
- `Set-AADCloudSyncRestrictedPermissions` which will prompt for credentials.
-
-6. Add specific permission type. Permissions added are same as Azure AD Connect. See [Using Set-AADCloudSyncPermissions](#using-set-aadcloudsyncpermissions) below for examples on setting the permissions.
+ `Set-AADCloudSyncRestrictedPermissions -Credential $credential`
## Using Set-AADCloudSyncPermissions
-`Set-AADCloudSyncPermissions` supports the following permission types which are identical to the permissions used by Azure AD Connect. The following permission types are supported:
+`Set-AADCloudSyncPermissions` supports permission types that are identical to the permissions used by Azure AD Connect Classic Sync (ADSync). The following permission types are supported:
|Permission type|Description| |--|--|
The following prerequisites are required to use these cmdlets.
|HybridExchangePermissions|See [HybridExchangePermissions](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-exchange-hybrid-deployment) permissions for Azure AD Connect| |ExchangeMailPublicFolderPermissions| See [ExchangeMailPublicFolderPermissions](../../active-directory/hybrid/how-to-connect-configure-ad-ds-connector-account.md#permissions-for-exchange-mail-public-folders) permissions for Azure AD Connect| |CloudHR| Applies 'Create/delete User objects' on 'This object and all descendant objects'|
-|All|adds all the above permissions.|
+|All| Applies all the above permissions|
You can use AADCloudSyncPermissions in one of two ways:-- [Grant a certain permission to all configured domains](#grant-a-certain-permission-to-all-configured-domains)-- [Grant a certain permission to a specific domain](#grant-a-certain-permission-to-a-specific-domain)
+- [Grant permissions to all configured domains](#grant-permissions-to-all-configured-domains)
+- [Grant permissions to a specific domain](#grant-permissions-to-a-specific-domain)
-## Grant a certain permission to all configured domains
+## Grant permissions to all configured domains
Granting certain permissions to all configured domains will require the use of an enterprise admin account. ```powershell
-Set-AADCloudSyncPermissions -PermissionType "Any mentioned above" -EACredential $credential (prepopulated same as above [$credential = Get-Credential])
+$credential = Get-Credential
+Set-AADCloudSyncPermissions -PermissionType "Any mentioned above" -EACredential $credential
```
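For instance, a minimal illustration using the **All** permission type from the table above (any of the listed permission types could be substituted):

```powershell
# Sign in with an enterprise admin account when prompted
$credential = Get-Credential

# Apply every permission type listed above to all configured domains
Set-AADCloudSyncPermissions -PermissionType "All" -EACredential $credential
```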
-## Grant a certain permission to a specific domain
+## Grant permissions to a specific domain
-Granting certain permissions to a specific domain will require the use of, at minimum a domain admin account of the domain you are attempting to add.
+Granting certain permissions to a specific domain requires a TargetDomainCredential that is an enterprise admin or a domain admin of the target domain. The TargetDomain must already be configured through the wizard.
```powershell
-Set-AADCloudSyncPermissions -PermissionType "Any mentioned above" -TargetDomain "FQDN of domain" (has to be already configured through wizard) -TargetDomainCredential $credential(same as above)
+$credential = Get-Credential
+Set-AADCloudSyncPermissions -PermissionType "Any mentioned above" -TargetDomain "FQDN of domain" -TargetDomainCredential $credential
```
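As an illustrative variant, the same call scoped to a single already-configured domain; the FQDN below is a placeholder and `CloudHR` is one of the permission types listed above:

```powershell
# Sign in with an enterprise admin or a domain admin account of the target domain
$credential = Get-Credential

# "contoso.com" is a placeholder FQDN for a domain already configured through the wizard
Set-AADCloudSyncPermissions -PermissionType "CloudHR" -TargetDomain "contoso.com" -TargetDomainCredential $credential
```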
-Note: for 1. The credentials must be at a minimum Enterprise admin.
-
-For 2. The Credentials can be either Domain admin or enterprise admin.
+## Using Set-AADCloudSyncRestrictedPermissions
+For increased security, `Set-AADCloudSyncRestrictedPermissions` tightens the permissions set on the cloud provisioning agent account itself. Hardening these permissions involves the following changes:
+
+- Disable inheritance
+- Remove all default permissions, except ACEs specific to SELF.
+- Set Full Control permissions for SYSTEM, Administrators, Domain Admins, and Enterprise Admins.
+- Set Read permissions for Authenticated Users and Enterprise Domain Controllers.
+
+ The -Credential parameter specifies the administrator account that has the privileges required to restrict Active Directory permissions on the cloud provisioning agent account. This is typically a domain or enterprise administrator.
+
+For example:
+
+```powershell
+$credential = Get-Credential
+Set-AADCloudSyncRestrictedPermissions -Credential $credential
+```
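To confirm the result, you could optionally inspect the ACL on the gMSA object afterward. This is a hedged sketch that assumes the ActiveDirectory RSAT module and uses a placeholder account name:

```powershell
# Requires the ActiveDirectory module; 'provAgentgMSA' is a placeholder gMSA name
Import-Module ActiveDirectory
$gmsaDn = (Get-ADServiceAccount -Identity "provAgentgMSA").DistinguishedName

# List the remaining access control entries after hardening
(Get-Acl -Path "AD:\$gmsaDn").Access |
    Format-Table IdentityReference, ActiveDirectoryRights, IsInherited -AutoSize
```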
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
Title: "What's new in Azure Active Directory External Identities" description: "New and updated documentation for the Azure Active Directory External Identities." Previously updated : 06/01/2022 Last updated : 07/01/2022
Welcome to what's new in Azure Active Directory External Identities documentation. This article lists new docs that have been added and those that have had significant updates in the last three months. To learn what's new with the External Identities service, see [What's new in Azure Active Directory](../fundamentals/whats-new.md).
+## June 2022
+
+### Updated articles
+
+- [B2B direct connect overview](b2b-direct-connect-overview.md)
+- [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md)
+- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md)
+- [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)
+- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)
+- [External Identities in Azure Active Directory](external-identities-overview.md)
+- [Azure Active Directory B2B collaboration FAQs](faq.yml)
+- [External Identities documentation](index.yml)
+- [Leave an organization as an external user](leave-the-organization.md)
+- [B2B collaboration overview](what-is-b2b.md)
+- [Azure Active Directory External Identities: What's new](whats-new-docs.md)
+- [Quickstart: Add a guest user and send an invitation](b2b-quickstart-add-guest-users-portal.md)
+- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)
## May 2022
Welcome to what's new in Azure Active Directory External Identities documentatio
- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md) - [Properties of an Azure Active Directory B2B collaboration user](user-properties.md) - [B2B collaboration overview](what-is-b2b.md)-
-## March 2022
-
-### New articles
--- [B2B direct connect overview](b2b-direct-connect-overview.md)-- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)-
-### Updated articles
--- [Troubleshooting Azure Active Directory B2B collaboration](troubleshoot.md)-- [B2B direct connect overview](b2b-direct-connect-overview.md)-- [Configure cross-tenant access settings for B2B direct connect](cross-tenant-access-settings-b2b-direct-connect.md)-- [External Identities documentation](index.yml)-- [Billing model for Azure AD External Identities](external-identities-pricing.md)-- [Federation with SAML/WS-Fed identity providers for guest users (preview)](direct-federation.md)-- [Azure Active Directory B2B collaboration code and PowerShell samples](code-samples.md)-- [Overview: Cross-tenant access with Azure AD External Identities](cross-tenant-access-overview.md)-- [Add Google as an identity provider for B2B guest users](google-federation.md)-- [Invite internal users to B2B collaboration](invite-internal-users.md)-- [Authentication and Conditional Access for External Identities](authentication-conditional-access.md)-- [Azure Active Directory B2B best practices](b2b-fundamentals.md)-- [Azure Active Directory B2B collaboration FAQs](faq.yml)-- [Configure cross-tenant access settings for B2B collaboration](cross-tenant-access-settings-b2b-collaboration.md)-- [External Identities in Azure Active Directory](external-identities-overview.md)-- [Leave an organization as a B2B collaboration user](leave-the-organization.md)-- [Configure external collaboration settings](external-collaboration-settings-configure.md)-- [Reset redemption status for a guest user (Preview)](reset-redemption-status.md)
active-directory Admin Units Members Add https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/admin-units-members-add.md
Previously updated : 06/20/2022 Last updated : 06/30/2022
> Administrative units support for devices is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-In Azure Active Directory (Azure AD), you can add users, groups, or devices to an administrative unit to restrict the scope of role permissions. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
+In Azure Active Directory (Azure AD), you can add users, groups, or devices to an administrative unit to restrict the scope of role permissions. Adding a group to an administrative unit brings the group itself into the management scope of any group administrator who is also scoped to that administrative unit. For additional details on what scoped administrators can do, see [Administrative units in Azure Active Directory](administrative-units.md).
This article describes how to add users, groups, or devices to administrative units manually. For information about how to add users or devices to administrative units dynamically using rules, see [Manage users or devices for an administrative unit with dynamic membership rules](admin-units-members-dynamic.md).
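Outside the portal, a group can also be added to an administrative unit with a direct Microsoft Graph call. The following is a minimal sketch, assuming the Microsoft Graph PowerShell SDK and placeholder object IDs:

```powershell
# Placeholder IDs for illustration only
$auId    = "00000000-0000-0000-0000-000000000001"   # administrative unit object ID
$groupId = "00000000-0000-0000-0000-000000000002"   # group object ID

Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All"

# Add the group as a member of the administrative unit
Invoke-MgGraphRequest -Method POST `
    -Uri "https://graph.microsoft.com/v1.0/directory/administrativeUnits/$auId/members/`$ref" `
    -Body @{ "@odata.id" = "https://graph.microsoft.com/v1.0/groups/$groupId" }
```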
active-directory Administrative Units https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/administrative-units.md
Previously updated : 06/23/2022 Last updated : 06/30/2022
A central administrator could:
![Screenshot of Devices and Administrative units page with Remove from administrative unit option.](./media/administrative-units/admin-unit-overview.png)
+## Constraints
+
+Here are some of the constraints for administrative units.
+
+- Administrative units can't be nested.
+- Administrative unit-scoped user account administrators can't create or delete users.
+- Administrative units are currently not available in [Azure AD Identity Governance](../governance/identity-governance-overview.md).
+
+## Groups
+
+Adding a group to an administrative unit brings the group itself into the management scope of the administrative unit, but **not** the members of the group. In other words, an administrator scoped to the administrative unit can manage properties of the group, such as group name or membership, but they cannot manage properties of the users or devices within that group (unless those users and devices are separately added as members of the administrative unit).
+
+For example, a [User Administrator](permissions-reference.md#user-administrator) scoped to an administrative unit that contains a group can and can't do the following:
+
+| Permissions | Can do |
+| | |
+| Manage the name of the group | :heavy_check_mark: |
+| Manage the membership of the group | :heavy_check_mark: |
+| Manage the user properties for individual **members** of the group | :x: |
+| Manage the user authentication methods of individual **members** of the group | :x: |
+| Reset the passwords of individual **members** of the group | :x: |
+
+In order for the [User Administrator](permissions-reference.md#user-administrator) to manage the user properties or user authentication methods of individual members of the group, the group members (users) must be added directly as members of the administrative unit.
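If scoped administrators also need to manage those users individually, one hedged approach (Microsoft Graph PowerShell SDK, placeholder IDs, result paging not handled) is to add each current group member to the administrative unit directly:

```powershell
# Placeholder IDs for illustration only
$auId    = "00000000-0000-0000-0000-000000000001"   # administrative unit object ID
$groupId = "00000000-0000-0000-0000-000000000002"   # group object ID

Connect-MgGraph -Scopes "AdministrativeUnit.ReadWrite.All","GroupMember.Read.All"

# List the group's current members (first page only; paging not handled here)
$members = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/groups/$groupId/members"

# Add each member to the administrative unit so scoped admins can manage it directly
foreach ($member in $members.value) {
    Invoke-MgGraphRequest -Method POST `
        -Uri "https://graph.microsoft.com/v1.0/directory/administrativeUnits/$auId/members/`$ref" `
        -Body @{ "@odata.id" = "https://graph.microsoft.com/v1.0/directoryObjects/$($member.id)" }
}
```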
+ ## License requirements Using administrative units requires an Azure AD Premium P1 license for each administrative unit administrator, and an Azure AD Free license for each administrative unit member. If you are using dynamic membership rules for administrative units, each administrative unit member requires an Azure AD Premium P1 license. To find the right license for your requirements, see [Comparing generally available features of the Free and Premium editions](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing).
The following sections describe current support for administrative unit scenario
| Administrative unit-scoped management of group properties and membership | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | | Administrative unit-scoped management of group licensing | :heavy_check_mark: | :heavy_check_mark: | :x: |
-> [!NOTE]
-> Adding a group to an administrative unit does not grant scoped group administrators the ability to manage properties for individual members of that group. For example, a scoped group administrator can manage group membership, but they can't manage authentication methods of users who are members of the group added to an administrative unit. To manage authentication methods of users who are members of the group that is added to an administrative unit, the individual group members must be directly added as users of the administrative unit, and the group administrator must also be assigned a role that can manage user authentication methods.
- ### Device management | Permissions | Microsoft Graph/PowerShell | Azure portal | Microsoft 365 admin center | | | :: | :: | :: | | Enable, disable, or delete devices | :heavy_check_mark: | :heavy_check_mark: | :x: |
-| Read Bitlocker recovery keys | :heavy_check_mark: | :heavy_check_mark: | :x: |
+| Read BitLocker recovery keys | :heavy_check_mark: | :heavy_check_mark: | :x: |
Managing devices in Intune is *not* supported at this time.
-## Constraints
-
-Here are some of the constraints for administrative units.
--- Administrative units can't be nested.-- Administrative unit-scoped user account administrators can't create or delete users.-- A scoped role assignment doesn't apply to members of groups added to an administrative unit, unless the group members are directly added to the administrative unit. For more information, see [Add members to an administrative unit](admin-units-members-add.md).-- Administrative units are currently not available in [Azure AD Identity Governance](../governance/identity-governance-overview.md).- ## Next steps - [Create or delete administrative units](admin-units-manage.md)
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Previously updated : 06/17/2022 Last updated : 06/27/2022
For more information, see [Manage access to custom security attributes in Azure
## Authentication Administrator
-Users with this role can set or reset any authentication method (including passwords) for non-administrators and some roles. Authentication Administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. For a list of the roles that an Authentication Administrator can read or update authentication methods, see [Password reset permissions](#password-reset-permissions).
+Users with this role can set or reset any authentication method (including passwords) for non-administrators and some roles. Authentication Administrators can require users who are non-administrators or assigned to some roles to re-register against existing non-password credentials (for example, MFA or FIDO), and can also revoke **remember MFA on the device**, which prompts for MFA on the next sign-in. For a list of the roles that an Authentication Administrator can read or update authentication methods, see [Who can reset passwords](#who-can-reset-passwords).
+
+Authentication Administrators can update sensitive attributes for some users. For a list of the roles that an Authentication Administrator can update sensitive attributes, see [Who can update sensitive attributes](#who-can-update-sensitive-attributes).
The [Privileged Authentication Administrator](#privileged-authentication-administrator) role has permission to force re-registration and multifactor authentication for all users. The [Authentication Policy Administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
-| - | - | - | - | - | - |
-| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
-| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No |
-| Authentication Policy Administrator | No |No | Yes | Yes | Yes |
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy | Update sensitive attributes |
+| - | - | - | - | - | - | - |
+| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No | Yes for some users (see above) |
+| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No | Yes for all users |
+| Authentication Policy Administrator | No |No | Yes | Yes | Yes | No |
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
Users with this role can't change the credentials or reset MFA for members and o
> | | | > | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users | > | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users |
-> | microsoft.directory/users/authenticationMethods/standard/restrictedRead | Read standard properties of authentication methods that do not include personally identifiable information for users |
+> | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users |
> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
+> | microsoft.directory/users/delete | Delete users |
+> | microsoft.directory/users/disable | Disable users |
+> | microsoft.directory/users/enable | Enable users |
> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/restore | Restore deleted users |
+> | microsoft.directory/users/basic/update | Update basic properties on users |
+> | microsoft.directory/users/manager/update | Update manager for users |
> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Users with this role can't change the credentials or reset MFA for members and o
## Authentication Policy Administrator
-Users with this role can configure the authentication methods policy, tenant-wide MFA settings, and password protection policy. This role grants permission to manage Password Protection settings: smart lockout configurations and updating the custom banned passwords list.
+Users with this role can configure the authentication methods policy, tenant-wide MFA settings, and password protection policy. This role grants permission to manage Password Protection settings: smart lockout configurations and the custom banned passwords list. Authentication Policy Administrators cannot update sensitive attributes for users.
The [Authentication Administrator](#authentication-administrator) and [Privileged Authentication Administrator](#privileged-authentication-administrator) roles have permission to manage registered authentication methods on users and can force re-registration and multifactor authentication for all users.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
-| - | - | - | - | - | - |
-| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
-| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No |
-| Authentication Policy Administrator | No | No | Yes | Yes | Yes |
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy | Update sensitive attributes |
+| - | - | - | - | - | - | - |
+| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No | Yes for some users (see above) |
+| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No | Yes for all users |
+| Authentication Policy Administrator | No | No | Yes | Yes | Yes | No |
> [!IMPORTANT] > This role can't manage MFA settings in the legacy MFA management portal or Hardware OATH tokens.
Makes purchases, manages subscriptions, manages support tickets, and monitors se
> | microsoft.directory/organization/basic/update | Update basic properties on organization | > | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets |
-> | microsoft.commerce.billing/allEntities/allTasks | Manage all aspects of Office 365 billing |
+> | microsoft.commerce.billing/allEntities/allProperties/allTasks | Manage all aspects of Office 365 billing |
> | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center |
Users with this role have access to all administrative features in Azure Active
> | microsoft.directory/passwordHashSync/allProperties/allTasks | Manage all aspects of Password Hash Synchronization (PHS) in Azure AD | > | microsoft.directory/policies/allProperties/allTasks | Create and delete policies, and read and update all properties | > | microsoft.directory/conditionalAccessPolicies/allProperties/allTasks | Manage all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/allProperties/allTasks | Manage all aspects of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
> | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management | > | microsoft.directory/provisioningLogs/allProperties/read | Read all properties of provisioning logs | > | microsoft.directory/roleAssignments/allProperties/allTasks | Create and delete role assignments, and read and update all role assignment properties |
Users with this role have access to all administrative features in Azure Active
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.cloudPC/allEntities/allProperties/allTasks | Manage all aspects of Windows 365 |
-> | microsoft.commerce.billing/allEntities/allTasks | Manage all aspects of Office 365 billing |
+> | microsoft.commerce.billing/allEntities/allProperties/allTasks | Manage all aspects of Office 365 billing |
> | microsoft.dynamics365/allEntities/allTasks | Manage all aspects of Dynamics 365 | > | microsoft.edge/allEntities/allProperties/allTasks | Manage all aspects of Microsoft Edge | > | microsoft.flow/allEntities/allTasks | Manage all aspects of Microsoft Power Automate |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/permissionGrantPolicies/standard/read | Read standard properties of permission grant policies | > | microsoft.directory/policies/allProperties/read | Read all properties of policies | > | microsoft.directory/conditionalAccessPolicies/allProperties/read | Read all properties of conditional access policies |
-> | microsoft.directory/crossTenantAccessPolicies/allProperties/read | Read all properties of cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
> | microsoft.directory/deviceManagementPolicies/standard/read | Read standard properties on device management application policies | > | microsoft.directory/deviceRegistrationPolicy/standard/read | Read standard properties on device registration policies | > | microsoft.directory/privilegedIdentityManagement/allProperties/read | Read all resources in Privileged Identity Management |
Users in this role can read settings and administrative information across Micro
> | microsoft.directory/verifiableCredentials/configuration/allProperties/read | Read configuration required to create and manage verifiable credentials | > | microsoft.directory/lifecycleManagement/workflows/allProperties/read | Read all properties of lifecycle management workflows and tasks in Azure AD | > | microsoft.cloudPC/allEntities/allProperties/read | Read all aspects of Windows 365 |
-> | microsoft.commerce.billing/allEntities/read | Read all resources of Office 365 billing |
+> | microsoft.commerce.billing/allEntities/allProperties/read | Read all resources of Office 365 billing |
> | microsoft.edge/allEntities/allProperties/read | Read all aspects of Microsoft Edge | > | microsoft.insights/allEntities/allProperties/read | Read all aspects of Viva Insights | > | microsoft.office365.exchange/allEntities/standard/read | Read all resources of Exchange Online |
Users in this role can manage Azure Active Directory B2B guest user invitations
## Helpdesk Administrator
-Users with this role can change passwords, invalidate refresh tokens, create and manage support requests with Microsoft for Azure and Microsoft 365 services, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. For a list of the roles that a Helpdesk Administrator can reset passwords for and invalidate refresh tokens, see [Password reset permissions](#password-reset-permissions).
+Users with this role can change passwords, invalidate refresh tokens, create and manage support requests with Microsoft for Azure and Microsoft 365 services, and monitor service health. Invalidating a refresh token forces the user to sign in again. Whether a Helpdesk Administrator can reset a user's password and invalidate refresh tokens depends on the role the user is assigned. For a list of the roles that a Helpdesk Administrator can reset passwords for and invalidate refresh tokens, see [Who can reset passwords](#who-can-reset-passwords).
> [!IMPORTANT] > Users with this role can change passwords for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example:
Users in this role can access the full set of administrative capabilities in the
Assign the Insights Analyst role to users who need to do the following: - Analyze data in the Microsoft Viva Insights app, but can't manage any configuration settings-- Create, manage, and run queries
+- Create, manage, and run queries
- View basic settings and reports in the Microsoft 365 admin center - Create and manage service requests in the Microsoft 365 admin center
If the Modern Commerce User role is unassigned from a user, they lose access to
> [!div class="mx-tableFixed"] > | Actions | Description | > | | |
-> | microsoft.commerce.billing/partners/read | Read partner property of Microsoft 365 Billing |
+> | microsoft.commerce.billing/partners/read | |
> | microsoft.commerce.volumeLicenseServiceCenter/allEntities/allTasks | Manage all aspects of Volume Licensing Service Center | > | microsoft.office365.supportTickets/allEntities/allTasks | Create and manage Microsoft 365 service requests | > | microsoft.office365.webPortal/allEntities/basic/read | Read basic properties on all resources in the Microsoft 365 admin center |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts | > | microsoft.directory/deletedItems.groups/restore | Restore soft deleted groups to original state |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
> | microsoft.directory/groups/create | Create Security groups and Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups/delete | Delete Security groups and Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups/restore | Restore groups from soft-deleted container |
Do not use. This role has been deprecated and will be removed from Azure AD in t
> | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts | > | microsoft.directory/deletedItems.groups/restore | Restore soft deleted groups to original state |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
> | microsoft.directory/domains/allProperties/allTasks | Create and delete domains, and read and update all properties | > | microsoft.directory/groups/create | Create Security groups and Microsoft 365 groups, excluding role-assignable groups | > | microsoft.directory/groups/delete | Delete Security groups and Microsoft 365 groups, excluding role-assignable groups |
Do not use. This role has been deprecated and will be removed from Azure AD in t
## Password Administrator
-Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that a Password Administrator can reset passwords for, see [Password reset permissions](#password-reset-permissions).
+Users with this role have limited ability to manage passwords. This role does not grant the ability to manage service requests or monitor service health. Whether a Password Administrator can reset a user's password depends on the role the user is assigned. For a list of the roles that a Password Administrator can reset passwords for, see [Who can reset passwords](#who-can-reset-passwords).
Users with this role can't change the credentials or reset MFA for members and owners of a [role-assignable group](groups-concept.md).
Users with this role can register printers and manage printer status in the Micr
## Privileged Authentication Administrator
-Users with this role can set or reset any authentication method (including passwords) for any user, including Global Administrators. Privileged Authentication Administrators can force users to re-register against existing non-password credential (such as MFA or FIDO) and revoke 'remember MFA on the device', prompting for MFA on the next sign-in of all users.
+Users with this role can set or reset any authentication method (including passwords) for any user, including Global Administrators. Privileged Authentication Administrators can force users to re-register against existing non-password credentials (such as MFA or FIDO) and revoke 'remember MFA on the device', prompting for MFA on the next sign-in of all users. Privileged Authentication Administrators can update sensitive attributes for all users.
The [Authentication Administrator](#authentication-administrator) role has permission to force re-registration and multifactor authentication for standard users and users with some admin roles. The [Authentication Policy Administrator](#authentication-policy-administrator) role has permissions to set the tenant's authentication method policy that determines which methods each user can register and use.
-| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy |
-| - | - | - | - | - | - |
-| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No |
-| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No |
-| Authentication Policy Administrator | No | No | Yes | Yes | Yes |
+| Role | Manage user's auth methods | Manage per-user MFA | Manage MFA settings | Manage auth method policy | Manage password protection policy | Update sensitive attributes |
+| - | - | - | - | - | - | - |
+| Authentication Administrator | Yes for some users (see above) | Yes for some users (see above) | No | No | No | Yes for some users (see above) |
+| Privileged Authentication Administrator| Yes for all users | Yes for all users | No | No | No | Yes for all users |
+| Authentication Policy Administrator | No | No | Yes | Yes | Yes | No |
> [!IMPORTANT] > Users with this role can change credentials for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the credentials of a user may mean the ability to assume that user's identity and permissions. For example:
The [Authentication Policy Administrator](#authentication-policy-administrator)
> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users | > | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users | > | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
+> | microsoft.directory/users/delete | Delete users |
+> | microsoft.directory/users/disable | Disable users |
+> | microsoft.directory/users/enable | Enable users |
> | microsoft.directory/users/invalidateAllRefreshTokens | Force sign-out by invalidating user refresh tokens |
+> | microsoft.directory/users/restore | Restore deleted users |
+> | microsoft.directory/users/basic/update | Update basic properties on users |
+> | microsoft.directory/users/manager/update | Update manager for users |
> | microsoft.directory/users/password/update | Reset passwords for all users |
+> | microsoft.directory/users/userPrincipalName/update | Update User Principal Name of users |
> | microsoft.azure.serviceHealth/allEntities/allTasks | Read and configure Azure Service Health | > | microsoft.azure.supportTickets/allEntities/allTasks | Create and manage Azure support tickets | > | microsoft.office365.serviceHealth/allEntities/allTasks | Read and configure Service Health in the Microsoft 365 admin center |
Azure Advanced Threat Protection | Monitor and respond to suspicious security ac
> | microsoft.directory/auditLogs/allProperties/read | Read all properties on audit logs, including privileged properties | > | microsoft.directory/authorizationPolicy/standard/read | Read standard properties of authorization policy | > | microsoft.directory/bitlockerKeys/key/read | Read bitlocker metadata and key on devices |
-> | microsoft.directory/crossTenantAccessPolicies/create | Create cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/delete | Delete cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/standard/read | Read basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/read | Read owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/policyAppliedTo/read | Read the policyAppliedTo property of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/basic/update | Update basic properties of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/owners/update | Update owners of cross-tenant access policies |
-> | microsoft.directory/crossTenantAccessPolicies/tenantDefault/update | Update the default tenant for cross-tenant access policies |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
> | microsoft.directory/entitlementManagement/allProperties/read | Read all properties in Azure AD entitlement management | > | microsoft.directory/identityProtection/allProperties/read | Read all resources in Azure AD Identity Protection | > | microsoft.directory/identityProtection/allProperties/update | Update all resources in Azure AD Identity Protection |
Users in this role can manage all aspects of the Microsoft Teams workload via th
> | microsoft.office365.usageReports/allEntities/allProperties/read | Read Office 365 usage reports | > | microsoft.office365.webPortal/allEntities/standard/read | Read basic properties on all resources in the Microsoft 365 admin center | > | microsoft.teams/allEntities/allProperties/allTasks | Manage all resources in Teams |
+> | microsoft.directory/crossTenantAccessPolicy/standard/read | Read basic properties of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/allowedCloudEndpoints/update | Update allowed cloud endpoints of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/basic/update | Update basic settings of cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/standard/read | Read basic properties of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bCollaboration/update | Update Azure AD B2B collaboration settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/default/tenantRestrictions/update | Update tenant restrictions of the default cross-tenant access policy |
+> | microsoft.directory/crossTenantAccessPolicy/partners/create | Create cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/delete | Delete cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/standard/read | Read basic properties of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bCollaboration/update | Update Azure AD B2B collaboration settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/b2bDirectConnect/update | Update Azure AD B2B direct connect settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/crossCloudMeetings/update | Update cross-cloud Teams meeting settings of cross-tenant access policy for partners |
+> | microsoft.directory/crossTenantAccessPolicy/partners/tenantRestrictions/update | Update tenant restrictions of cross-tenant access policy for partners |
## Teams Communications Administrator
Users with this role can create users, and manage all aspects of users with some
| Create users and groups<br/>Create and manage user views<br/>Manage Office support tickets<br/>Update password expiration policies | | | Manage licenses<br/>Manage all user properties except User Principal Name | Applies to all users, including all admins | | Delete and restore<br/>Disable and enable<br/>Manage all user properties including User Principal Name<br/>Update (FIDO) device keys | Applies to users who are non-admins or in any of the following roles:<ul><li>Helpdesk Administrator</li><li>User with no role</li><li>User Administrator</li></ul> |
-| Invalidate refresh Tokens<br/>Reset password | For a list of the roles that a User Administrator can reset passwords for and invalidate refresh tokens, see [Password reset permissions](#password-reset-permissions). |
+| Invalidate refresh Tokens<br/>Reset password | For a list of the roles that a User Administrator can reset passwords for and invalidate refresh tokens, see [Who can reset passwords](#who-can-reset-passwords). |
+| Update sensitive attributes | For a list of the roles that a User Administrator can update sensitive attributes for, see [Who can update sensitive attributes](#who-can-update-sensitive-attributes). |
> [!IMPORTANT] > Users with this role can change passwords for people who may have access to sensitive or private information or critical configuration inside and outside of Azure Active Directory. Changing the password of a user may mean the ability to assume that user's identity and permissions. For example:
Users with this role can't change the credentials or reset MFA for members and o
> | microsoft.directory/accessReviews/definitions.groups/create | Create access reviews for membership in Security and Microsoft 365 groups. | > | microsoft.directory/accessReviews/definitions.groups/delete | Delete access reviews for membership in Security and Microsoft 365 groups. | > | microsoft.directory/accessReviews/definitions.groups/allProperties/read | Read all properties of access reviews for membership in Security and Microsoft 365 groups, including role-assignable groups. |
+> | microsoft.directory/users/authenticationMethods/create | Create authentication methods for users |
+> | microsoft.directory/users/authenticationMethods/delete | Delete authentication methods for users |
+> | microsoft.directory/users/authenticationMethods/standard/read | Read standard properties of authentication methods for users |
+> | microsoft.directory/users/authenticationMethods/basic/update | Update basic properties of authentication methods for users |
> | microsoft.directory/contacts/create | Create contacts | > | microsoft.directory/contacts/delete | Delete contacts | > | microsoft.directory/contacts/basic/update | Update basic properties on contacts | > | microsoft.directory/deletedItems.groups/restore | Restore soft deleted groups to original state |
+> | microsoft.directory/deletedItems.users/restore | Restore soft deleted users to original state |
> | microsoft.directory/entitlementManagement/allProperties/allTasks | Create and delete resources, and read and update all properties in Azure AD entitlement management | > | microsoft.directory/groups/assignLicense | Assign product licenses to groups for group-based licensing | > | microsoft.directory/groups/create | Create Security groups and Microsoft 365 groups, excluding role-assignable groups |
Restricted Guest User | Not shown because it can't be used | NA
User | Not shown because it can't be used | NA Workplace Device Join | Deprecated | [Deprecated roles documentation](#deprecated-roles)
-## Password reset permissions
+## Who can reset passwords
-Column headings represent the roles that can reset passwords. Table rows contain the roles for which their password can be reset.
+In the following table, the columns list the roles that can reset passwords. The rows list the roles for which their password can be reset.
The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
-Password can be reset | Password Admin | Helpdesk Admin | Authentication Admin | User Admin | Privileged Authentication Admin | Global Admin
+Role that password can be reset | Password Admin | Helpdesk Admin | Auth Admin | User Admin | Privileged Auth Admin | Global Admin
| | | | | |
-Authentication Admin | &nbsp; | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Auth Admin | &nbsp; | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Global Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:\* Groups Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :
Helpdesk Admin | &nbsp; | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Message Center Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
-Privileged Authentication Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Privileged Auth Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
Privileged Role Admin | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
\* A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has 0 Global Administrators.
+## Who can update sensitive attributes
+
+Some administrators can update the following sensitive attributes for some users. All users can read these sensitive attributes.
+
+- accountEnabled
+- businessPhones
+- mobilePhone
+- onPremisesImmutableId
+- otherMails
+- passwordProfile
+- userPrincipalName
+
+In the following table, the columns list the roles that can update the sensitive attributes. The rows list the roles for which their sensitive attributes can be updated.
+
+The following table is for roles assigned at the scope of a tenant. For roles assigned at the scope of an administrative unit, [further restrictions apply](admin-units-assign-roles.md#roles-that-can-be-assigned-with-administrative-unit-scope).
+
+Role that sensitive attributes can be updated | Auth Admin | User Admin | Privileged Auth Admin | Global Admin
+ | | | |
+Auth Admin | :heavy_check_mark: | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Directory Readers | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Global Admin | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Groups Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Guest Inviter | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Helpdesk Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Message Center Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Password Admin | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Privileged Auth Admin | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Privileged Role Admin | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+User<br/>(no admin role, but member or owner of a role-assignable group) | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark:
+User Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+ ## Next steps - [Assign Azure AD roles to groups](groups-assign-role.md)
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
After creating and connecting to the cluster, install the [Open Liberty Operator
```azurecli-interactive # Install Open Liberty Operator
+OPERATOR_VERSION=0.8.2
mkdir -p overlays/watch-all-namespaces
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/overlays/watch-all-namespaces/olo-all-namespaces.yaml -q -P ./overlays/watch-all-namespaces
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/overlays/watch-all-namespaces/cluster-roles.yaml -q -P ./overlays/watch-all-namespaces
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/overlays/watch-all-namespaces/kustomization.yaml -q -P ./overlays/watch-all-namespaces
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/overlays/watch-all-namespaces/olo-all-namespaces.yaml -q -P ./overlays/watch-all-namespaces
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/overlays/watch-all-namespaces/cluster-roles.yaml -q -P ./overlays/watch-all-namespaces
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/overlays/watch-all-namespaces/kustomization.yaml -q -P ./overlays/watch-all-namespaces
mkdir base
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/base/kustomization.yaml -q -P ./base
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/base/open-liberty-crd.yaml -q -P ./base
-wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/0.8.0/kustomize/base/open-liberty-operator.yaml -q -P ./base
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/base/kustomization.yaml -q -P ./base
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/base/open-liberty-crd.yaml -q -P ./base
+wget https://raw.githubusercontent.com/OpenLiberty/open-liberty-operator/main/deploy/releases/${OPERATOR_VERSION}/kustomize/base/open-liberty-operator.yaml -q -P ./base
kubectl apply -k overlays/watch-all-namespaces ```
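To confirm the operator installed successfully, you can check that its custom resource definitions and controller pod were created. This is a minimal sketch; the exact CRD and controller names can vary by operator release, so name filters are used here rather than exact resource names:

```azurecli-interactive
# Verify the Open Liberty custom resource definitions were created
kubectl get crds | grep -i openliberty

# Verify the operator controller pod is running (pod name prefix may vary by release)
kubectl get pods --all-namespaces | grep -i "olo-controller"
```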
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
The following output example resembles successful creation of the resource group
Create an AKS cluster using the [az aks create][az-aks-create] command with the *--enable-addons monitoring* parameter to enable [Container insights][azure-monitor-containers]. The following example creates a cluster named *myAKSCluster* with one node and enables a system-assigned managed identity: ```azurecli-interactive
-az aks create -g myResourceGroup -n myManagedCluster --enable-managed-identity --node-count 1 --enable-addons monitoring
+az aks create -g myResourceGroup -n myAKSCluster --enable-managed-identity --node-count 1 --enable-addons monitoring
``` After a few minutes, the command completes and returns JSON-formatted information about the cluster.
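Once the cluster is provisioned, a typical next step (covered later in the quickstart) is to download credentials and confirm the node is ready before deploying workloads. For example:

```azurecli-interactive
# Merge the AKS cluster credentials into your local kubeconfig
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

# Confirm the single node reports a Ready status
kubectl get nodes
```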
This quickstart is for introductory purposes. For guidance on a creating full so
[kubernetes-deployment]: ../concepts-clusters-workloads.md#deployments-and-yaml-manifests [kubernetes-service]: ../concepts-network.md#services [windows-container-cli]: ../windows-container-cli.md
-[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
+[aks-solution-guidance]: /azure/architecture/reference-architectures/containers/aks-start-here?WT.mc_id=AKSDOCSPAGE
api-management Self Hosted Gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-overview.md
documentationcenter: ''
-+ Last updated 03/18/2022
As of v2.1.1 and above, you can manage the ciphers that are being used through t
- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md) - [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md) - [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
+- [Self-hosted gateway configuration settings](self-hosted-gateway-settings-reference.md)
- Learn about [observability capabilities](observability.md) in API Management
api-management Self Hosted Gateway Settings Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/self-hosted-gateway-settings-reference.md
+
+ Title: Reference - Self-hosted gateway settings - Azure API Management
+description: Reference for the required and optional settings to configure the Azure API Management self-hosted gateway.
+++++ Last updated : 06/28/2022+++
+# Reference: Self-hosted gateway configuration settings
+
+This article provides a reference for required and optional settings that are used to configure the API Management [self-hosted gateway](self-hosted-gateway-overview.md).
+
+> [!IMPORTANT]
+> This reference applies only to the self-hosted gateway v2.
+
+## Deployment
+
+| Name | Description | Required | Default |
+|-||-|-|
+| config.service.endpoint | Configuration endpoint in Azure API Management for the self-hosted gateway. Find this value in the Azure portal under **Gateways** > **Deployment**. | Yes | N/A |
+| config.service.auth | Access token (authentication key) of the self-hosted gateway. Find this value in the Azure portal under **Gateways** > **Deployment**. | Yes | N/A |
++
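As an illustration only, when running the gateway container directly with Docker, these two required values are commonly passed as environment variables from a file. The file name, container name, and image tag below are assumptions, not part of this reference:

```
# Write the two required settings to an environment file (illustrative values)
cat > env.conf <<'EOF'
config.service.endpoint=<gateway-configuration-endpoint>
config.service.auth=<gateway-access-token>
EOF

# Run the self-hosted gateway container with those settings (image tag is an assumption)
docker run -d -p 80:8080 -p 443:8081 --name apim-self-hosted-gateway \
  --env-file env.conf mcr.microsoft.com/azure-api-management/gateway:v2
```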
+## Metrics
+
+| Name | Description | Required | Default |
+|-||-|-|
+| telemetry.metrics.local | Enable [local metrics collection](how-to-configure-local-metrics-logs.md) through StatsD. Value is one of the following: `none`, `statsd`. | No | `none` |
+| telemetry.metrics.local.statsd.endpoint | StatsD endpoint. | Yes, if `telemetry.metrics.local` is set to `statsd`; otherwise no. | N/A |
+| telemetry.metrics.local.statsd.sampling | StatsD metrics sampling rate. Value must be between 0 and 1, for example, 0.5. | No | N/A |
+| telemetry.metrics.local.statsd.tag-format | StatsD exporter [tagging format](https://github.com/prometheus/statsd_exporter#tagging-extensions). Value is one of the following: `librato`, `dogStatsD`, `influxDB`. | No | N/A |
+| telemetry.metrics.cloud | Whether or not to [enable emitting metrics to Azure Monitor](how-to-configure-cloud-metrics-logs.md). | No | `true` |
+| observability.opentelemetry.enabled | Whether or not to enable [emitting metrics to an OpenTelemetry collector](how-to-deploy-self-hosted-gateway-kubernetes-opentelemetry.md) on Kubernetes. | No | `false` |
+| observability.opentelemetry.collector.uri | URI of the OpenTelemetry collector to send metrics to. | Yes, if `observability.opentelemetry.enabled` is set to `true`; otherwise no. | N/A |
+| observability.opentelemetry.histogram.buckets | Histogram buckets in which OpenTelemetry metrics should be reported. Format: "*x,y,z*,...". | No | "5,10,25,50,100,250,500,1000,2500,5000,10000" |
+
+## Logs
+
+| Name | Description | Required | Default |
+| - | - | - | -|
+| telemetry.logs.std |[Enable logging](how-to-configure-local-metrics-logs.md#logs) to a standard stream. Value is one of the following: `none`, `text`, `json`. | No | `text` |
+| telemetry.logs.local | [Enable local logging](how-to-configure-local-metrics-logs.md#logs). Value is one of the following: `none`, `auto`, `localsyslog`, `rfc5424`, `journal`, `json` | No | `auto` |
+| telemetry.logs.local.localsyslog.endpoint | localsyslog endpoint. | Yes if `telemetry.logs.local` is set to `localsyslog`; otherwise no. | N/A |
+| telemetry.logs.local.localsyslog.facility | Specifies localsyslog [facility code](https://en.wikipedia.org/wiki/Syslog#Facility), for example, `7`. | No | N/A |
+| telemetry.logs.local.rfc5424.endpoint | rfc5424 endpoint. | Yes if `telemetry.logs.local` is set to `rfc5424`; otherwise no. | N/A |
+| telemetry.logs.local.rfc5424.facility | Facility code per [rfc5424](https://tools.ietf.org/html/rfc5424), for example, `7` | No | N/A |
+| telemetry.logs.local.journal.endpoint | Journal endpoint. |Yes if `telemetry.logs.local` is set to `journal`; otherwise no. | N/A |
+| telemetry.logs.local.json.endpoint | UDP endpoint that accepts JSON data, specified as file path, IP:port, or hostname:port. | Yes if `telemetry.logs.local` is set to `json`; otherwise no. | 127.0.0.1:8888 |
+
+## Ciphers
+
+| Name | Description | Required | Default |
+| - | - | - | -|
+| net.server.tls.ciphers.allowed-suites | Comma-separated list of ciphers to use for TLS connection between API client and the self-hosted gateway. | No | N/A |
+| net.client.tls.ciphers.allowed-suites | Comma-separated list of ciphers to use for TLS connection between the self-hosted gateway and the backend. | No | N/A |
+
+## How to configure settings
+
+### Kubernetes YAML file
+
+When deploying the self-hosted gateway to Kubernetes using a [YAML file](how-to-deploy-self-hosted-gateway-kubernetes.md), configure settings as name-value pairs in the `data` element of the gateway's ConfigMap. For example:
+
+```yml
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: contoso-gateway-environment
+data:
+  config.service.endpoint: "contoso.configuration.azure-api.net"
+  telemetry.logs.std: "text"
+  telemetry.logs.local.localsyslog.endpoint: "/dev/log"
+  telemetry.logs.local.localsyslog.facility: "7"
+
+[...]
+
+```
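After updating the ConfigMap, you can apply it and restart the gateway deployment so the pods pick up the new values. The file and deployment names in this sketch are placeholders:

```
kubectl apply -f contoso-gateway-environment.yaml
kubectl rollout restart deployment/<gateway-deployment-name>
```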
+
+### Helm chart
+
+When using [Helm](how-to-deploy-self-hosted-gateway-kubernetes-helm.md) to deploy the self-hosted gateway to Kubernetes, pass [chart configuration settings](https://artifacthub.io/packages/helm/azure-api-management/azure-api-management-gateway) as parameters to the `helm install` command. For example:
+
+```
+helm install azure-api-management-gateway \
+ --set gateway.configuration.uri='contoso.configuration.azure-api.net' \
+ --set gateway.auth.key='GatewayKey contosogw&xxxxxxxxxxxxxx...' \
+ --set secret.createSecret=false \
+ --set secret.existingSecretName='mysecret' \
+ azure-apim-gateway/azure-api-management-gateway
+```
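If the chart repository isn't registered locally yet, you would typically add and refresh it before running `helm install`. The repository URL below is the one commonly listed for this chart; treat it as an assumption and confirm it against the chart's listing:

```
helm repo add azure-apim-gateway https://azure.github.io/api-management-self-hosted-gateway/helm-charts/
helm repo update
```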
++
+## Next steps
+
+- Learn more about guidance for [running the self-hosted gateway on Kubernetes in production](how-to-self-hosted-gateway-on-kubernetes-in-production.md)
+- [Deploy self-hosted gateway to Docker](how-to-deploy-self-hosted-gateway-docker.md)
+- [Deploy self-hosted gateway to Kubernetes](how-to-deploy-self-hosted-gateway-kubernetes.md)
+- [Deploy self-hosted gateway to Azure Arc-enabled Kubernetes cluster](how-to-deploy-self-hosted-gateway-azure-arc.md)
++++
app-service Manage Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-backup.md
In [Azure App Service](overview.md), you can easily restore app backups. You can also make on-demand custom backups or configure scheduled custom backups. You can restore a backup by overwriting an existing app by restoring to a new app or slot. This article shows you how to restore a backup and make custom backups.
-Backup and restore**Standard**, **Premium**, **Isolated**. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
+The Backup and Restore feature requires the **Standard**, **Premium**, or **Isolated** tier. For more information about scaling your App Service plan to use a higher tier, see [Scale up an app in Azure](manage-scale-up.md).
## Automatic vs custom backups
There are two types of backups in App Service. Automatic backups made for your a
az webapp config snapshot restore --name <target-app-name> --resource-group <target-group-name> --source-name <source-app-name> --source-resource-group <source-group-name> --time <source-snapshot-timestamp> ```
- To restore app content only and not the app configuration, use the `--restore-content-only` parameter. For more information, see [az webapp config snapshot restore](/cli/webapp/config/snapshot#az-webapp-config-snapshot-restore).
+ To restore app content only and not the app configuration, use the `--restore-content-only` parameter. For more information, see [az webapp config snapshot restore](/cli/azure/webapp/config/snapshot#az-webapp-config-snapshot-restore).
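For example, a content-only restore of the same snapshot might look like the following sketch, where all names and the timestamp are placeholders:

```azurecli-interactive
az webapp config snapshot restore --name <target-app-name> --resource-group <target-group-name> \
    --source-name <source-app-name> --source-resource-group <source-group-name> \
    --time <source-snapshot-timestamp> --restore-content-only
```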
<!-- # [Custom backups](#tab/custom)
app-service Quickstart Arc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-arc.md
Title: 'Quickstart: Create a web app on Azure Arc' description: Get started with App Service on Azure Arc deploying your first web app. Previously updated : 11/02/2021 Last updated : 06/30/2022 ms.devlang: azurecli
The following example creates a Node.js app. Replace `<app-name>` with a name th
--resource-group myResourceGroup \ --name <app-name> \ --custom-location $customLocationId \
- --runtime 'NODE|12-lts'
+ --runtime 'NODE|14-lts'
``` ## 4. Deploy some code
az webapp create \
--resource-group myResourceGroup \ --name <app-name> \ --custom-location $customLocationId \
- --deployment-container-image-name mcr.microsoft.com/appsvc/node:12-lts
+ --deployment-container-image-name mcr.microsoft.com/appsvc/node:14-lts
``` <!-- `TODO: currently gets an error but the app is successfully created: "Error occurred in request., RetryError: HTTPSConnectionPool(host='management.azure.com', port=443): Max retries exceeded with url: /subscriptions/62f3ac8c-ca8d-407b-abd8-04c5496b2221/resourceGroups/myResourceGroup/providers/Microsoft.Web/sites/cephalin-arctest4/config/appsettings?api-version=2020-12-01 (Caused by ResponseError('too many 500 error responses',))"` -->
app-service Quickstart Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-php.md
You can create the web app using the [Azure CLI](/cli/azure/get-started-with-azu
![Screenshot of the Create a new fork page in GitHub for creating a new fork of Azure-Samples/php-docs-hello-world.](media/quickstart-php/fork-details-php-docs-hello-world-repo.png) >[!NOTE]
-> This should take you to the new fork. Your fork URL will look something like this: https://github.com/YOUR_GITHUB_ACCOUNT_NAME/php-docs-hello-world
+> This should take you to the new fork. Your fork URL will look something like this: `https://github.com/YOUR_GITHUB_ACCOUNT_NAME/php-docs-hello-world`
automation Automation Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-faq.md
This Microsoft FAQ is a list of commonly asked questions about Azure Automation. If you have any other questions about its capabilities, go to the discussion forum and post your questions. When a question is frequently asked, we add it to this article so that it's found quickly and easily.
+## Why can't I create a new Automation job in the West Europe region?
+
+You might experience delays or failures when creating jobs because of scalability issues in the West Europe region. For more information, see [Unable to create a new Automation job in the West Europe region](./troubleshoot/runbooks.md#scenario-unable-to-create-new-automation-job-in-west-europe-region).
++ ## Can Update Management prevent unexpected OS-level upgrades? Yes. For more information, see [Exclude updates](./update-management/manage-updates-for-vm.md#exclude-updates).
automation Automation Hrw Run Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md
By default, the Hybrid jobs run under the context of System account. However, to
1. Select **Settings**. 1. Change the value of **Hybrid Worker credentials** from **Default** to **Custom**. 1. Select the credential and click **Save**.
-1. If the following permissions are not assigned for Custom users, jobs might get suspended.
+1. If the following permissions are not assigned for Custom users, jobs might get suspended. Add these permissions to the Hybrid Runbook Worker account on the runbook worker machine, instead of adding the account to the **Administrators** group, because the `Filtered Token` feature of UAC grants standard user rights to this account at sign-in. For more information, see [Information about UAC on Windows Server](/troubleshoot/windows-server/windows-security/disable-user-account-control#more-information).
Use your discretion in assigning the elevated permissions corresponding to the following registry keys/folders: **Registry path**
To help troubleshoot issues with your runbooks running on a hybrid runbook worke
* If your runbooks aren't completing successfully, review the troubleshooting guide for [runbook execution failures](troubleshoot/hybrid-runbook-worker.md#runbook-execution-fails). * For more information on PowerShell, including language reference and learning modules, see [PowerShell Docs](/powershell/scripting/overview). * Learn about [using Azure Policy to manage runbook execution](enforce-job-execution-hybrid-worker.md) with Hybrid Runbook Workers.
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
automation Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/runbooks.md
When you receive errors during runbook execution in Azure Automation, you can us
If you're running your runbooks on a Hybrid Runbook Worker instead of in Azure Automation, you might need to [troubleshoot the hybrid worker itself](hybrid-runbook-worker.md).
+## Scenario: Unable to create new Automation job in West Europe region
+
+### Issue
+When you create new Automation jobs, you might experience delays, or job creation might fail.
+
+### Cause
+This is because of scalability limits with the Automation service in the West Europe region.
+
+### Resolution
+To reduce the chance of failure, take one of the following actions, as feasible for your requirements and environment:
+
+- During peak job creation times, typically on the hour and half hour, move the job start time to five minutes before or after the hour or half hour.
+- Run the Automation jobs from alternate regions until the transition work is complete.
+
+>[!NOTE]
+> The product group is working to optimize the existing load and transition it to a new design.
+ ## <a name="runbook-fails-no-permission"></a>Scenario: Runbook fails with "this.Client.SubscriptionId cannot be null." error message ### Issue
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
For more information, see [Virtual network service tags](../../virtual-network/s
The table below lists the URLs that must be available in order to install and use the Connected Machine agent.
+# [Azure Cloud](#tab/azure-cloud)
+ | Agent resource | Description | When required| Endpoint used with private link | |||--|| |`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
The table below lists the URLs that must be available in order to install and us
|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured | |`dc.services.visualstudio.com`|Agent telemetry|Optional| Public |
+# [Azure Government](#tab/azure-government)
+
+| Agent resource | Description | When required| Endpoint used with private link |
+|||--||
+|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
+|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
+|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
+|`login.microsoftonline.us`|Azure Active Directory|Always| Public |
+|`pasff.usgovcloudapi.net`|Azure Active Directory|Always| Public |
+|`management.usgovcloudapi.net`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
+|`*.his.arc.azure.us`|Metadata and hybrid identity services|Always| Private |
+|`*.guestconfiguration.azure.us`| Extension management and guest configuration services |Always| Private |
+|`*.blob.core.usgovcloudapi.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
+|`dc.applicationinsights.us`|Agent telemetry|Optional| Public |
+ ## Transport Layer Security 1.2 protocol To ensure the security of data in transit to Azure, we strongly encourage you to configure machine to use Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable and while they still currently work to allow backwards compatibility, they are **not recommended**.
azure-cache-for-redis Cache Aspnet Session State Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-aspnet-session-state-provider.md
Once these steps are performed, your application is configured to use the Azure
* In Memory Session State Provider - This provider stores the Session State in memory. The benefit of using this provider is simplicity and speed. However, you can't scale your Web Apps if you're using in memory provider since it isn't distributed. * Sql Server Session State Provider - This provider stores the Session State in Sql Server. Use this provider if you want to store the Session state in persistent storage. You can scale your Web App but using Sql Server for Session has a performance effect on your Web App. You can also use this provider with an [In-Memory OLTP configuration](/archive/blogs/sqlserverstorageengine/asp-net-session-state-with-sql-server-in-memory-oltp) to help improve performance.
-* Distributed In Memory Session State Provider such as Azure Cache for Redis Session State Provider - This provider gives you the best of both worlds. Your Web App can have a simple, fast, and scalable Session State Provider. Because this provider stores the Session state in a Cache, your app has to take in consideration all the characteristics associated when talking to a Distributed In Memory Cache, such as transient network failures. For best practices on using Cache, see [Caching guidance](/azure/architecture/best-practices/caching) from Microsoft Patterns & Practices [Azure Cloud Application Design and Implementation Guidance](https://github.com/mspnp/azure-guidance).
+* Distributed In Memory Session State Provider such as Azure Cache for Redis Session State Provider - This provider gives you the best of both worlds. Your Web App can have a simple, fast, and scalable Session State Provider. Because this provider stores the Session state in a Cache, your app has to take into consideration all the characteristics associated with talking to a Distributed In Memory Cache, such as transient network failures. For best practices on using Cache, see [Caching guidance](/azure/architecture/best-practices/caching) from Microsoft Patterns & Practices Azure Cloud Application Design and Implementation Guidance.
For more information about session state and other best practices, see [Web Development Best Practices (Building Real-World Cloud Apps with Azure)](https://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/building-real-world-cloud-apps-with-windows-azure/web-development-best-practices).
azure-functions Consumption Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/consumption-plan.md
Title: Azure Functions Consumption plan hosting
-description: Learn about how Azure Function Consumption plan hosting lets you run your code in an environment that scales dynamically, but you only pay for resources used during execution.
+description: Learn about how Azure Functions Consumption plan hosting lets you run your code in an environment that scales dynamically, but you only pay for resources used during execution.
Last updated 8/31/2020 # Customer intent: As a developer, I want to understand the benefits of using the Consumption plan so I can get the scalability benefits of Azure Functions without having to pay for resources I don't need.
azure-functions Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/quickstart-java.md
Add a `host.json` file to your project directory. It should look similar to the
It's important to note that only the Azure Functions v4 _Preview_ bundle currently has the necessary support for Durable Functions for Java.
+> [!WARNING]
+> Be aware that the Azure Functions v4 preview bundles do not yet support Cosmos DB bindings for Java function apps. For more information, see [Azure Cosmos DB trigger and bindings reference documentation](../functions-bindings-cosmosdb-v2.md?tabs=in-process%2Cextensionv4&pivots=programming-language-java#install-bundle).
+ Add a `local.settings.json` file to your project directory. You should have the connection string of your Azure Storage account configured for `AzureWebJobsStorage`: ```json
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
The following considerations apply when using an Azure Resource Manager (ARM) te
To learn more, see [Automate resource deployment for your function app](functions-infrastructure-as-code.md?tabs=windows#create-a-function-app).
-## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
-
-The WEBSITE_CONTENTAZUREFILECONNECTIONSTRING and WEBSITE_CONTENTSHARE settings have additional validation checks to ensure that the app can be properly started. Creation of application settings will fail if the Function App cannot properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation will take place.
-
-|Key|Sample value|
-|||
-|WEBSITE_SKIP_CONTENTSHARE_VALIDATION|`1`|
-
-If validation is skipped and either the connection string or content share are not valid, the app will be unable to start properly and will only serve HTTP 500 errors.
- ## WEBSITE\_DNS\_SERVER Sets the DNS server used by an app when resolving IP addresses. This setting is often required when using certain networking functionality, such as [Azure DNS private zones](functions-networking-options.md#azure-dns-private-zones) and [private endpoints](functions-networking-options.md#restrict-your-storage-account-to-a-virtual-network).
Sets the version of Node.js to use when running your function app on Windows. Yo
||| |WEBSITE\_NODE\_DEFAULT_VERSION|`~10`|
+## WEBSITE\_OVERRIDE\_STICKY\_EXTENSION\_VERSIONS
+
+By default, the version settings for function apps are specific to each slot. This behavior prevents unanticipated changes in runtime version after a swap. Set this setting to `0` in both the production slot and the staging slot when upgrading by using [deployment slots](functions-deployment-slots.md), so that all version settings are also swapped. For more information, see [Migrate using slots](functions-versions.md#migrate-using-slots).
+
+|Key|Sample value|
+|||
+|WEBSITE\_OVERRIDE\_STICKY\_EXTENSION\_VERSIONS|`0`|
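For example, you can set this value on both the production slot and a staging slot with the Azure CLI before performing a swap. The app, resource group, and slot names are placeholders:

```azurecli
az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
```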
+ ## WEBSITE\_RUN\_FROM\_PACKAGE Enables your function app to run from a mounted package file.
Enables your function app to run from a mounted package file.
Valid values are either a URL that resolves to the location of a deployment package file, or `1`. When set to `1`, the package must be in the `d:\home\data\SitePackages` folder. When using zip deployment with this setting, the package is automatically uploaded to this location. In preview, this setting was named `WEBSITE_RUN_FROM_ZIP`. For more information, see [Run your functions from a package file](run-functions-from-deployment-package.md).
+## WEBSITE\_SKIP\_CONTENTSHARE\_VALIDATION
+
+The [WEBSITE_CONTENTAZUREFILECONNECTIONSTRING](#website_contentazurefileconnectionstring) and [WEBSITE_CONTENTSHARE](#website_contentshare) settings have additional validation checks to ensure that the app can be properly started. Creation of application settings will fail if the Function App cannot properly call out to the downstream Storage Account or Key Vault due to networking constraints or other limiting factors. When WEBSITE_SKIP_CONTENTSHARE_VALIDATION is set to `1`, the validation check is skipped; otherwise the value defaults to `0` and the validation will take place.
+
+|Key|Sample value|
+|||
+|WEBSITE_SKIP_CONTENTSHARE_VALIDATION|`1`|
+
+If validation is skipped and either the connection string or content share are not valid, the app will be unable to start properly and will only serve HTTP 500 errors.
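If you do need to bypass the check, one way is to set the flag with the Azure CLI, as in this sketch (names are placeholders):

```azurecli
az functionapp config appsettings set --settings WEBSITE_SKIP_CONTENTSHARE_VALIDATION=1 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
```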
+ ## WEBSITE\_TIME\_ZONE Allows you to set the timezone for your function app.
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
def main(mytimer: azure.functions.TimerRequest, context: azure.functions.Context
logging.info(f'Current retry count: {context.retry_context.retry_count}') if context.retry_context.retry_count == context.retry_context.max_retry_count:
- logging.info(
+ logging.warn(
f"Max retries of {context.retry_context.max_retry_count} for " f"function {context.function_name} has been reached")
azure-functions Functions Deployment Slots https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-deployment-slots.md
There are a number of advantages to using deployment slots. The following scenar
- **Different environments for different purposes**: Using different slots gives you the opportunity to differentiate app instances before swapping to production or a staging slot. - **Prewarming**: Deploying to a slot instead of directly to production allows the app to warm up before going live. Additionally, using slots reduces latency for HTTP-triggered workloads. Instances are warmed up before deployment, which reduces the cold start for newly deployed functions. - **Easy fallbacks**: After a swap with production, the slot with a previously staged app now has the previous production app. If the changes swapped into the production slot aren't as you expect, you can immediately reverse the swap to get your "last known good instance" back.
+- **Minimize restarts**: Changing app settings in a production slot requires a restart of the running app. You can instead change settings in a staging slot and swap the settings change into production with a prewarmed instance. This is the recommended way to upgrade between Functions runtime versions while maintaining the highest availability. To learn more, see [Minimum downtime upgrade](functions-versions.md#minimum-downtime-upgrade).
## Swap operations
azure-functions Functions How To Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-github-actions.md
env:
AZURE_FUNCTIONAPP_NAME: your-app-name # set this to your function app name on Azure POM_XML_DIRECTORY: '.' # set this to the directory which contains pom.xml file POM_FUNCTIONAPP_NAME: your-app-name # set this to the function app name in your local development environment
- JAVA_VERSION: '1.8.x' # set this to the dotnet version to use
+ JAVA_VERSION: '1.8.x' # set this to the java version to use
jobs: build-and-deploy:
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
The following table shows the PowerShell versions available to each major versio
| Functions version | PowerShell version | .NET version | |-|--||
-| 3.x (recommended) | PowerShell 7 (recommended)<br/>PowerShell Core 6 | .NET Core 3.1<br/>.NET Core 2.1 |
+| 4.x (recommended) | PowerShell 7.2 (preview)<br/>PowerShell 7 (recommended) | .NET 6 |
+| 3.x | PowerShell 7<br/>PowerShell Core 6 | .NET Core 3.1<br/>.NET Core 2.1 |
| 2.x | PowerShell Core 6 | .NET Core 2.2 | You can see the current version by printing `$PSVersionTable` from any function.
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Azure Functions supports the following Python versions. These are official Pytho
| 3.x | 3.9<br/> 3.8<br/>3.7<br/>3.6 | | 2.x | 3.7<br/>3.6 |
-To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The `--functions-version` option sets the Azure Functions runtime version. The Python version is set when the function app is created and can't be changed.
-
-The runtime uses the available Python version, when you run it locally.
+To request a specific Python version when you create your function app in Azure, use the `--runtime-version` option of the [`az functionapp create`](/cli/azure/functionapp#az-functionapp-create) command. The `--functions-version` option sets the Azure Functions runtime version.
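For illustration, a Linux Consumption function app pinned to Python 3.9 on the version 4.x runtime might be created as follows. The resource names and region are placeholders, and the storage account must already exist:

```azurecli
az functionapp create --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME> \
    --storage-account <STORAGE_ACCOUNT_NAME> --consumption-plan-location westeurope \
    --os-type Linux --runtime python --runtime-version 3.9 --functions-version 4
```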
### Changing Python version
To set a Python function app to a specific language version, you need to specify
To learn more about the Azure Functions runtime support policy, see [Language runtime support policy](./language-support-policy.md).
-To see the full list of supported Python versions for function apps, see [Supported languages in Azure Functions](./supported-languages.md).
-
-# [Azure CLI](#tab/azurecli-linux)
- You can view and set `linuxFxVersion` from the Azure CLI by using the [az functionapp config show](/cli/azure/functionapp/config) command. Replace `<function_app>` with the name of your function app. Replace `<my_resource_group>` with the name of the resource group for your function app. ```azurecli-interactive
You can run the command from [Azure Cloud Shell](../cloud-shell/overview.md) by
The function app restarts after you change the site configuration.
-
+### Local Python version
+When running locally, the Azure Functions Core Tools uses the available Python version.
## Package management
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
zone_pivot_groups: programming-languages-set-functions
| 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. | > [!IMPORTANT]
-> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime can no longer be supported. Before that time, please test, verify, and migrate your function apps to version 4.x of the Functions runtime. End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these runtime versions. This requirement affects all Azure Functions runtime languages.
+> Beginning on December 3, 2022, function apps running on versions 2.x and 3.x of the Azure Functions runtime can no longer be supported. Before that time, please test, verify, and migrate your function apps to version 4.x of the Functions runtime. For more information, see [Migrating from 3.x to 4.x](#migrating-from-3x-to-4x).
+>End of support for these runtime versions is due to the ending of support for .NET Core 3.1, which is required by these older runtime versions. This requirement affects all Azure Functions runtime languages.
>Functions version 1.x is still supported for C# function apps that require the .NET Framework. Preview support is now available in Functions 4.x to [run C# functions on .NET Framework 4.8](dotnet-isolated-process-guide.md#supported-versions). This article details some of the differences between these versions, how you can create each version, and how to change the version on which your functions run.
By default, function apps created in the Azure portal and by the Azure CLI are s
+ [Between 2.x and 3.x](#breaking-changes-between-2x-and-3x) + [Between 1.x and later versions](#migrating-from-1x-to-later-versions)
-Before making a change to the major version of the runtime, you should first test your existing code by deploying to another function app running on the latest major version. This testing helps to make sure it runs correctly after the upgrade. You can also verify your code locally by using the runtime-specific version of the [Azure Functions Core Tools](functions-run-local.md), which includes the Functions runtime.
+Before making a change to the major version of the runtime, you should first test your existing code on the new runtime version. You can verify your app runs correctly after the upgrade by deploying to another function app running on the latest major version. You can also verify your code locally by using the runtime-specific version of the [Azure Functions Core Tools](functions-run-local.md), which includes the Functions runtime.
Downgrades to v2.x aren't supported. When possible, you should always run your apps on the latest supported version of the Functions runtime.
To learn more, see [How to target Azure Functions runtime versions](set-runtime-
### Pinning to a specific minor version
-To resolve issues your function app may have when running on the latest major version, you have to temporarily pin your app to a specific minor version. This gives you time to get your app running correctly on the latest major version. The way that you pin to a minor version differs between Windows and Linux. To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
+To resolve issues your function app may have when running on the latest major version, you have to temporarily pin your app to a specific minor version. Pinning gives you time to get your app running correctly on the latest major version. The way that you pin to a minor version differs between Windows and Linux. To learn more, see [How to target Azure Functions runtime versions](set-runtime-version.md).
Older minor versions are periodically removed from Functions. For the latest news about Azure Functions releases, including the removal of specific older minor versions, monitor [Azure App Service announcements](https://github.com/Azure/app-service-announcements/issues).
Any function app pinned to `~2.0` continues to run on .NET Core 2.2, which no lo
::: zone pivot="programming-language-csharp" There's technically not a correlation between binding extension versions and the Functions runtime version. However, starting with version 4.x the Functions runtime enforces a minimum version for all trigger and binding extensions.
-If you receive a warning about a package not meeting a minimum required version, you should update that NuGet package to the minimum version as you normally would. The minimum version requirements for extensions used in Functions v4.x can be found in [this configuration file](https://github.com/Azure/azure-functions-host/blob/v4.x/src/WebJobs.Script/extensionrequirements.json).
+If you receive a warning about a package not meeting a minimum required version, you should update that NuGet package to the minimum version as you normally would. The minimum version requirements for extensions used in Functions v4.x can be found in [the linked configuration file](https://github.com/Azure/azure-functions-host/blob/v4.x/src/WebJobs.Script/extensionrequirements.json).
For C# script, update the extension bundle reference in the host.json as follows:
To learn more about extension bundles, see [Extension bundles](functions-binding
## <a name="migrating-from-3x-to-4x"></a>Migrating from 3.x to 4.x
-Azure Functions version 4.x is highly backwards compatible to version 3.x. Many apps should safely upgrade to 4.x without significant code changes. Be sure to fully test your app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md) or [in a staging slot](functions-deployment-slots.md) before changing the major version in production apps.
+Azure Functions version 4.x is highly backward compatible with version 3.x. Most apps should safely upgrade to 4.x without requiring significant code changes. An upgrade is initiated when you set the `FUNCTIONS_EXTENSION_VERSION` app setting to a value of `~4`. For function apps running on Windows, you also need to set the `netFrameworkVersion` site setting to target .NET 6.
-### Upgrading an existing app
+Before you upgrade your app to version 4.x of the Functions runtime, you should do the following tasks:
-When you develop your function app locally, you must upgrade both your local project environment and your function app running in Azure.
+* Review the list of [breaking changes between 3.x and 4.x](#breaking-changes-between-3x-and-4x).
+* [Run the pre-upgrade validator](#run-the-pre-upgrade-validator).
+* When possible, [upgrade your local project environment to version 4.x](#upgrade-your-local-project). Fully test your app locally using version 4.x of the [Azure Functions Core Tools](functions-run-local.md). When you use Visual Studio to publish a version 4.x project to an existing function app at a lower version, you're prompted to let Visual Studio upgrade the function app to version 4.x during deployment. This upgrade uses the same process defined in [Migrate without slots](#migrate-without-slots).
+* Consider using a [staging slot](functions-deployment-slots.md) to test and verify your app in Azure on the new runtime version. You can then deploy your app with the updated version settings to the production slot. For more information, see [Migrate using slots](#migrate-using-slots).
-#### Local project
+### Run the pre-upgrade validator
-Upgrading instructions may be language dependent. If you don't see your language, please select it from the switcher at the [top of the article](#top).
+Azure Functions provides a pre-upgrade validator to help you identify potential issues when migrating your function app to 4.x. To run the pre-upgrade validator:
-To update a C# class library app to .NET 6 and Azure Functions 4.x, update the `TargetFramework` and `AzureFunctionsVersion`:
+1. In the [Azure portal](https://portal.azure.com), navigate to your function app.
-```xml
-<TargetFramework>net6.0</TargetFramework>
-<AzureFunctionsVersion>v4</AzureFunctionsVersion>
+1. Open the **Diagnose and solve problems** page.
+
+1. In **Function App Diagnostics**, start typing `Functions 4.x Pre-Upgrade Validator` and then choose it from the list.
+
+1. After validation completes, review the recommendations and address any issues in your app. If you need to make changes to your app, make sure to validate the changes against version 4.x of the Functions runtime, either [locally using Azure Functions Core Tools v4](#upgrade-your-local-project) or by [using a staging slot](#migrate-using-slots).
+
+### Migrate without slots
+
+The simplest way to upgrade to v4.x is to set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4` on your function app in Azure. When your function app runs on Windows, you also need to update the `netFrameworkVersion` site setting in Azure. You must follow a [different procedure](#migrate-using-slots) on a site with slots.
+
+# [Azure CLI](#tab/azure-cli)
+
+```azurecli
+az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
```
-You must also make sure the NuGet packages references by your app are updated to the latest versions. See [breaking changes](#breaking-changes-between-3x-and-4x) for more information. Specific packages depend on whether your functions run in-process or out-of-process.
+# [Azure PowerShell](#tab/azure-powershell)
-# [In-process](#tab/in-process)
+```azurepowershell
+Update-AzFunctionAppSetting -AppSetting @{FUNCTIONS_EXTENSION_VERSION = "~4"} -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -Force
+```
-* [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) 4.0.0 or later
+
-# [Isolated process](#tab/isolated-process)
+When running on Windows, you also need to enable .NET 6.0, which is required by version 4.x of the runtime.
-* [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/) 1.5.2 or later
-* [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/) 1.2.0 or later
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzWebApp -NetFrameworkVersion v6.0 -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
+```
-To update your app to Azure Functions 4.x, update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to 4.x and update your app's [Azure Functions extensions bundle](functions-bindings-register.md#extension-bundles) to 2.x or above. See [breaking changes](#breaking-changes-between-3x-and-4x) for more information.
-> [!NOTE]
-> Node.js 10 and 12 are not supported in Azure Functions 4.x.
-> [!NOTE]
-> PowerShell 6 is not supported in Azure Functions 4.x.
-> [!NOTE]
-> Python 3.6 isn't supported in Azure Functions 4.x.
+In these examples, replace `<APP_NAME>` with the name of your function app and `<RESOURCE_GROUP_NAME>` with the name of the resource group.
-#### Azure
+### Migrate using slots
-A pre-upgrade validator is available to help identify potential issues when migrating a function app to 4.x. Before you migrate an existing app, follow these steps to run the validator:
+Using [deployment slots](functions-deployment-slots.md) is a good way to migrate your function app to the v4.x runtime from a previous version. By using a staging slot, you can run your app on the new runtime version in the staging slot and switch to production after verification. Slots also provide a way to minimize downtime during upgrade. If you need to minimize downtime, follow the steps in [Minimum downtime upgrade](#minimum-downtime-upgrade).
-1. In the Azure portal, navigate to your function app
+After you've verified your app in the upgraded slot, you can swap the app and new version settings into production. This swap requires setting [`WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0`](functions-app-settings.md#website_override_sticky_extension_versions) in the production slot. How you add this setting affects the amount of downtime required for the upgrade.
-1. Open the *Diagnose and solve problems* blade
+#### Standard upgrade
-1. In *Search for common problems or tools*, enter and select **Functions 4.x Pre-Upgrade Validator**
+If your slot-enabled function app can handle the downtime of a full restart, you can update the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` setting directly in the production slot. Because changing this setting directly in the production slot causes a restart that impacts availability, consider doing this change at a time of reduced traffic. You can then swap in the upgraded version from the staging slot.
-Once you have validated that the app can be upgraded, you can begin the process of migration. See the subsections below for instructions for [migration without slots](#migration-without-slots) and [migration with slots](#migration-with-slots).
+The [`Update-AzFunctionAppSetting`](/powershell/module/az.functions/update-azfunctionappsetting) PowerShell cmdlet doesn't currently support slots. You must use Azure CLI or the Azure portal.
-> [!NOTE]
-> If you are using a slot to manage the migration, you will need to set the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` application setting to "0" on _both_ slots. This allows the version changes you make to be included in the slot swap operation. You can then upgrade your staging (non-production) slot, and then you can perform the swap.
+1. Use the following command to set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the production slot:
-To migrate an app from 3.x to 4.x, you will:
+ ```azurecli
+ az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
+ ```
+ This command causes the app running in the production slot to restart.
-- Set the `FUNCTIONS_EXTENSION_VERSION` application setting to `~4`-- **For Windows function apps only**, enable .NET 6.0 through the `netFrameworkVersion` setting
+1. Use the following command to also set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` in the staging slot:
-##### Migration without slots
+ ```azurecli
+ az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
-You can use the following Azure CLI or Azure PowerShell commands to perform this upgrade directly on a site without slots:
+1. Use the following command to change `FUNCTIONS_EXTENSION_VERSION` and upgrade the staging slot to the new runtime version:
-# [Azure CLI](#tab/azure-cli)
+ ```azurecli
+ az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
-```azurecli
-az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
+1. (Windows only) For function apps running on Windows, use the following command so that the runtime can run on .NET 6:
+
+ ```azurecli
+ az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
-# For Windows function apps only, also enable .NET 6.0 that is needed by the runtime
-az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
-```
+ Version 4.x of the Functions runtime requires .NET 6 when running on Windows.
-# [Azure PowerShell](#tab/azure-powershell)
+1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
-```azurepowershell
-Update-AzFunctionAppSetting -AppSetting @{FUNCTIONS_EXTENSION_VERSION = "~4"} -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME> -Force
+1. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
-# For Windows function apps only, also enable .NET 6.0 that is needed by the runtime
-Set-AzWebApp -NetFrameworkVersion v6.0 -Name <APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
-```
+1. Use the following command to swap the upgraded staging slot to production:
-
+ ```azurecli
+ az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
+ ```
-##### Migration with slots
+#### Minimum downtime upgrade
-You can use the following Azure CLI commands to perform this upgrade using deployment slots:
+To minimize the downtime in your production app, you can swap the `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS` setting from the staging slot into production. After that, you can swap in the upgraded version from a prewarmed staging slot.
-First, update the production slot with `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0`. If your app can tolerate a restart (which impacts availability), it is recommended that you update the setting directly on the production slot, possibly at a time of lower traffic. If you instead choose to swap this setting into place, you should immediately update the staging slot after the swap. A consequence of swapping when only staging has `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` is that it will remove the `FUNCTIONS_EXTENSION_VERSION` setting in staging, putting the slot into a bad state. Updating the staging slot with a version right after the swap enables you to roll your changes back if necessary. However, in such a situation, you should still be prepared to directly update settings on production to remove `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` before the swap back.
+1. Use the following command to set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the staging slot:
-```azurecli
-# Update production with WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS
-az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME>
+ ```azurecli
+ az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
+1. Use the following commands to swap the slot with the new setting into production, and at the same time restore the version setting in the staging slot.
-# OR
+ ```azurecli
+ az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
+ az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~3 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
-# Alternatively get production prepared with WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS via a swap
-az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
-# The swap actions should be accompanied with a version specification for the slot. You may see errors from staging during the time between these actions.
-az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
-az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~3 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
-```
+ You may see errors from the staging slot during the time between the swap and the runtime version being restored on staging. This can happen because having `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` only in staging during a swap removes the `FUNCTIONS_EXTENSION_VERSION` setting in staging. Without the version setting, your slot is in a bad state. Updating the version in the staging slot right after the swap should put the slot back into a good state, and you can roll back your changes if needed. However, any rollback of the swap also requires you to directly remove `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` from production before swapping back, to prevent the errors seen in staging from also occurring in production. This change in the production setting would then cause a restart.
-After the production slot has `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` configured, you can configure everything else in the staging slot and then swap:
+1. Use the following command to again set `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` in the staging slot:
-```azurecli
-# Get staging configured with WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS
-az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
-# Get staging configured with the new extension version
-az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
-# For Windows function apps only, also enable .NET 6.0 that is needed by the runtime
-az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```azurecli
+ az functionapp config appsettings set --settings WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
-# Be sure to confirm that your staging environment is working as expected before swapping.
+ At this point, both slots have `WEBSITE_OVERRIDE_STICKY_EXTENSION_VERSIONS=0` set.
-# Swap to migrate production to the new version
-az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
-```
+1. Use the following command to change `FUNCTIONS_EXTENSION_VERSION` and upgrade the staging slot to the new runtime version:
+
+ ```azurecli
+ az functionapp config appsettings set --settings FUNCTIONS_EXTENSION_VERSION=~4 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
+
+1. (Windows only) For function apps running on Windows, use the following command so that the runtime can run on .NET 6:
+
+ ```azurecli
+ az functionapp config set --net-framework-version v6.0 -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME>
+ ```
+
+ Version 4.x of the Functions runtime requires .NET 6 when running on Windows.
+
+1. If your code project required any updates to run on version 4.x, deploy those updates to the staging slot now.
+
+1. Confirm that your function app runs correctly in the upgraded staging environment before swapping.
+
+1. Use the following command to swap the upgraded and prewarmed staging slot to production:
+
+ ```azurecli
+ az functionapp deployment slot swap -g <RESOURCE_GROUP_NAME> -n <APP_NAME> --slot <SLOT_NAME> --target-slot production
+ ```
+
+### Upgrade your local project
+
+Upgrading instructions are language dependent. If you don't see your language, choose it from the switcher at the [top of the article](#top).
+
+To update a C# class library project to .NET 6 and Azure Functions 4.x:
+
+1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.
+
+1. Update the `TargetFramework` and `AzureFunctionsVersion`, as follows:
+
+ ```xml
+ <TargetFramework>net6.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
+ ```
+
+1. Update the NuGet packages referenced by your app to the latest versions. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
+ Specific packages depend on whether your functions run in-process or out-of-process.
+
+ # [In-process](#tab/in-process)
+
+ * [Microsoft.NET.Sdk.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions/) 4.0.0 or later
+
+ # [Isolated process](#tab/isolated-process)
+
+ * [Microsoft.Azure.Functions.Worker](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker/) 1.5.2 or later
+ * [Microsoft.Azure.Functions.Worker.Sdk](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Sdk/) 1.2.0 or later
+
+
+To update your project to Azure Functions 4.x:
+
+1. Update your local installation of [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to version 4.x.
+
+1. Update your app's [Azure Functions extensions bundle](functions-bindings-register.md#extension-bundles) to 2.x or above. For more information, see [breaking changes](#breaking-changes-between-3x-and-4x).
+
+1. If you're using Node.js version 10 or 12, move to one of the [supported versions](functions-reference-node.md#node-version).
+1. If you're using PowerShell Core 6, move to one of the [supported versions](functions-reference-powershell.md#powershell-versions).
+1. If you're using Python 3.6, move to one of the [supported versions](functions-reference-python.md#python-version).
### Breaking changes between 3.x and 4.x
The following are some changes to be aware of before upgrading a 3.x app to 4.x.
#### Runtime -- Azure Functions Proxies are no longer supported in 4.x. You are recommended to use [Azure API Management](../api-management/import-function-app-as-api.md).
+- Azure Functions Proxies are no longer supported in 4.x. We recommend that you use [Azure API Management](../api-management/import-function-app-as-api.md).
- Logging to Azure Storage using *AzureWebJobsDashboard* is no longer supported in 4.x. You should instead use [Application Insights](./functions-monitoring.md). ([#1923](https://github.com/Azure/Azure-Functions/issues/1923))
The following are some changes to be aware of before upgrading a 3.x app to 4.x.
::: zone-end ::: zone pivot="programming-language-javascript,programming-language-typescript" -- Node.js versions 10 and 12 are not supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+- Node.js versions 10 and 12 aren't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
- Output serialization in Node.js apps was updated to address previous inconsistencies. ([#2007](https://github.com/Azure/Azure-Functions/issues/2007)) ::: zone-end ::: zone pivot="programming-language-powershell" -- PowerShell 6 is not supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+- PowerShell 6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
-- Default thread count has been updated. Functions that are not thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
+- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
::: zone-end ::: zone pivot="programming-language-python" -- Python 3.6 is not supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
+- Python 3.6 isn't supported in Azure Functions 4.x. ([#1999](https://github.com/Azure/Azure-Functions/issues/1999))
- Shared memory transfer is enabled by default. ([#1973](https://github.com/Azure/Azure-Functions/issues/1973)) -- Default thread count has been updated. Functions that are not thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
+- Default thread count has been updated. Functions that aren't thread-safe or have high memory usage may be impacted. ([#1962](https://github.com/Azure/Azure-Functions/issues/1962))
::: zone-end ## Migrating from 2.x to 3.x
In version 2.x, the following changes were made:
* The default timeout for functions in an App Service plan is changed to 30 minutes. You can manually change the timeout back to unlimited by using the [functionTimeout](functions-host-json.md#functiontimeout) setting in host.json.
-* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
+* HTTP concurrency throttles are implemented by default for Consumption plan functions, with a default of 100 concurrent requests per instance. You can change this behavior in the [`maxConcurrentRequests`](functions-host-json.md#http) setting in the host.json file.
* Because of [.NET Core limitations](https://github.com/Azure/azure-functions-host/issues/3414), support for F# script (`.fsx` files) functions has been removed. Compiled F# functions (.fs) are still supported.
In Visual Studio, you select the runtime version when you create a project. Azur
<AzureFunctionsVersion>v4</AzureFunctionsVersion> ```
-You can also choose `net6.0` or `net48` as the target framework if you are using [.NET isolated process functions](dotnet-isolated-process-guide.md). Support for `net48` is currently in preview.
+You can also choose `net6.0` or `net48` as the target framework if you're using [.NET isolated process functions](dotnet-isolated-process-guide.md). Support for `net48` is currently in preview.
> [!NOTE] > Azure Functions 4.x requires the `Microsoft.NET.Sdk.Functions` extension be at least `4.0.0`.
You can also choose `net6.0` or `net48` as the target framework if you are using
<AzureFunctionsVersion>v3</AzureFunctionsVersion> ```
-You can also choose `net5.0` as the target framework if you are using [.NET isolated process functions](dotnet-isolated-process-guide.md).
+You can also choose `net5.0` as the target framework if you're using [.NET isolated process functions](dotnet-isolated-process-guide.md).
> [!NOTE] > Azure Functions 3.x and .NET requires the `Microsoft.NET.Sdk.Functions` extension be at least `3.0.0`.
You can also choose `net5.0` as the target framework if you are using [.NET isol
###### Updating 2.x apps to 3.x in Visual Studio
-You can open an existing function targeting 2.x and move to 3.x by editing the `.csproj` file and updating the values above. Visual Studio manages runtime versions automatically for you based on project metadata. However, it's possible if you have never created a 3.x app before that Visual Studio doesn't yet have the templates and runtime for 3.x on your machine. This may present itself with an error like "no Functions runtime available that matches the version specified in the project." To fetch the latest templates and runtime, go through the experience to create a new function project. When you get to the version and template select screen, wait for Visual Studio to complete fetching the latest templates. After the latest .NET Core 3 templates are available and displayed, you can run and debug any project configured for version 3.x.
+You can open an existing function targeting 2.x and move to 3.x by editing the `.csproj` file and updating the values above. Visual Studio manages runtime versions automatically for you based on project metadata. However, if you've never created a 3.x app before, it's possible that Visual Studio doesn't yet have the templates and runtime for 3.x on your machine. This issue may present itself with an error like "no Functions runtime available that matches the version specified in the project." To fetch the latest templates and runtime, go through the experience to create a new function project. When you get to the version and template selection screen, wait for Visual Studio to complete fetching the latest templates. After the latest .NET Core 3 templates are available and displayed, you can run and debug any project configured for version 3.x.
> [!IMPORTANT] > Version 3.x functions can only be developed in Visual Studio if using Visual Studio version 16.4 or newer.
You can open an existing function targeting 2.x and move to 3.x by editing the `
[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 3.x, install version 3.x of the Core Tools. Version 2.x development requires version 2.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
-For Visual Studio Code development, you may also need to update the user setting for the `azureFunctions.projectRuntime` to match the version of the tools installed. This setting also updates the templates and languages used during function app creation. To create apps in `~3` you would update the `azureFunctions.projectRuntime` user setting to `~3`.
+For Visual Studio Code development, you may also need to update the user setting for the `azureFunctions.projectRuntime` to match the version of the tools installed. This setting also updates the templates and languages used during function app creation. To create apps in `~3`, you update the `azureFunctions.projectRuntime` user setting to `~3`.
![Azure Functions extension runtime setting](./media/functions-versions/vs-code-version-runtime.png)
Starting with version 2.x, the runtime uses a new [binding extensibility model](
* A lighter execution environment, where only the bindings in use are known and loaded by the runtime.
-With the exception of HTTP and timer triggers, all bindings must be explicitly added to the function app project, or registered in the portal. For more information, see [Register binding extensions](./functions-bindings-expressions-patterns.md).
+Except for HTTP and timer triggers, all bindings must be explicitly added to the function app project, or registered in the portal. For more information, see [Register binding extensions](./functions-bindings-expressions-patterns.md).
The following table shows which bindings are supported in each runtime version.
azure-functions Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/start-stop-vms/deploy.md
After the Start/Stop deployment completes, perform the following steps to enable
To manage the automation method to control the start and stop of your VMs, you configure one or more of the included logic apps based on your requirements. -- Scheduled - Start and stop actions are based on a schedule you specify against Azure Resource Manager and classic VMs.**ststv2_vms_Scheduled_start** and **ststv2_vms_Scheduled_stop** configure the scheduled start and stop.
+- Scheduled - Start and stop actions are based on a schedule you specify against Azure Resource Manager and classic VMs. **ststv2_vms_Scheduled_start** and **ststv2_vms_Scheduled_stop** configure the scheduled start and stop.
- Sequenced - Start and stop actions are based on a schedule targeting VMs with pre-defined sequencing tags. Only two named tags are supported - **sequencestart** and **sequencestop**. **ststv2_vms_Sequenced_start** and **ststv2_vms_Sequenced_stop** configure the sequenced start and stop.
azure-monitor Availability Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-alerts.md
Title: Set up availability alerts with Azure Application Insights | Microsoft Docs
+ Title: Set up availability alerts with Application Insights - Azure Monitor | Microsoft Docs
description: Learn how to set up web tests in Application Insights. Get alerts if a website becomes unavailable or responds slowly. Last updated 06/19/2019
Alerts are now automatically enabled by default, but in order to fully configure
![Screenshot shows the Rules management page where you can edit the rule.](./media/availability-alerts/set-action-group.png)
+### Alert frequency
+
+Availability alerts that are created through this experience are state-based. When the alert criteria are met, a single alert is generated when the website is detected as unavailable. If the website is still down the next time the alert criteria are evaluated, no new alert is generated.
+
+For example, if your website is down for an hour and you've set up an email alert with an evaluation frequency of 15 minutes, you'll only receive an email when the website goes down and another email when it's back up. You won't receive continuous alerts every 15 minutes reminding you that the website is still unavailable.
+ > [!NOTE]
-> Availability alerts created through this experience are state-based. This means that when the alert criteria is met a single alert is generated when the site is detected as unavailable. If the site is still down the next time the alert criteria is evaluated this won't generate a new alert. So if your site was down for an hour and you had setup an e-mail alert, you would only receive an e-mail when the site went down, and a subsequent e-mail when the site was back up. You would not receive continuous alerts reminding you that the site was still unavailable.
+> If you don't want to receive notifications when your website is down for only a short period of time (for example, during maintenance), you can change the evaluation frequency to a higher value than the expected downtime, up to 15 minutes. You can also increase the alert location threshold so that an alert is triggered only if the website is down in a certain number of regions.
+
+To make changes to location threshold, aggregation period, and test frequency, select the condition on the edit page of the alert rule, which will open the **Configure signal logic** window.
+
+![Screenshot showing Configure signal logic.](./media/availability-alerts/configure-signal-logic.png)
+
+> [!TIP]
+> For longer downtimes, we recommend temporarily disabling the alert rule or creating a custom rule as shown below. Both options give you more flexibility to account for the downtime.
+
+### Custom alert rule
+
+Auto-generated alerts from availability tests have a limited set of options to change the logic. If you need advanced capabilities, you can create a custom alert rule from the **Alerts** tab. Select **Create** > **Alert rule**. Choose **Metrics** for **Signal type** to show all available signals, and select **Availability**.
+
+A custom alert rule offers higher values for aggregation period (up to 24 hours instead of 6 hours) and test frequency (up to 1 hour instead of 15 minutes). It also adds options to further define the logic by selecting different operators, aggregation types, and threshold values.
+
+![Screenshot showing Create custom alert.](./media/availability-alerts/create-custom-alert.png)
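+
+If you prefer to script the custom rule instead of using the portal, you can create equivalent logic with Azure PowerShell. The following is a minimal sketch only: the resource group, Application Insights resource, and action group IDs are placeholders, and it assumes the availability metric exposed by Application Insights (`availabilityResults/availabilityPercentage`) with a one-hour aggregation period and a 15-minute frequency.
+
+```azurepowershell
+# Placeholder names and IDs; substitute values from your environment.
+$criteria = New-AzMetricAlertRuleV2Criteria `
+    -MetricName "availabilityResults/availabilityPercentage" `
+    -TimeAggregation Average `
+    -Operator LessThan `
+    -Threshold 90
+
+Add-AzMetricAlertRuleV2 `
+    -Name "my-availability-alert" `
+    -ResourceGroupName "<RESOURCE_GROUP_NAME>" `
+    -TargetResourceId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/microsoft.insights/components/<APP_INSIGHTS_NAME>" `
+    -Condition $criteria `
+    -WindowSize "01:00:00" `
+    -Frequency "00:15:00" `
+    -ActionGroupId "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/microsoft.insights/actionGroups/<ACTION_GROUP_NAME>" `
+    -Severity 1
+```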
### Alert on X out of Y locations reporting failures
azure-monitor Convert Classic Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/convert-classic-resource.md
Title: Migrate an Azure Monitor Application Insights classic resource to a workspace-based resource | Microsoft Docs
-description: Learn about the steps required to upgrade your Azure Monitor Application Insights classic resource to the new workspace-based model.
+ Title: Migrate an Application Insights classic resource to a workspace-based resource - Azure Monitor | Microsoft Docs
+description: Learn about the steps required to upgrade your Application Insights classic resource to the new workspace-based model.
Last updated 09/23/2020
When you migrate to a workspace-based resource, no data is transferred from your
Your classic resource data will persist and be subject to the retention settings on your classic Application Insights resource. All new data ingested post migration will be subject to the [retention settings](../logs/data-retention-archive.md) of the associated Log Analytics workspace, which also supports [different retention settings by data type](../logs/data-retention-archive.md#set-retention-and-archive-policy-by-table). The migration process is **permanent, and cannot be reversed**. Once you migrate a resource to workspace-based Application Insights, it will always be a workspace-based resource. However, once you migrate you're able to change the target workspace as often as needed.
-If you don't need to migrate an existing resource, and instead want to create a new workspace-based Application Insights resource use the [workspace-based resource creation guide](create-workspace-resource.md).
+If you don't need to migrate an existing resource and instead want to create a new workspace-based Application Insights resource, use the [workspace-based resource creation guide](create-workspace-resource.md).
## Pre-requisites
To write queries against the [new workspace-based table structure/schema](#works
To ensure the queries successfully run, validate that the query's fields align with the [new schema fields](#appmetrics).
-If you have multiple Application Insights resources store their telemetry in one Log Analytics workspace but you only want to query data from one specific Application Insights resource, you have two options:
+If you have multiple Application Insights resources that store their telemetry in one Log Analytics workspace, but you only want to query data from one specific Application Insights resource, you have two options:
- Option 1: Go to the desired Application Insights resource and open the **Logs** tab. All queries from this tab will automatically pull data from the selected Application Insights resource. - Option 2: Go to the Log Analytics workspace that you configured as the destination for your Application Insights telemetry and open the **Logs** tab. To query data from a specific Application Insights resource, filter for the built-in ```_ResourceId``` property that is available in all application specific tables.
Legacy table: customMetrics
|valueCount|int|ValueCount|int| |valueMax|real|ValueMax|real| |valueMin|real|ValueMin|real|
-|valueStdDev|real|ValueStdDev|real|
|valueSum|real|ValueSum|real|
+> [!NOTE]
+> Older versions of Application Insights SDKs used to report standard deviation (valueStdDev) in the metrics pre-aggregation. Because it saw little adoption in metrics analysis, the field was removed and is no longer aggregated by the SDKs. If the value is received by the Application Insights data collection endpoint, it's dropped during ingestion and isn't sent to the Log Analytics workspace. If you're interested in using standard deviation in your analysis, we recommend using queries against Application Insights raw events.
+ #### AppPageViews Legacy table: pageViews
azure-monitor Export Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/export-telemetry.md
Want to keep your telemetry for longer than the standard retention period? Or process it in some specialized way? Continuous Export is ideal for this purpose. The events you see in the Application Insights portal can be exported to storage in Microsoft Azure in JSON format. From there, you can download your data and write whatever code you need to process it. > [!IMPORTANT]
-> * Continuous export has been deprecated and is only supported for classic Application Insights resources.
+> * On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.
> * When [migrating to a workspace-based Application Insights resource](convert-classic-resource.md), you must use [diagnostic settings](#diagnostic-settings-based-export) for exporting telemetry. All [workspace-based Application Insights resources](./create-workspace-resource.md) must use [diagnostic settings](./create-workspace-resource.md#export-telemetry). > * Diagnostic settings export may increase costs. ([more information](export-telemetry.md#diagnostic-settings-based-export))
Continuous Export is supported in the following regions:
Continuous Export **does not support** the following Azure storage features/configurations:
-* Use of [VNET/Azure Storage firewalls](../../storage/common/storage-network-security.md) in conjunction with Azure Blob storage.
+* Use of [VNET/Azure Storage firewalls](../../storage/common/storage-network-security.md) with Azure Blob storage.
* [Azure Data Lake Storage Gen2](../../storage/blobs/data-lake-storage-introduction.md).
On larger scales, consider [HDInsight](https://azure.microsoft.com/services/hdin
## Diagnostic settings based export
-Diagnostic settings export is preferred because it provides additional features.
+Diagnostic settings export is preferred because it provides extra features.
> [!div class="checklist"] > * Azure storage accounts with virtual networks, firewalls, and private links > * Export to Event Hubs
Diagnostic settings export further differs from continuous export in the followi
> [!IMPORTANT] > Additional costs may be incurred due to an increase in calls to the destination, such as a storage account.
-To migrate to diagnostic settings-based export:
+To migrate to diagnostic settings export:
1. Disable current continuous export. 2. [Migrate application to workspace-based](convert-classic-resource.md).
azure-monitor Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ip-addresses.md
You need to open some outgoing ports in your server's firewall to allow the Appl
| Purpose | URL | IP | Ports | | | | | |
-| Telemetry |dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com<br/>*.in.applicationinsights.azure.com | | 443 |
-| Live Metrics Stream | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com|23.96.28.38<br/>13.92.40.198<br/>40.112.49.101<br/>40.117.80.207<br/>157.55.177.6<br/>104.44.140.84<br/>104.215.81.124<br/>23.100.122.113| 443 |
+| Telemetry | dc.applicationinsights.azure.com<br/>dc.applicationinsights.microsoft.com<br/>dc.services.visualstudio.com<br/>*.in.applicationinsights.azure.com<br/><br/> || 443 |
+| Live Metrics | live.applicationinsights.azure.com<br/>rt.applicationinsights.microsoft.com<br/>rt.services.visualstudio.com<br/><br/>{region}.livediagnostics.monitor.azure.com<br/>*Example for {region}: westus2*<br/><br/> |20.49.111.32/29<br/>13.73.253.112/29| 443 |
+
+> [!NOTE]
+> These addresses are listed by using Classless Interdomain Routing notation. As an example, an entry like `51.144.56.112/28` is equivalent to 16 IPs that start at `51.144.56.112` and end at `51.144.56.127`.
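+
+As a quick illustration of that arithmetic, an IPv4 prefix of length *N* covers 2^(32-N) addresses, which you can check in PowerShell:
+
+```PowerShell
+# A /28 prefix covers 2^(32-28) = 16 IPv4 addresses
+$prefixLength = 28
+[math]::Pow(2, 32 - $prefixLength)   # returns 16
+```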
## Status Monitor
Open port 80 (HTTP) and port 443 (HTTPS) for incoming traffic from these address
### IP addresses
-If you're looking for the actual IP addresses so that you can add them to the list of allowed IPs in your firewall, download the JSON file that describes Azure IP ranges. These files contain the most up-to-date information. For Azure public cloud, you might also look up the IP address ranges by location using the following table.
+If you're looking for the actual IP addresses so that you can add them to the list of allowed IPs in your firewall, download the JSON file that describes Azure IP ranges. These files contain the most up-to-date information. After you download the appropriate file, open it by using your favorite text editor. Search for **ApplicationInsightsAvailability** to go straight to the section of the file that describes the service tag for availability tests.
-After you download the appropriate file, open it by using your favorite text editor. Search for **ApplicationInsightsAvailability** to go straight to the section of the file that describes the service tag for availability tests.
-
-> [!NOTE]
-> These addresses are listed by using Classless Interdomain Routing notation. As an example, an entry like `51.144.56.112/28` is equivalent to 16 IPs that start at `51.144.56.112` and end at `51.144.56.127`.
+For Azure public cloud, you need to allow both the global IP ranges and the ones specific to the region of your Application Insights resource that receives live data. You can find the global IP ranges in the [Outgoing ports](#outgoing-ports) table at the top of this document, and the regional IP ranges in the [Addresses grouped by region](#addresses-grouped-by-region-azure-public-cloud) table below.
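+
+If you'd rather extract these prefixes from the downloaded JSON file with a script than search it by hand, a minimal PowerShell sketch such as the following works; the file name is a placeholder for whichever service tags file you downloaded:
+
+```PowerShell
+# List the address prefixes for the ApplicationInsightsAvailability service tag
+$serviceTags = Get-Content -Path ".\ServiceTags_Public.json" -Raw | ConvertFrom-Json
+$availability = $serviceTags.values | Where-Object { $_.name -eq "ApplicationInsightsAvailability" }
+$availability.properties.addressPrefixes
+```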
#### Azure public cloud
Download [US Government cloud IP addresses](https://www.microsoft.com/download/d
Download [China cloud IP addresses](https://www.microsoft.com/download/details.aspx?id=57062).
-#### Addresses grouped by location (Azure public cloud)
-
-```
-Australia East
-20.40.124.176/28
--
-Brazil South
-191.233.26.176/28
--
-France Central (Formerly France South)
-20.40.129.96/28
--
-France Central
-20.40.129.32/28
--
-East Asia
-52.229.216.48/28
--
-North Europe
-52.158.28.64/28
--
-Japan East
-52.140.232.160/28
--
-West Europe
-51.144.56.96/28
--
-UK South
-51.105.9.128/28
--
-UK West
-20.40.104.96/28
--
-Southeast Asia
-52.139.250.96/28
--
-West US
-40.91.82.48/28
--
-Central US
-13.86.97.224/28
--
-North Central US
-23.100.224.16/28
--
-South Central US
-20.45.5.160/28
-
-East US
-20.42.35.32/28
--
-```
+#### Addresses grouped by region (Azure public cloud)
+
+| Continent/Country | Region | IP |
+| | | |
+|Asia|East Asia|52.229.216.48/28<br/>20.189.111.16/29|
+||Southeast Asia|52.139.250.96/28<br/>23.98.106.152/29|
+|Australia|Australia Central|20.37.227.104/29<br/><br/>|
+||Australia Central 2|20.53.60.224/31<br/><br/>|
+||Australia East|20.40.124.176/28<br/>20.37.198.232/29|
+||Australia Southeast|20.42.230.224/29<br/><br/>|
+|Brazil|Brazil South|191.233.26.176/28<br/>191.234.137.40/29|
+||Brazil Southeast|20.206.0.196/31<br/><br/>|
+|Canada|Canada Central|52.228.86.152/29<br/><br/>|
+||Canada East|52.242.40.208/31<br/><br/>|
+|Europe|North Europe|52.158.28.64/28<br/>20.50.68.128/29|
+||West Europe|51.144.56.96/28<br/>40.113.178.32/29|
+|France|France Central|20.40.129.32/28<br/>20.43.44.216/29|
+||France South|20.40.129.96/28<br/>52.136.191.12/31|
+|Germany|Germany North|51.116.75.92/31<br/><br/>|
+||Germany West Central|20.52.95.50/31<br/><br/>|
+|India|Central India|52.140.108.216/29<br/><br/>|
+||South India|20.192.153.106/31<br/><br/>|
+||West India|20.192.84.164/31<br/><br/>|
+||Jio India Central|20.192.50.200/29<br/><br/>|
+||Jio India West|20.193.194.32/29<br/><br/>|
+|Israel|Israel Central|20.217.44.250/31<br/><br/>|
+|Japan|Japan East|52.140.232.160/28<br/>20.43.70.224/29|
+||Japan West|20.189.194.102/31<br/><br/>|
+|Korea|Korea Central|20.41.69.24/29<br/><br/>|
+|Norway|Norway East|51.120.235.248/29<br/><br/>|
+||Norway West|51.13.143.48/31<br/><br/>|
+|Poland|Poland Central|20.215.4.250/31<br/><br/>|
+|Qatar|Qatar Central|20.21.39.224/29<br/><br/>|
+|South Africa|South Africa North|102.133.219.136/29<br/><br/>|
+||South Africa West|102.37.86.196/31<br/><br/>|
+|Sweden|Sweden Central|51.12.25.192/29<br/><br/>|
+||Sweden South|51.12.17.128/29<br/><br/>|
+|Switzerland|Switzerland North|51.107.52.200/29<br/><br/>|
+||Switzerland West|51.107.148.8/29<br/><br/>|
+|Taiwan|Taiwan North|51.53.28.214/31<br/><br/>|
+||Taiwan Northwest|51.53.172.214/31<br/><br/>|
+|United Arab Emirates|UAE Central|20.45.95.68/31<br/><br/>|
+||UAE North|20.38.143.44/31<br/>40.120.87.204/31|
+|United Kingdom|UK South|51.105.9.128/28<br/>51.104.30.160/29|
+||UK West|20.40.104.96/28<br/>51.137.164.200/29|
+|United States|Central US|13.86.97.224/28<br/>20.40.206.232/29|
+||East US|20.42.35.32/28<br/>20.49.111.32/29|
+||East US 2|20.49.102.24/29<br/><br/>|
+||North Central US|23.100.224.16/28<br/>20.49.114.40/29|
+||South Central US|20.45.5.160/28<br/>13.73.253.112/29|
+||West Central US|52.150.154.24/29<br/><br/>|
+||West US|40.91.82.48/28<br/>52.250.228.8/29|
+||West US 2|40.64.134.128/29<br/><br/>|
+||West US 3|20.150.241.64/29<br/><br/>|
### Discovery API
azure-monitor Activity Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/activity-log.md
description: View the Azure Monitor activity log and send it to Azure Monitor Lo
Previously updated : 09/09/2021 Last updated : 07/01/2022
You can also access activity log events by using the following methods:
- Use log queries to perform complex analysis and gain deep insights on activity log entries. - Use log alerts with Activity entries for more complex alerting logic. - Store activity log entries for longer than the activity log retention period.-- Incur no data ingestion charges for activity log data stored in a Log Analytics workspace.-- Incur no data retention charges for the first 90 days for activity log data stored in a Log Analytics workspace.
+- Incur no data ingestion or retention charges for activity log data stored in a Log Analytics workspace.
+- The default retention period in Log Analytics is 90 days.
Select **Export Activity Logs** to send the activity log to a Log Analytics workspace.
azure-monitor Data Collection Rule Edit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/data-collection-rule-edit.md
+
+ Title: Tutorial - Editing Data Collection Rules
+description: This article describes how to make changes to a Data Collection Rule definition by using command-line tools and simple API calls.
++++ Last updated : 05/31/2022++
+# Tutorial: Editing Data Collection Rules
+This tutorial describes how to use command-line tools to edit the definition of a Data Collection Rule (DCR) that has already been provisioned.
+
+In this tutorial, you learn how to:
+> [!div class="checklist"]
+> * Leverage existing portal functionality to pre-create DCRs
+> * Get the content of a Data Collection Rule using ARM API call
+> * Apply changes to a Data Collection Rule using ARM API call
+> * Automate the process of DCR update using PowerShell scripts
+
+## Prerequisites
+To complete this tutorial, you need the following:
+- A Log Analytics workspace where you have at least [contributor rights](../logs/manage-access.md#azure-rbac).
+- [Permissions to create Data Collection Rule objects](data-collection-rule-overview.md#permissions) in the workspace.
+- An up-to-date version of PowerShell. Using Azure Cloud Shell is recommended.
+
+## Overview of tutorial
+While going through the wizard on the portal is the simplest way to set up the ingestion of your custom data to Log Analytics, in some cases you might want to update your Data Collection Rule later to:
+- Change data collection settings (for example, the Data Collection Endpoint associated with the DCR)
+- Update data parsing or filtering logic for your data stream
+- Change the data destination (for example, send data to an Azure table, because this option isn't directly offered as part of the DCR-based custom log wizard)
+
+In this tutorial, you'll first set up ingestion of a custom log. Then you'll modify the KQL transformation for your custom log to include additional filtering and apply the changes to your DCR. Finally, you'll combine all editing operations into a single PowerShell script, which can be used to edit any DCR for any of the reasons mentioned above.
+
+## Set up new custom log
+Start by setting up a new custom log. Follow [Tutorial: Send custom logs to Azure Monitor Logs using the Azure portal (preview)]( ../logs/tutorial-custom-logs.md). Note the resource ID of the DCR created.
+
+## Retrieve DCR content
+To update the DCR, we first retrieve its content and save it as a file that can then be edited.
+1. Click the **Cloud Shell** button in the Azure portal and ensure the environment is set to **PowerShell**.
+
+ :::image type="content" source="../logs/media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" lightbox="../logs/media/tutorial-ingestion-time-transformations-api/open-cloud-shell.png" alt-text="Screenshot of opening cloud shell":::
+
+2. Execute the following commands to retrieve the DCR content and save it to a file. Replace `<ResourceId>` with the DCR resource ID and `<FilePath>` with the name of the file in which to store the DCR.
+
+ ```PowerShell
+ $ResourceId = "<ResourceId>" # Resource ID of the DCR to edit
+ $FilePath = "<FilePath>" # Store DCR content in this file
+ $DCR = Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method GET
+ $DCR.Content | ConvertFrom-Json | ConvertTo-Json -Depth 20 | Out-File -FilePath $FilePath
+ ```
+## Edit DCR
+Now that the DCR content is stored as a JSON file, you can use an editor of your choice to make changes to the DCR. If you're using the Cloud Shell environment, you may [prefer to download the file](../../cloud-shell/using-the-shell-window.md#upload-and-download-files) first.
+
+Alternatively, you can use the code editor supplied with the environment. For example, if you saved your DCR in a file named `temp.dcr` on your Cloud Drive, you could use the following command to open the DCR for editing right in the Cloud Shell window:
+```PowerShell
+code "temp.dcr"
+```
+
+Let's modify the KQL transformation within the DCR to drop rows where RequestType is anything but "GET".
+1. Open the file created in the previous part for editing using an editor of your choice.
+2. Locate the line containing the "transformKql" attribute, which, if you followed the tutorial for custom log creation, should look similar to this:
+ ``` JSON
+ "transformKql": " source\n | extend TimeGenerated = todatetime(Time)\n | parse RawData with \n ClientIP:string\n ' ' *\n ' ' *\n ' [' * '] \"' RequestType:string\n \" \" Resource:string\n \" \" *\n '\" ' ResponseCode:int\n \" \" *\n | where ResponseCode != 200\n | project-away Time, RawData\n"
+ ```
+3. Modify the KQL transformation to include an additional filter on RequestType:
+ ``` JSON
+ "transformKql": " source\n | where RawData contains \"GET\"\n | extend TimeGenerated = todatetime(Time)\n | parse RawData with \n ClientIP:string\n ' ' *\n ' ' *\n ' [' * '] \"' RequestType:string\n \" \" Resource:string\n \" \" *\n '\" ' ResponseCode:int\n \" \" *\n | where ResponseCode != 200\n | project-away Time, RawData\n"
+ ```
+4. Save the file with modified DCR content.
+
+## Apply changes
+Our final step is to update the DCR back in the system. This is accomplished by a "PUT" HTTP call to the ARM API, with the updated DCR content sent in the HTTP request body.
+1. If you're using Azure Cloud Shell, save the file and close the embedded editor, or [upload the modified DCR file back to the Cloud Shell environment](../../cloud-shell/using-the-shell-window.md#upload-and-download-files).
+2. Execute the following commands to load the DCR content from the file and place an HTTP call to update the DCR in the system. Replace `<ResourceId>` with the DCR resource ID and `<FilePath>` with the name of the file modified in the previous part of the tutorial. You can omit the first two lines if you read and write to the DCR within the same PowerShell session.
+ ```PowerShell
+ $ResourceId = "<ResourceId>" # Resource ID of the DCR to edit
+ $FilePath = "<FilePath>" # File that contains the modified DCR content
+ $DCRContent = Get-Content $FilePath -Raw
+ Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method PUT -Payload $DCRContent
+ ```
+3. Upon a successful call, you should get a response with status code "200", indicating that your DCR is now updated.
+4. You can now navigate to your DCR and examine its content in the portal via the "JSON View" function, or you can repeat the first part of the tutorial to retrieve the DCR content into a file.
+
+## Putting everything together
+Now that we know how to read and update the content of a DCR, let's put everything together into a utility script that can perform both operations together.
+
+```PowerShell
+param ([Parameter(Mandatory=$true)] $ResourceId)
+
+# get DCR content and put into a file
+$FilePath = "temp.dcr"
+$DCR = Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method GET
+$DCR.Content | ConvertFrom-Json | ConvertTo-Json -Depth 20 | Out-File $FilePath
+
+# Open DCR in code editor
+code $FilePath | Wait-Process
+
+#Wait for confirmation to apply changes
+$Output = Read-Host "Apply changes to DCR (Y/N)? "
+if ("Y" -eq $Output.toupper())
+{
+ #write DCR content back from the file
+ $DCRContent = Get-Content $FilePath -Raw
+ Invoke-AzRestMethod -Path ("$ResourceId"+"?api-version=2021-09-01-preview") -Method PUT -Payload $DCRContent
+}
+
+#Delete temporary file
+Remove-Item $FilePath
+```
+### How to use this utility
+
+ Assuming you saved the script as a file named `DCREditor.ps1` and need to modify a Data Collection Rule with a resource ID of `/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/foo/providers/Microsoft.Insights/dataCollectionRules/bar`, you can accomplish this by running the following command:
+
+```PowerShell
+.\DCREditor.ps1 "/subscriptions/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx/resourceGroups/foo/providers/Microsoft.Insights/dataCollectionRules/bar"
+```
+
+The DCR content opens in the embedded code editor. Once editing is complete, entering "Y" at the script prompt applies the changes back to the DCR.
+
+## Next steps
+
+- [Read more about data collection rules and options for creating them.](data-collection-rule-overview.md)
azure-netapp-files Use Dfs N And Dfs Root Consolidation With Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/use-dfs-n-and-dfs-root-consolidation-with-azure-netapp-files.md
+
+ Title: Use DFS-N and DFS Root Consolidation with Azure NetApp Files | Microsoft Docs
+description: Learn how to configure DFS-N and DFS Root Consolidation with Azure NetApp Files
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
+ Last updated : 06/30/2022+++
+# How to use DFS Namespaces with Azure NetApp Files
+
+[Distributed File Systems Namespaces](/windows-server/storage/dfs-namespaces/dfs-overview), commonly referred to as DFS Namespaces or DFS-N, is a Windows Server server role that is widely used to simplify the deployment and maintenance of SMB file shares in production. DFS Namespaces is a storage namespace virtualization technology, which means that it enables you to provide a layer of indirection between the UNC path of your file shares and the actual file shares themselves. DFS Namespaces works with SMB file shares, agnostic of where those file shares are hosted: it can be used with SMB shares hosted on an on-premises Windows File Server with or without Azure File Sync, Azure file shares directly, SMB file shares hosted in Azure NetApp Files, and even with file shares hosted in other clouds.
+
+At its core, DFS Namespaces provides a mapping between a user-friendly UNC path, like `\\contoso\shares\ProjectX`, and the underlying UNC path of the SMB share, like `\\Server01-Prod\ProjectX` or `\\anf-xxxx\projectx`. When the end user wants to navigate to their file share, they type in the user-friendly UNC path, but their SMB client accesses the underlying SMB path of the mapping. You can also extend this basic concept to take over an existing file server name, such as `\\MyServer\ProjectX`, using DFS root consolidation. You can use these capabilities to achieve the following scenarios:
+
+- **Provide a migration-proof name for a logical set of data**
+In this example, you have a mapping like `\\contoso\shares\Engineering` that maps to `\\OldServer\Engineering`. When you complete your migration to Azure NetApp Files, you can change your mapping so your user-friendly UNC path points to `\\anf-xxxx\engineering`. When an end user accesses the user-friendly UNC path, they will be seamlessly redirected to the Azure NetApp Files share path.
+
+- **Extend a logical set of data across size, IO, or other scale thresholds**
+This is common when dealing with corporate shares, where different folders have different performance requirements, or with scratch shares, where users get arbitrary space to handle temporary data needs. With DFS Namespaces, you stitch together multiple folders into a cohesive namespace. For example, `\\contoso\shares\engineering` maps to `\\anf-xxxx\engineering` (Azure NetApp Files, ultra tier), `\\contoso\shares\sales` maps to `\\anf-yyyy\sales` (Azure NetApp Files, standard tier), and so on.
+
+- **Preserve the logical name of one or more legacy file servers after the data has been migrated to Azure NetApp Files**
+Using DFS-N with root consolidation allows you to take over the hostname and share paths exactly as they are. This leaves document shortcuts, embedded document links, and UNC paths unchanged after the migration.
+
+If you already have a DFS Namespace in place, no special steps are required to use it with Azure NetApp Files. If you're accessing your Azure NetApp Files share from on-premises, normal networking considerations apply; see [Guidelines for Azure NetApp Files network planning](./azure-netapp-files-network-topologies.md) for more information.
+
+## Applies to
+
+| File share type | SMB | NFS | dual-protocol* |
+|-|:-:|:-:|:-:|
+| Azure NetApp Files | ![Yes](../media/azure-netapp-files/icons/yes-icon.png) | ![No](../media/azure-netapp-files/icons/no-icon.png) | ![Yes](../media/azure-netapp-files/icons/yes-icon.png) |
+
+> [!IMPORTANT]
+> This functionality applies to the SMB side of Azure NetApp Files dual-protocol volumes.
+
+## Namespace types
+
+DFS Namespaces provides three namespace types:
+
+- **Domain-based namespace**:
+A namespace hosted as part of your Windows Server AD domain. Namespaces hosted as part of AD will have a UNC path containing the name of your domain, for example, `\\contoso.com\shares\myshare`, if your domain is `contoso.com`. Domain-based namespaces support larger scale limits and built-in redundancy through AD. Domain-based namespaces can't be a clustered resource on a failover cluster.
+
+- **Standalone namespace**:
+A namespace hosted on an individual server or a Windows Server failover cluster, not hosted as part of Windows Server AD. Standalone namespaces will have a name based on the name of the standalone server, such as `\\MyStandaloneServer\shares\myshare`, where your standalone server is named `MyStandaloneServer`. Standalone namespaces support lower scale targets than domain-based namespaces but can be hosted as a clustered resource on a failover cluster.
+
+- **Standalone namespace with root consolidation**:
+One or more namespaces hosted on an individual server or on a Windows Server failover cluster, not hosted as part of Windows Server AD. Standalone namespaces with root consolidation will have a UNC path that matches the name of the old file server you would like to take over, such as `\\oldserver`, where your namespace is named `#oldserver`. Standalone namespaces support lower scale targets than domain-based namespaces but can be hosted as a clustered resource on a Windows Server failover cluster.
+
+## Requirements
+
+To use DFS Namespaces with Azure NetApp Files, you must have the following resources:
+
+- An Active Directory domain. This can be hosted anywhere you like: an on-premises environment, an Azure virtual machine (VM), or even in another cloud.
+
+- A Windows Server that can host the namespace. For domain-based namespaces, a common deployment pattern is to use the Active Directory domain controller to host the namespaces; however, the namespaces can be set up from any server with the DFS Namespaces server role installed. DFS Namespaces are available on all supported Windows Server versions.
+
+- For namespace root consolidation, Active Directory domain controllers can't be used to host the namespace. You must use a dedicated standalone Windows Server or a Windows Server failover cluster to host the namespace(s).
+
+- One or more Azure NetApp Files SMB file shares hosted in a domain-joined environment.
+
+## Install the DFS Namespaces server role
+
+For all DFS Namespace types, the **DFS Namespaces** server role must be installed. If you are already using DFS Namespaces, you may skip these steps.
+
+# [GUI](#tab/windows-gui)
+
+1. Open **Server Manager**
+
+2. Select **Manage**
+
+3. Select **Add Roles and Features**.
+
+4. For the **Installation Type**, select **Role-based or feature-based installation**
+
+5. Click **Next**.
+
+6. For **Server Selection**, select the desired server(s) on which you would like to install the DFS Namespaces server role
+
+7. Click **Next**.
+
+8. In the **Server Roles** section, select and check the **DFS Namespaces** role from role list under **File and Storage Services** > **File and iSCSI Services**.
+
+![A screenshot of the **Add Roles and Features** wizard with the **DFS Namespaces** role selected.](../media/azure-netapp-files/azure-netapp-files-dfs-namespaces-install.png)
+
+9. Click **Next** until the **Install** button is available
+
+10. Click **Install**
+
+# [PowerShell](#tab/azure-powershell)
+
+From an elevated PowerShell session (or using PowerShell remoting), execute the following command:
+
+```PowerShell
+Install-WindowsFeature -Name "FS-DFS-Namespace", "RSAT-DFS-Mgmt-Con"
+```
+++
+## Configure a DFS-N Namespace with Azure NetApp Files SMB volumes
+
+If you do not need to take over an existing legacy file server, a domain-based namespace is recommended. Domain-based namespaces are hosted as part of AD and will have a UNC path containing the name of your domain, for example, `\\contoso.com\corporate\finance`, if your domain is `contoso.com`. An example of this architecture is shown in the graphic below.
+
+![A screenshot of the architecture for DFS-N with Azure NetApp Files volumes.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-architecture-example.png)
++
+>[!IMPORTANT]
+>If you wish to use DFS Namespaces to take over an existing server name with root consolidation, skip to [Take over existing server names with root consolidation](#take-over-existing-server-names-with-root-consolidation).
+
+### Create a namespace
+
+The basic unit of management for DFS Namespaces is the namespace. The namespace root, or name, is the starting point of the namespace, such that in the UNC path `\\contoso.com\corporate\`, the namespace root is `corporate`.
+
+# [GUI](#tab/windows-gui)
+
+1. From a domain controller, open the **DFS Management** console. This can be found by selecting the **Start** button and typing **DFS Management**. The resulting management console has two sections **Namespaces** and **Replication**, which refer to DFS Namespaces and DFS Replication (DFS-R) respectively.
+2. Select the **Namespaces** section, and select the **New Namespace** button (you may also right-click on the **Namespaces** section). The resulting **New Namespace Wizard** walks you through creating a namespace.
+
+3. The first section in the wizard requires you to pick the DFS Namespace server to host the namespace. Multiple servers can host a namespace, but you will need to set up DFS Namespaces with one server at a time. Enter the name of the desired DFS Namespace server and select **Next**.
+
+4. In the **Namespace Name and Settings** section, you can enter the desired name of your namespace and select **Next**.
+
+5. The **Namespace Type** section allows you to choose between a **Domain-based namespace** and a **Stand-alone namespace**. Select a domain-based namespace. Refer to [namespace types](#namespace-types) above for more information on choosing between namespace types.
+
+![A screenshot of selecting domain-based namespace **New Namespace Wizard**.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-namespace-type.png)
+
+6. Select **Create** to create the namespace and **Close** when the dialog completes.
+
+# [PowerShell](#tab/azure-powershell)
+
+From a PowerShell session on the DFS Namespace server, execute the following PowerShell commands, populating `$namespace` and `$type` with the relevant values for your environment:
+
+```PowerShell
+# Variables
+$namespace = "corporate"
+$type = "DomainV2"
+
+$dfsnServer = $env:ComputerName
+$namespaceServer = Get-CimInstance -ClassName "Win32_ComputerSystem" | `
+Select-Object -ExpandProperty Domain
++
+# Create share for DFS-N namespace
+$smbShare = "C:\DFSRoots\$namespace"
+if (!(Test-Path -Path $smbShare)) { New-Item -Path $smbShare -ItemType Directory }
+New-SmbShare -Name $namespace -Path $smbShare -FullAccess Everyone
+
+# Create DFS-N namespace
+Import-Module -Name DFSN
+$namespacePath = "\\$namespaceServer\$namespace"
+$targetPath = "\\$dfsnServer\$namespace"
+New-DfsnRoot -Path $namespacePath -TargetPath $targetPath -Type $type
+```
+++
+### Configure folders and folder targets
+
+For a namespace to be useful, it must have folders and folder targets. Each folder can have one or more folder targets, which are pointers to the SMB file share(s) that host that content. When users browse a folder with folder targets, the client computer receives a referral that transparently redirects the client computer to one of the folder targets. You can also have folders without folder targets to add structure and hierarchy to the namespace.
+
+You can think of DFS Namespaces folders as analogous to file shares.
+
+# [GUI](#tab/windows-gui)
+
+1. In the DFS Management console, select the namespace you just created and select **New Folder**. The resulting **New Folder** dialog will allow you to create both the folder and its targets.
+
+![A screenshot of the **New Folder** domain-based dialog.](../media/azure-netapp-files/azure-netapp-files-dfs-domain-folder-targets.png)
+
+2. In the textbox labeled **Name**, provide the name of the share.
+
+3. Select **Add...** to add folder targets for this folder. The resulting **Add Folder Target** dialog provides a textbox labeled **Path to folder target** where you can provide the UNC path to your Azure NetApp Files SMB share.
+
+4. Select **OK** on the **Add Folder Target** dialog.
+
+5. Select **OK** on the **New Folder** dialog to create the folder and folder targets.
+
+# [PowerShell](#tab/azure-powershell)
+
+```PowerShell
+# Variables
+$shareName = "finance"
+$targetUNC = "\\anf-xxxx.contoso.com\finance"
+
+# Create folder and folder targets
+$sharePath = "$namespacePath\$shareName"
+New-DfsnFolder -Path $sharePath -TargetPath $targetUNC
+```
+++
+Now that you have created a namespace, a folder, and a folder target, you should be able to mount your file share through DFS Namespaces. The full path for your share should be `\\contoso.com\corporate\finance`.
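+
+If you'd like to verify the new path before sharing it with users, a quick optional check from a domain-joined client can confirm that the referral resolves and the folder target is reachable. The path below uses the example names from this article; substitute your own namespace and folder names.
+
+```PowerShell
+# Confirm the namespace path resolves and the folder target is reachable
+Test-Path -Path "\\contoso.com\corporate\finance"
+
+# From a server with the DFS Namespaces tools installed, list the folder targets
+Get-DfsnFolderTarget -Path "\\contoso.com\corporate\finance"
+```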
+
+## Take over existing server names with root consolidation
+
+An important use for DFS Namespaces is to take over an existing server name for the purposes of refactoring the physical layout of the file shares. For example, you may wish to consolidate file shares from multiple old file servers together on Azure NetApp Files volume(s) during a modernization migration. Traditionally, end-user familiarity and document linking limit your ability to consolidate file shares from disparate file servers together on one host, but the DFS Namespace root consolidation feature allows you to stand up a single server or failover cluster to take over multiple server names and route to the appropriate Azure NetApp Files share name(s).
+
+Although root consolidation applies to various datacenter migration scenarios, it's especially useful when adopting Azure NetApp Files shares, because those shares don't allow you to keep your existing on-premises server names.
+
+Root consolidation may only be used with standalone namespaces. If you already have an existing domain-based namespace for your file shares, you do not need to create a root consolidated namespace.
+
+This section outlines the steps to configure DFS Namespace root consolidation on a standalone server. For a highly available architecture, work with your Microsoft technical team to configure Windows Server failover clustering and an Azure Load Balancer as required. An example of a highly available architecture is shown in the graphic below.
+
+![A screenshot of the architecture for root consolidation with Azure NetApp Files.](../media/azure-netapp-files/azure-netapp-files-root-consolidation-architecture-example.png)
++
+### Enabling root consolidation
+
+Root consolidation can be enabled by setting the following registry keys from an elevated PowerShell session (or using PowerShell remoting) on the standalone DFS Namespace server or on the failover cluster.
+
+```PowerShell
+New-Item `
+ -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs" `
+ -Type Registry `
+ -ErrorAction SilentlyContinue
+New-Item `
+ -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters" `
+ -Type Registry `
+ -ErrorAction SilentlyContinue
+New-Item `
+ -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
+ -Type Registry `
+ -ErrorAction SilentlyContinue
+Set-ItemProperty `
+ -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
+ -Name "ServerConsolidationRetry" `
+ -Value 1
+```
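+
+To confirm the value was applied, you can read it back. This is an optional check, not a required configuration step.
+
+```PowerShell
+# Verify the root consolidation setting (the expected value is 1)
+Get-ItemProperty `
+    -Path "HKLM:SYSTEM\CurrentControlSet\Services\Dfs\Parameters\Replicated" `
+    -Name "ServerConsolidationRetry"
+```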
+
+### Creating DNS entries for existing file server names
+
+In order for DFS Namespaces to respond to existing file server names, **you must** create alias (CNAME) records for your existing file servers that point at the DFS Namespaces server name. The exact procedure for updating your DNS records depends on which DNS servers your organization uses and whether it uses custom tooling to automate DNS management. The following steps are shown for the DNS server included with Windows Server and automatically used by Windows AD. In this example, the DFS-N server name is `mydfscluster`.
+
+# [GUI](#tab/windows-gui)
+
+1. From a Windows DNS server, open the DNS management console.
+
+2. Navigate to the forward lookup zone for your domain. For example, if your domain is `contoso.com`, the forward lookup zone can be found under **Forward Lookup Zones** > **`contoso.com`** in the management console. The exact hierarchy shown in this dialog will depend on the DNS configuration for your network.
+
+3. Right-click on your forward lookup zone and select **New Alias (CNAME)**.
+
+4. In the resulting dialog, enter the short name for the file server you're replacing (the fully qualified domain name will be auto-populated in the textbox labeled **Fully qualified domain name**)
+
+5. In the textbox labeled **Fully qualified domain name (FQDN) for the target host**, enter the name of the DFS-N server you have set up. You can use the **Browse** button to help you select the server if desired.
+
+![A screenshot depicting the **New Resource Record** for a CNAME DNS entry.](../media/azure-netapp-files/azure-netapp-files-root-consolidation-cname.png)
+
+6. Select **OK** to create the CNAME record for your server.
+
+# [PowerShell](#tab/azure-powershell)
+
+On a Windows DNS server, open a PowerShell session (or use PowerShell remoting) to execute the following commands, populating `$oldServer` and `$dfsnServer` with the relevant values for your environment (`$domain` is populated automatically with the domain name, but you can also type it manually).
+
+```PowerShell
+# Variables
+$oldServer = "fileserver01"
+$domain = Get-CimInstance -ClassName "Win32_ComputerSystem" | `
+ Select-Object -ExpandProperty Domain
+$dfsnServer = "mydfscluster.$domain"
+
+# Create CNAME record
+Import-Module -Name DnsServer
+Add-DnsServerResourceRecordCName `
+ -Name $oldServer `
+ -HostNameAlias $dfsnServer `
+ -ZoneName $domain
+```
+++
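+
+If you want to confirm that the alias is in place before you continue, you can resolve the old server name from a client. This is an optional check; `fileserver01` and `contoso.com` are the example names used in this article.
+
+```PowerShell
+# Verify that the old server name now resolves to the DFS Namespace server
+Resolve-DnsName -Name "fileserver01.contoso.com" -Type CNAME
+```
+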
+### Create a namespace
+
+The basic unit of management for DFS Namespaces is the namespace. The namespace root, or name, is the starting point of the namespace, such that in the UNC path `\\contoso.com\Public\`, the namespace root is `Public`.
+
+To take over an existing server name with root consolidation, the name of the namespace should be the name of the server you want to take over, prepended with the `#` character. For example, if you wanted to take over an existing server named `MyServer`, you would create a DFS-N namespace called `#MyServer`. The PowerShell section below takes care of prepending the `#`, but if you create the namespace via the DFS Management console, you will need to prepend it yourself.
+
+# [GUI](#tab/windows-gui)
+
+1. Open the **DFS Management** console. This can be found by selecting the **Start** button and typing **DFS Management**. The resulting management console has two sections **Namespaces** and **Replication**, which refer to DFS Namespaces and DFS Replication (DFS-R) respectively.
+
+2. Select the **Namespaces** section, and select the **New Namespace** button (you may also right-click on the **Namespaces** section). The resulting **New Namespace Wizard** walks you through creating a namespace.
+
+3. The first section in the wizard requires you to pick the DFS Namespace server to host the namespace. Multiple servers can host a namespace, but you will need to set up DFS Namespaces with one server at a time. Enter the name of the desired DFS Namespace server and select **Next**.
+
+4. In the **Namespace Name and Settings** section, you can enter the desired name of your namespace and select **Next**.
+
+5. The **Namespace Type** section allows you to choose between a **Domain-based namespace** and a **Stand-alone namespace**. If you intend to use DFS Namespaces to preserve an existing file server/NAS device name, you should select the standalone namespace option. For any other scenarios, you should select a domain-based namespace. Refer to [namespace types](#namespace-types) above for more information on choosing between namespace types.
+
+6. Select the desired namespace type for your environment and select **Next**. The wizard will then summarize the namespace to be created.
+
+![A screenshot of selecting standalone namespace in the **New Namespace Wizard**.](../media/azure-netapp-files/azure-netapp-files-dfs-namespace-type.png)
+
+7. Select **Create** to create the namespace and **Close** when the dialog completes.
+
+# [PowerShell](#tab/azure-powershell)
+
+From a PowerShell session on the DFS Namespace server, execute the following PowerShell commands, populating `$namespace` and `$type` with the relevant values for your environment:
+
+```PowerShell
+# Variables
+$namespace = "#fileserver01"
+$type = "Standalone"
+
+$dfsnServer = $env:ComputerName
+$namespaceServer = $dfsnServer
+
+# Create share for DFS-N namespace
+$smbShare = "C:\DFSRoots\$namespace"
+if (!(Test-Path -Path $smbShare)) { New-Item -Path $smbShare -ItemType Directory }
+New-SmbShare -Name $namespace -Path $smbShare -FullAccess Everyone
+
+# Create DFS-N namespace
+Import-Module -Name DFSN
+$namespacePath = "\\$namespaceServer\$namespace"
+$targetPath = "\\$dfsnServer\$namespace"
+New-DfsnRoot -Path $namespacePath -TargetPath $targetPath -Type $type
+```
+++
+### Configure folders and folder targets
+
+For a namespace to be useful, it must have folders and folder targets. Each folder can have one or more folder targets, which are pointers to the SMB file share(s) that host that content. When users browse a folder with folder targets, the client computer receives a referral that transparently redirects the client computer to one of the folder targets. You can also have folders without folder targets to add structure and hierarchy to the namespace.
+
+You can think of DFS Namespaces folders as analogous to file shares.
+
+# [GUI](#tab/windows-gui)
+
+1. In the DFS Management console, select the namespace you just created and select **New Folder**. The resulting **New Folder** dialog will allow you to create both the folder and its targets.
+
+![A screenshot of the **New Folder** dialog.](../media/azure-netapp-files/azure-netapp-files-dfs-folder-targets.png)
+
+2. In the textbox labeled **Name**, provide the name of the share.
+
+3. Select **Add...** to add folder targets for this folder. The resulting **Add Folder Target** dialog provides a textbox labeled **Path to folder target** where you can provide the UNC path to your Azure NetApp Files SMB share.
+
+4. Select **OK** on the **Add Folder Target** dialog.
+
+5. Select **OK** on the **New Folder** dialog to create the folder and folder targets.
+
+# [PowerShell](#tab/azure-powershell)
+
+```PowerShell
+# Variables
+$shareName = "finance"
+$targetUNC = "\\anf-xxxx.contoso.com\finance"
+
+# Create folder and folder targets
+$sharePath = "$namespacePath\$shareName"
+New-DfsnFolder -Path $sharePath -TargetPath $targetUNC
+```
+++
+Now that you have created a namespace, a folder, and a folder target, you should be able to mount your file share through DFS Namespaces. Using a standalone namespace with root consolidation, you can access directly through your old server name, such as `\\fileserver01\finance`.
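+
+To verify the takeover from a client, you can map a drive to the old server name, which is now answered by the DFS namespace. The drive letter, server, and share names below are the example values used in this article.
+
+```PowerShell
+# Map a drive to the old server name and list its contents
+New-SmbMapping -LocalPath "Z:" -RemotePath "\\fileserver01\finance"
+Get-ChildItem -Path "Z:\"
+```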
+
+## See also
+
+- [Create an SMB volume for Azure NetApp Files](./azure-netapp-files-create-volumes-smb.md)
+- [Guidelines for Azure NetApp Files network planning](./azure-netapp-files-network-topologies.md)
+- [Windows Server Distributed File System Namespaces](/windows-server/storage/dfs-namespaces/dfs-overview)
azure-resource-manager Bicep Functions Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-files.md
Title: Bicep functions - files description: Describes the functions to use in a Bicep file to load content from a file. Previously updated : 09/30/2021 Last updated : 07/01/2022 # File functions for Bicep
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Use this function when you have binary content you would like to include in deployment. Rather than manually encoding the file to a base64 string and adding it to your Bicep file, load the file with this function. The file is loaded when the Bicep file is compiled to a JSON template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
-This function requires **Bicep version 0.4.412 or later**.
+This function requires **Bicep version 0.4.412 or later**.
The maximum allowed size of the file is **96 Kb**.
The maximum allowed size of the file is **96 Kb**.
The file as a base64 string.
+## loadJsonContent
+
+`loadJsonContent(filePath, [jsonPath], [encoding])`
+
+Loads the specified JSON file as an Any object.
+
+Namespace: [sys](bicep-functions.md#namespaces-for-functions).
+
+### Parameters
+
+| Parameter | Required | Type | Description |
+|: |: |: |: |
+| filePath | Yes | string | The path to the file to load. The path is relative to the deployed Bicep file. |
+| jsonPath | No | string | JSONPath expression that specifies which part of the JSON file to load. |
+| encoding | No | string | The file encoding. The default value is `utf-8`. The available options are: `iso-8859-1`, `us-ascii`, `utf-16`, `utf-16BE`, or `utf-8`. |
+
+### Remarks
+
+Use this function when you have JSON content or minified JSON content that is stored in a separate file. Rather than duplicating the JSON content in your Bicep file, load the content with this function. You can load a part of a JSON file by specifying a JSON path. The file is loaded when the Bicep file is compiled to the JSON template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
+
+In VS Code, the properties of the loaded object are available to IntelliSense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
+
+This function requires **Bicep version 0.7.4 or later**.
+
+### Return value
+
+The contents of the file as an Any object.
+
+### Examples
+
+The following example creates a JSON file that contains values for a network security group.
++
+You load that file and use the resulting object to assign values to the resource.
++
+You can reuse the file of values in other Bicep files that deploy a network security group.
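+
+As a minimal sketch of this pattern, the following assumes a file named `nsg-rules.json` next to the Bicep file whose `securityRules` property contains an array of rule objects in the shape the resource expects; the file name and property names are illustrative.
+
+```bicep
+// Load the shared values file. nsg-rules.json is assumed to look like:
+// { "securityRules": [ { "name": "...", "properties": { ... } } ] }
+var nsgConfig = loadJsonContent('nsg-rules.json')
+
+resource nsg 'Microsoft.Network/networkSecurityGroups@2021-05-01' = {
+  name: 'example-nsg'
+  location: resourceGroup().location
+  properties: {
+    securityRules: nsgConfig.securityRules
+  }
+}
+```
+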
+ ## loadTextContent `loadTextContent(filePath, [encoding])`
-Loads the content of the specified file as a string.
+Loads the content of the specified file as a string.
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
Use this function when you have content that is stored in a separate file. Rather than duplicating the content in your Bicep file, load the content with this function. For example, you can load a deployment script from a file. The file is loaded when the Bicep file is compiled to the JSON template. During deployment, the JSON template contains the contents of the file as a hard-coded string.
-When loading a JSON file, you can use the [json](bicep-functions-object.md#json) function with the loadTextContent function to create a JSON object. In VS Code, the properties of the loaded object are available intellisense. For example, you can create a file with values to share across many Bicep files. An example is shown in this article.
+Use the [`loadJsonContent()`](#loadjsoncontent) function to load JSON files.
This function requires **Bicep version 0.4.412 or later**.
The following example loads a script from a file and uses it for a deployment sc
::: code language="bicep" source="~/azure-docs-bicep-samples/syntax-samples/functions/loadTextContent/loaddeploymentscript.bicep" highlight="13" :::
-In the next example, you create a JSON file that contains values you want to use for a network security group.
--
-You load that file and convert it to a JSON object. You use the object to assign values to the resource.
--
-You can reuse the file of values in other Bicep files that deploy a network security group.
- ## Next steps * For a description of the sections in a Bicep file, see [Understand the structure and syntax of Bicep files](./file.md).
azure-resource-manager Bicep Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions.md
Title: Bicep functions description: Describes the functions to use in a Bicep file to retrieve values, work with strings and numerics, and retrieve deployment information. Previously updated : 05/02/2022 Last updated : 07/01/2022 # Bicep functions
The following functions are available for getting values related to the deployme
The following functions are available for loading the content from external files into your Bicep file. All of these functions are in the `sys` namespace. * [loadFileAsBase64](bicep-functions-files.md#loadfileasbase64)
+* [loadJsonContent](bicep-functions-files.md#loadjsoncontent)
* [loadTextContent](bicep-functions-files.md#loadtextcontent) ## Logical functions
azure-resource-manager Patterns Shared Variable File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/patterns-shared-variable-file.md
description: Describes the shared variable file pattern.
Previously updated : 08/18/2021 Last updated : 07/01/2022 # Shared variable file pattern
Furthermore, when you work with variables defined as arrays, you might have a se
## Solution
-Create a JSON file that includes the variables you need to share. Use the [`json()` function](bicep-functions-object.md#json) and [`loadTextContent()` function](bicep-functions-files.md#loadtextcontent) to load the file and access the variables. For array variables, use the [`concat()` function](bicep-functions-array.md#concat) to combine the shared values with any custom values for the specific resource.
+Create a JSON file that includes the variables you need to share. Use the [`loadJsonContent()` function](bicep-functions-files.md#loadjsoncontent) to load the file and access the variables. For array variables, use the [`concat()` function](bicep-functions-array.md#concat) to combine the shared values with any custom values for the specific resource.
## Example 1: Naming prefixes
azure-resource-manager Scenarios Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scenarios-monitoring.md
description: Describes how to create monitoring resources by using Bicep.
Previously updated : 05/16/2022 Last updated : 07/01/2022 # Create monitoring resources by using Bicep
You can create Log Analytics workspaces with the resource type [Microsoft.Operat
## Diagnostic settings
+Diagnostic settings enable you to configure Azure Monitor to export your logs and metrics to a number of destinations, including Log Analytics and Azure Storage.
+ When creating [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md) in Bicep, remember that this resource is an [extension resource](scope-extension-resources.md), which means it's applied to another resource. You can create diagnostic settings in Bicep by using the resource type [Microsoft.Insights/diagnosticSettings](/azure/templates/microsoft.insights/diagnosticsettings?tabs=bicep).
-When creating diagnostic settings in Bicep, you need to apply the scope of the diagnostic setting. The scope can be applied at the management, subscription, or resource group level. [Use the scope property on this resource to set the scope for this resource](../../azure-resource-manager/bicep/scope-extension-resources.md).
+When creating diagnostic settings in Bicep, you need to apply the scope of the diagnostic setting. The diagnostic setting can be applied at the management, subscription, or resource group level. [Use the scope property on this resource to set the scope for this resource](../../azure-resource-manager/bicep/scope-extension-resources.md).
Consider the following example:
Consider the following example:
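
A minimal sketch of such a diagnostic setting follows. It assumes the App Service plan and Log Analytics workspace already exist in the same resource group; the resource names are placeholders.

```bicep
resource appServicePlan 'Microsoft.Web/serverfarms@2021-03-01' existing = {
  name: 'example-plan'
}

resource logAnalyticsWorkspace 'Microsoft.OperationalInsights/workspaces@2021-06-01' existing = {
  name: 'example-workspace'
}

// The diagnostic setting is an extension resource scoped to the App Service plan
resource planDiagnostics 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
  name: 'example-diagnostic-setting'
  scope: appServicePlan
  properties: {
    workspaceId: logAnalyticsWorkspace.id
    metrics: [
      {
        category: 'AllMetrics'
        enabled: true
      }
    ]
  }
}
```
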
In the preceding example, you create a diagnostic setting for the App Service plan and send those diagnostics to Log Analytics. You can use the `scope` property to define your App Service plan as the scope for your diagnostic setting, and use the `workspaceId` property to define the Log Analytics workspace to send the diagnostic logs to. You can also export diagnostic settings to Event Hubs and Azure Storage Accounts.
-Diagnostic settings differ between resources, so ensure that the diagnostic settings you want to create are applicable for the resource you're using.
+Log types differ between resources, so ensure that the logs you want to export are applicable for the resource you're using.
+
+### Activity log diagnostic settings
+
+To use Bicep to configure diagnostic settings to export the Azure activity log, deploy a diagnostic setting resource at the [subscription scope](deploy-to-subscription.md).
+
+The following example shows how to export several activity log types to a Log Analytics workspace:
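+
+A minimal sketch of that pattern follows, assuming the workspace resource ID is passed in as a parameter; the setting name and the selected categories are illustrative and can be extended to the other activity log categories.
+
+```bicep
+targetScope = 'subscription'
+
+param logAnalyticsWorkspaceId string
+
+// Deployed at subscription scope, so the setting applies to the subscription's activity log
+resource activityLogExport 'Microsoft.Insights/diagnosticSettings@2021-05-01-preview' = {
+  name: 'export-activity-log'
+  properties: {
+    workspaceId: logAnalyticsWorkspaceId
+    logs: [
+      {
+        category: 'Administrative'
+        enabled: true
+      }
+      {
+        category: 'Security'
+        enabled: true
+      }
+    ]
+  }
+}
+```
+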
+ ## Alerts
azure-video-indexer Add Contributor Role On The Media Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/add-contributor-role-on-the-media-service.md
Title: Add Contributor role on the Media Services
-description: This topic explains how to add contributor role on the Media Services.
+ Title: Add Contributor role on the Media Services account
+description: This topic explains how to add contributor role on the Media Services account.
Last updated 10/13/2021
-# Add contributor role on the Media Services
+# Add contributor role to Media Services
-This article describes how to assign contributor role on the Media Services.
+This article describes how to assign contributor role on the Media Services account.
> [!NOTE] > If you are creating your Azure Video Indexer through the Azure portal UI, the selected Managed identity will be automatically assigned with a contributor permission on the selected Media Service account.
This article describes how to assign contributor role on the Media Services.
1. Azure Media Services (AMS) 2. User-assigned managed identity+ > [!NOTE] > You'll need an Azure subscription where you have access to both the [Contributor][docs-role-contributor] role and the [User Access Administrator][docs-role-administrator] role to the Azure Media Services and the User-assigned managed identity. If you don't have the right permissions, ask your account administrator to grant you those permissions. The associated Azure Media Services must be in the same region as the Azure Video Indexer account. - ## Add Contributor role on the Media Services
-### [Azure Portal](#tab/portal/)
+### [Azure portal](#tab/portal/)
-### Add Contributor role on the Media Services in the Azure portal
+### Add Contributor role to Media Services using Azure portal
1. Sign in at the [Azure portal](https://portal.azure.com/). * Using the search bar at the top, enter **Media Services**.
This article describes how to assign contributor role on the Media Services.
1. Once you have found the security principal, click to select it. 1. To assign the role, click **Review + assign**
+## Next steps
+
+[Create a new Azure Resource Manager based account](create-account-portal.md)
+ <!-- links --> [docs-role-contributor]: ../role-based-access-control/built-in-roles.md#contributor [docs-role-administrator]: ../role-based-access-control/built-in-roles.md#user-access-administrator
azure-vmware Deploy Zerto Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-zerto-disaster-recovery.md
In this scenario, the primary site is an Azure VMware Solution private cloud in
- Network connectivity, ExpressRoute based, from Azure VMware Solution to the vNET used for disaster recovery. -- Follow the [Zerto Virtual Replication Azure Enterprise Guidelines](http://s3.amazonaws.com/zertodownload_docs/Latest/Zerto%20Virtual%20Replication%20Azure%20Enterprise%20Guidelines.pdf) for the rest of the prerequisites.
+- Follow the [Zerto Virtual Replication Azure Enterprise Guidelines](https://www.zerto.com/wp-content/uploads/2016/11/Zerto-Virtual-Replication-5.0-for-Azure.pdf) for the rest of the prerequisites.
baremetal-infrastructure About The Public Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/about-the-public-preview.md
+
+ Title: About NC2 on Azure Public Preview
+description: Learn about NC2 on Azure Public Preview and the benefits it offers.
++ Last updated : 03/31/2021++
+# About NC2 on Azure Public Preview
+
+The articles in this section are intended for the professionals participating in the Public Preview of NC2 on Azure.
+
+ To provide input, email [NC2-on-Azure Docs](mailto:AzNutanixPM@microsoft.com).
++
+In particular, this article highlights Public Preview features.
+
+## Unlock the benefits of Azure
+
+* Establish a consistent hybrid deployment strategy
+* Operate seamlessly with on-premises Nutanix Clusters in Azure
+* Build and scale without constraints
+* Invent for today and be prepared for tomorrow with NC2 on Azure
+
+### Scale and flexibility that align with your needs
+
+Get scale, automation, and fast provisioning for your Nutanix workloads on global Azure infrastructure to invent with purpose.
+
+### Optimize your investment
+
+Keep using your existing Nutanix investments, skills, and tools to quickly increase business agility with Azure cloud services.
+
+### Gain cloud cost efficiencies
+
+Manage your cloud spending with license portability to significantly reduce the cost of running workloads in the cloud.
+
+### Modernize through the power of Azure
+
+Adapt quicker with unified data governance and gain immediate insights with transformative analytics to drive innovation.
+
+### SKUs
+
+We offer two SKUs: AN36 and AN36P. For specifications, see [SKUs](skus.md).
+
+### More benefits
+
+* Microsoft Azure Consumption Contract (MACC) credits
+* Nutanix Bring your own License (BYOL)
+
+> [!NOTE]
+> During the public preview, RI is not supported.
+An additional discount may be available.
+
+## Support
+
+Nutanix (for software-related issues) and Microsoft (for infrastructure-related issues) will provide end-user support.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Use cases and supported scenarios](use-cases-and-supported-scenarios.md)
baremetal-infrastructure Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/architecture.md
+
+ Title: Architecture of BareMetal Infrastructure for NC2
+description: Learn about the architecture of several configurations of BareMetal Infrastructure for NC2.
++ Last updated : 04/14/2021++
+# Architecture of BareMetal Infrastructure for Nutanix
+
+In this article, we look at the architectural options for BareMetal Infrastructure for Nutanix and the features each option supports.
+
+## Deployment example
+
+The image in this section shows one example of an NC2 on Azure deployment.
++
+### Cluster Management virtual network
+
+* Contains the Nutanix Ready Nodes
+* Nodes reside in a delegated subnet (special BareMetal construct)
+
+### Hub virtual network
+
+* Contains a gateway subnet and VPN Gateway
+* VPN Gateway is entry point from on-premises to cloud
+
+### PC virtual network
+
+* Contains Prism Central - Nutanix's software appliance that enables advanced functionality within the Prism portal.
+
+## Connect from cloud to on-premises
+
+Connecting from the cloud to on-premises is supported by two traditional products: ExpressRoute and VPN Gateway.
+One example deployment is to have a VPN gateway in the Hub virtual network.
+This virtual network is peered with both the PC virtual network and Cluster Management virtual network, providing connectivity across the network and to your on-premises site.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Requirements](requirements.md)
baremetal-infrastructure Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/faq.md
+
+ Title: FAQ
+description: Questions frequently asked about NC2 on Azure
++ Last updated : 07/01/2022++
+# Frequently asked questions about NC2 on Azure
+
+This article addresses questions most frequently asked about NC2 on Azure.
+
+## What is Hyperconverged Infrastructure (HCI)?
+
+Hyper-converged infrastructure (HCI) uses locally attached storage resources to combine common data center hardware with intelligent software to create flexible building blocks that replace legacy infrastructure consisting of separate servers, storage networks, and storage arrays. [Video explanation](https://www.youtube.com/watch?v=OPYA5-V0yRo)
+
+## How can I create a VM on a node?
+
+After a customer provisions a cluster of Nutanix Ready Nodes, they can spin up a VM through the Nutanix Prism Portal.
+This operation should be exactly the same as on-premises in the prism portal.
+
+## Is NC2 on Azure a third party or first party offering?
+
+NC2 on Azure is a 3rd-party offering on Azure Marketplace.
+However, we're working hand in hand with Nutanix to offer the best product experience.
+
+## How will I be billed?
+
+Customers will be billed on a pay-as-you-go basis. Additionally, customers are able to use their existing Microsoft Azure Consumption Contract (MACC).
+
+## What software advantages does Nutanix have over competitors?
+
+* Data locality
+* Shadow Clones (which lead to faster boot time)
+* Cluster-level microservices that lead to world-class performance
+
+## Will this solution integrate with the rest of the Azure cloud?
+
+Yes! You can use the products and services in Azure that you already have and love.
+
+## Who supports NC2 on Azure?
+
+Microsoft delivers support for BareMetal infrastructure of NC2 on Azure.
+You can submit a support request. For Cloud Solution Provider (CSP) managed subscriptions, the first level of support is provided by the Solution Provider, in the same fashion as CSP does for other Azure services.
+
+Nutanix delivers support for Nutanix software of NC2 on Azure.
+Nutanix offers a support tier called Production Support for NC2.
+For more information about Production Support tiers and SLAs, see Product Support Programs under Cloud Services Support.
+
+## Can I use my existing VPN or ER gateway for the DR scenario?
+
+Technically, yes. Raise a support ticket from Azure portal to get this functionality enabled.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Getting started](get-started.md)
baremetal-infrastructure Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/get-started.md
+
+ Title: Getting started
+description: Learn how to sign up, set up, and use NC2 on Azure Public Preview.
++ Last updated : 07/01/2021++
+# Getting started with NC2 on Azure
+
+Learn how to sign up for, set up, and use NC2 on Azure Public Preview.
+
+## Sign up for the Public Preview
+
+Once you've satisfied the [requirements](requirements.md), go to [Nutanix Cloud Clusters
+on Azure Deployment
+and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf) to sign up for the Preview.
+
+## Set up NC2 on Azure
+
+To set up NC2 on Azure, go to [Nutanix Cloud Clusters
+on Azure Deployment and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+
+## Use NC2 on Azure
+
+For more information about using NC2 on Azure, see [Nutanix Cloud Clusters
+on Azure Deployment
+and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [About the Public Preview](about-the-public-preview.md)
baremetal-infrastructure Nc2 Baremetal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/nc2-baremetal-overview.md
+
+ Title: What is BareMetal Infrastructure for NC2 on Azure?
+description: Learn about the features BareMetal Infrastructure offers for NC2 workloads.
++ Last updated : 07/01/2022++
+# What is BareMetal Infrastructure for NC2 on Azure?
+
+In this article, we'll give an overview of the features BareMetal Infrastructure offers for Nutanix workloads.
+
+Nutanix Cloud Clusters (NC2) on Microsoft Azure provides a hybrid cloud solution that operates as a single cloud, allowing you to manage applications and infrastructure in your private cloud and Azure. With NC2 running on Azure, you can seamlessly move your applications between on-premises and Azure using a single management console. With NC2 on Azure, you can use your existing Azure accounts and networking setup (VPN, VNets, and Subnets), eliminating the need to manage any complex network overlays. With this hybrid offering, you use the same Nutanix software and licenses across your on-premises cluster and Azure to optimize your IT investment efficiently.
+
+You use the NC2 console to create a cluster, update the cluster capacity (the number of nodes), and delete a Nutanix cluster. After you create a Nutanix cluster in Azure using NC2, you can operate the cluster in the same manner as you operate your on-premises Nutanix cluster with minor changes in the Nutanix command-line interface (nCLI), Prism Element and Prism Central web consoles, and APIs.
+
+## Supported protocols
+
+The following protocols are used for different mount points within BareMetal servers for Nutanix workload.
+
+- OS mount - internet small computer systems interface (iSCSI)
+- Data/log - [Network File System version 3 (NFSv3)](/windows-server/storage/nfs/nfs-overview#nfs-version-3-continuous-availability)
+- Backup/archive - [Network File System version 4 (NFSv4)](/windows-server/storage/nfs/nfs-overview#nfs-version-41)
+
+## Licensing
+
+You can bring your own on-premises capacity-based Nutanix licenses (CBLs).
+Alternatively, you can purchase licenses from Nutanix or from Azure Marketplace.
+
+## Operating system and hypervisor
+
+NC2 runs Nutanix Acropolis Operating System (AOS) and Nutanix Acropolis Hypervisor (AHV).
+
+- Servers are pre-loaded with [AOS 6.1](https://www.nutanixbible.com/4-book-of-aos.html).
+- AHV 6.1 is built into this product as the default hypervisor at no extra cost.
+- AHV hypervisor is based on open source Kernel-based Virtual Machine (KVM).
+- AHV will determine the lowest processor generation in the cluster and constrain all Quick Emulator (QEMU) domains to that level.
+
+This functionality allows mixing of processor generations within an AHV cluster and ensures the ability to live-migrate between hosts.
+
+AOS abstracts kvm, virsh, qemu, libvirt, and iSCSI from the end-user and handles all backend configuration.
+Thus users can use Prism to manage everything they would want to manage, while not needing to be concerned with low-level management.
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Getting started with NC2 on Azure](get-started.md)
baremetal-infrastructure Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/requirements.md
+
+ Title: Requirements
+description: Learn what you need to run NC2 on Azure, including Azure, Nutanix, networking, and other requirements.
++ Last updated : 03/31/2021++
+# Requirements
+
+This article assumes prior knowledge of the Nutanix stack and Azure services to operate significant deployments on Azure.
+The following sections identify the requirements to use Nutanix Clusters on Azure:
+
+## Azure account requirements
+
+* An Azure account with a new subscription
+* An Azure Active Directory
+
+## My Nutanix account requirements
+
+For more information, see "NC2 on Azure Subscription and Billing" in [Nutanix Cloud Clusters on Azure Deployment and User Guide](https://download.nutanix.com/documentation/hosted/Nutanix-Cloud-Clusters-Azure.pdf).
+
+## Networking requirements
+
+* Connectivity between your on-premises datacenter and Azure. Both ExpressRoute and VPN are supported.
+* After a cluster is created, you'll need Virtual IP addresses for both the on-premises cluster and the cluster running in Azure.
+* Outbound internet access on your Azure portal.
+* Azure Directory Service resolves the FQDN:
+gateway-external-api.console.nutanix.com.
+
+## Other requirements
+
+* Minimum of three (or more) Azure Nutanix Ready nodes per cluster
+* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure
+* Prism Central instance deployed on NC2 on Azure to manage the Nutanix clusters in Azure
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Supported instances and regions](supported-instances-and-regions.md)
baremetal-infrastructure Skus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/skus.md
+
+ Title: SKUs
+description: Learn about SKU options for NC2 on Azure Public Preview, including core, RAM, storage, and network.
++ Last updated : 07/01/2021++
+# SKUs
+
+This article identifies options associated with SKUs available for NC2 on Azure Public Preview, including core, RAM, storage, and network.
+
+## Options
+
+The following table presents component options for each available SKU.
+
+| Component |Ready Node for Nutanix AN36|Ready Node for Nutanix AN36P|
+| :- | -: |::|
+|Core|Intel 6140, 36 Core, 2.3 GHz|Intel 6240, 36 Core, 2.6 GHz|
+|vCPUs|72|72|
+|RAM|576 GB|768 GB|
+|Storage|18.56 TB (8 x 1.92 TB SATA SSD, 2 x 1.6 TB NVMe)|19.95 TB (2 x 375 GB Optane, 6 x 3.2 TB NVMe)|
+|Network|100 Gbps (four links * 25 Gbps)|100 Gbps (four links * 25 Gbps)|
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [FAQ](faq.md)
baremetal-infrastructure Solution Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/solution-design.md
+
+ Title: Solution design
+description: Learn about topologies and constraints for NC2 on Azure Public Preview.
++ Last updated : 07/01/2022++
+# Solution design
+
+This article identifies topologies and constraints for NC2 on Azure Public Preview.
+
+## Supported topologies
+
+The following table describes the network topologies supported by each network features configuration of NC2 on Azure.
+
+|Topology |Basic network features |
+| :- |::|
+|Connectivity to BareMetal (BM) in a local VNet| Yes |
+|Connectivity to BM in a peered VNet (Same region)|Yes |
+|Connectivity to BM in a peered VNet (Cross region or global peering)|No |
+|Connectivity to a BM over ExpressRoute gateway |Yes|
+|ExpressRoute (ER) FastPath |No |
+|Connectivity from on-premises to a BM in a spoke VNet over ExpressRoute gateway and VNet peering with gateway transit|Yes |
+|Connectivity from on-premises to a BM in a spoke VNet over VPN gateway| Yes |
+|Connectivity from on-premises to a BM in a spoke VNet over VPN gateway and VNet peering with gateway transit| Yes |
+|Connectivity over Active/Passive VPN gateways| Yes |
+|Connectivity over Active/Active VPN gateways| No |
+|Connectivity over Active/Active Zone Redundant gateways| No |
+|Connectivity over Virtual WAN (VWAN)| No |
+
+## Constraints
+
+The following table describes what's supported for each network features configuration:
+
+|Features |Basic network features |
+| :- | -: |
+|Delegated subnet per VNet |1|
+|[Network Security Groups](/azure/virtual-network/network-security-groups-overview) on NC2 on Azure-delegated subnets|No|
+|[User-defined routes (UDRs)](/azure/virtual-network/virtual-networks-udr-overview#user-defined) on NC2 on Azure-delegated subnets|No|
+|Connectivity to [private endpoints](/azure/private-link/private-endpoint-overview)|No|
+|Load balancers for NC2 on Azure traffic|No|
+|Dual stack (IPv4 and IPv6) virtual network|IPv4 only supported|
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [Architecture](architecture.md)
baremetal-infrastructure Supported Instances And Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/supported-instances-and-regions.md
+
+ Title: Supported instances and regions
+description: Learn about instances and regions supported for NC2 on Azure Public Preview.
+++ Last updated : 03/31/2021++
+# Supported instances and regions
+
+Learn about instances and regions supported for NC2 on Azure Public Preview.
+
+## Supported instances
+
+Nutanix Clusters on Azure supports:
+
+* Minimum of three bare metal nodes per cluster.
+* Maximum of nine bare metal nodes for public preview.
+* Only the Nutanix AHV hypervisor on Nutanix clusters running in Azure.
+* Prism Central instance deployed on Nutanix Clusters on Azure to manage the Nutanix clusters in Azure.
+
+## Supported regions
+
+This public preview release supports the following Azure regions:
+
+|Region name |Ready Node for Nutanix AN36 |Ready Node for Nutanix AN36P |
+| :- | -: |::|
+|East US (Virginia)|Yes|No|
+|West US 2 (Washington)|Yes|No|
+|East US 2 (Virginia)|No|Yes|
+|North Central US (Illinois)|No|Yes|
+|UK South (London)*|No|Yes|
+|Germany West Central (Frankfurt)|No|Yes|
+|Australia East|No|Yes|
+|West Europe (Amsterdam)|No|Yes|
+|Southeast Asia (Singapore)|No|Yes|
+
+## Next steps
+
+Learn more:
+
+> [!div class="nextstepaction"]
+> [SKUs](skus.md)
baremetal-infrastructure Use Cases And Supported Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/workloads/nc2-public-preview/use-cases-and-supported-scenarios.md
+
+ Title: Use cases and supported scenarios
+description: Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
+++ Last updated : 07/01/2022++
+# Use cases and supported scenarios
+
+ Learn about use cases and supported scenarios for NC2 on Azure, including cluster management, disaster recovery, on-demand elasticity, and lift-and-shift.
+
+## Unified management experience - cluster management
+
+It's critical to customers that operations and cluster management be nearly identical to what they are on-premises.
+Customers can update capacity, monitor alerts, replace hosts, monitor usage, and more by combining the respective strengths of Microsoft and Nutanix.
+
+## Disaster recovery
+
+Disaster recovery is critical to cloud functionality.
+A disaster can be any of the following:
+
+- Cyber attack
+- Data breach
+- Equipment failure
+- Natural disaster
+- Data loss
+- Human error
+- Malware and viruses
+- Network and internet blips
+- Hardware and/or software failure
+- Weather catastrophes
+- Flooding
+- Office vandalism
+
+ ...or anything else that puts your operations at risk.
+
+When a disaster strikes, the goal of any DR plan is to ensure operations run as normally as possible.
+While the business will be aware of the crisis, ideally, its customers and end-users shouldn't be affected.
+
+## On-demand elasticity
+
+Scale up and scale out as you like.
+We provide the flexibility that means you don't have to procure hardware yourself - with just a click of a button you can get additional nodes in the cloud nearly instantly.
+
+## Lift and shift
+
+Move applications to the cloud and modernize your infrastructure.
+Applications move with no changes, allowing for flexible operations and minimum downtime.
+
+> [!div class="nextstepaction"]
+> [Solution design](solution-design.md)
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.3.0 | Generally available | | Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.3.0 | Generally available | | Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.1.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.2.0 | Generally available |
## Prerequisites
cognitive-services Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/container-image-tags.md
This container image has the following tags available. You can also find a full
# [Latest version](#tab/current)
-Release notes for `v2.1.0`:
+Release notes for `v2.2.0`:
**Features** * Security upgrade.
+* Support for the `sv-se-hillevineural`, `sv-se-mattiasneural`, and `sv-se-sofieneural` voices.
| Image Tags | Notes | ||:| | `latest` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
-| `2.1.0-amd64-<locale-and-voice>` | Replace `<locale>` with one of the available locales, listed below. For example `2.1.0-amd64-en-us-arianeural`. |
+| `2.2.0-amd64-<locale-and-voice>` | Replace `<locale-and-voice>` with one of the available locale and voice combinations, listed below. For example `2.2.0-amd64-en-us-arianeural`. |
-| v2.1.0 Locales and voices | Notes |
+| v2.2.0 Locales and voices | Notes |
|-|:|
-| `am-et-amehaneural` | Container image with the `am-ET` locale and `am-ET-Amehaneural` voice. |
-| `am-et-mekdesneural` | Container image with the `am-ET` locale and `am-ET-Mekdesneural` voice. |
-| `ar-bh-lailaneural` | Container image with the `ar-BH` locale and `ar-BH-Lailaneural` voice. |
-| `ar-eg-salmaneura` | Container image with the `ar-EG` locale and `ar-eg-Salmaneura` voice. |
-| `ar-eg-shakirneural` | Container image with the `ar-EG` locale and `ar-eg-shakirneural` voice. |
-| `ar-sa-hamedneural` | Container image with the `ar-SA` locale and `ar-sa-Hamedneural` voice. |
-| `ar-sa-zariyahneural` | Container image with the `ar-SA` locale and `ar-sa-Zariyahneural` voice. |
-| `cs-cz-antoninneural` | Container image with the `cs-CZ` locale and `cs-CZ-Antoninneural` voice. |
-| `cs-cz-vlastaneural` | Container image with the `cs-CZ` locale and `cs-CZ-Vlastaneural` voice. |
-| `de-ch-janneural` | Container image with the `de-CH` locale and `de-CH-Janneural` voice. |
-| `de-ch-lenineural` | Container image with the `de-CH` locale and `de-CH-Lenineural` voice. |
-| `de-de-conradneural` | Container image with the `de-DE` locale and `de-DE-ConradNeural` voice. |
-| `de-de-katjaneural` | Container image with the `de-DE` locale and `de-DE-KatjaNeural` voice. |
-| `en-au-natashaneural` | Container image with the `en-AU` locale and `en-AU-NatashaNeural` voice. |
-| `en-au-williamneural` | Container image with the `en-AU` locale and `en-AU-WilliamNeural` voice. |
-| `en-ca-claraneural` | Container image with the `en-CA` locale and `en-CA-ClaraNeural` voice. |
-| `en-ca-liamneural` | Container image with the `en-CA` locale and `en-CA-LiamNeural` voice. |
-| `en-gb-libbyneural` | Container image with the `en-GB` locale and `en-GB-LibbyNeural` voice. |
-| `en-gb-ryanneural` | Container image with the `en-GB` locale and `en-GB-RyanNeural` voice. |
-| `en-gb-sonianeural` | Container image with the `en-GB` locale and `en-GB-SoniaNeural` voice. |
-| `en-us-arianeural` | Container image with the `en-US` locale and `en-US-AriaNeural` voice. |
-| `en-us-guyneural` | Container image with the `en-US` locale and `en-US-GuyNeural` voice. |
-| `en-us-jennyneural` | Container image with the `en-US` locale and `en-US-JennyNeural` voice. |
-| `es-es-alvaroneural` | Container image with the `es-ES` locale and `es-ES-AlvaroNeural` voice. |
-| `es-es-elviraneural` | Container image with the `es-ES` locale and `es-ES-ElviraNeural` voice. |
-| `es-mx-dalianeural` | Container image with the `es-MX` locale and `es-MX-DaliaNeural` voice. |
-| `es-mx-jorgeneural` | Container image with the `es-MX` locale and `es-MX-JorgeNeural` voice. |
-| `fr-ca-antoineneural` | Container image with the `fr-CA` locale and `fr-CA-AntoineNeural` voice. |
-| `fr-ca-jeanneural` | Container image with the `fr-CA` locale and `fr-CA-JeanNeural` voice. |
-| `fr-ca-sylvieneural` | Container image with the `fr-CA` locale and `fr-CA-SylvieNeural` voice. |
-| `fr-fr-deniseneural` | Container image with the `fr-FR` locale and `fr-FR-DeniseNeural` voice. |
-| `fr-fr-henrineural` | Container image with the `fr-FR` locale and `fr-FR-HenriNeural` voice. |
-| `hi-in-madhurneural` | Container image with the `hi-IN` locale and `hi-IN-MadhurNeural` voice. |
-| `hi-in-swaraneural` | Container image with the `hi-IN` locale and `hi-IN-Swaraneural` voice. |
-| `it-it-diegoneural` | Container image with the `it-IT` locale and `it-IT-DiegoNeural` voice. |
-| `it-it-elsaneural` | Container image with the `it-IT` locale and `it-IT-ElsaNeural` voice. |
-| `it-it-isabellaneural` | Container image with the `it-IT` locale and `it-IT-IsabellaNeural` voice. |
-| `ja-jp-keitaneural` | Container image with the `ja-JP` locale and `ja-JP-KeitaNeural` voice. |
-| `ja-jp-nanamineural` | Container image with the `ja-JP` locale and `ja-JP-NanamiNeural` voice. |
-| `ko-kr-injoonneural` | Container image with the `ko-KR` locale and `ko-KR-InJoonNeural` voice. |
-| `ko-kr-sunhineural` | Container image with the `ko-KR` locale and `ko-KR-SunHiNeural` voice. |
-| `pt-br-antonioneural` | Container image with the `pt-BR` locale and `pt-BR-AntonioNeural` voice. |
-| `pt-br-franciscaneural` | Container image with the `pt-BR` locale and `pt-BR-FranciscaNeural` voice. |
-| `so-so-muuseneural` | Container image with the `so-SO` locale and `so-SO-Muuseneural` voice. |
-| `so-so-ubaxneural` | Container image with the `so-SO` locale and `so-SO-Ubaxneural` voice. |
-| `tr-tr-ahmetneural` | Container image with the `tr-TR` locale and `tr-TR-AhmetNeural` voice. |
-| `tr-tr-emelneural` | Container image with the `tr-TR` locale and `tr-TR-EmelNeural` voice. |
-| `zh-cn-xiaoxiaoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoxiaoNeural` voice. |
-| `zh-cn-xiaoyouneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoYouNeural` voice. |
-| `zh-cn-yunyangneural` | Container image with the `zh-CN` locale and `zh-CN-YunYangNeural` voice. |
-| `zh-cn-yunyeneural` | Container image with the `zh-CN` locale and `zh-CN-YunYeNeural` voice. |
-| `zh-cn-xiaochenneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoChenNeural` voice. |
-| `zh-cn-xiaohanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoHanNeural` voice. |
-| `zh-cn-xiaomoneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoMoNeural` voice. |
-| `zh-cn-xiaoqiuneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoQiuNeural` voice. |
-| `zh-cn-xiaoruineural` | Container image with the `zh-CN` locale and `zh-CN-XiaoRuiNeural` voice. |
-| `zh-cn-xiaoshuangneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoShuangNeural` voice.|
-| `zh-cn-xiaoxuanneural` | Container image with the `zh-CN` locale and `zh-CN-XiaoXuanNeural` voice. |
-| `zh-cn-xiaoyanneural-preview` | Container image with the `zh-CN` locale and `zh-CN-XiaoYanNeural` voice. |
-| `zh-cn-yunxineural` | Container image with the `zh-CN` locale and `zh-CN-YunXiNeural` voice. |
+| `am-et-amehaneural`| Container image with the `am-ET` locale and `am-ET-amehaneural` voice.|
+| `am-et-mekdesneural`| Container image with the `am-ET` locale and `am-ET-mekdesneural` voice.|
+| `ar-bh-lailaneural`| Container image with the `ar-BH` locale and `ar-BH-lailaneural` voice.|
+| `ar-eg-salmaneural`| Container image with the `ar-EG` locale and `ar-EG-salmaneural` voice.|
+| `ar-eg-shakirneural`| Container image with the `ar-EG` locale and `ar-EG-shakirneural` voice.|
+| `ar-sa-hamedneural`| Container image with the `ar-SA` locale and `ar-SA-hamedneural` voice.|
+| `ar-sa-zariyahneural`| Container image with the `ar-SA` locale and `ar-SA-zariyahneural` voice.|
+| `cs-cz-antoninneural`| Container image with the `cs-CZ` locale and `cs-CZ-antoninneural` voice.|
+| `cs-cz-vlastaneural`| Container image with the `cs-CZ` locale and `cs-CZ-vlastaneural` voice.|
+| `de-ch-janneural`| Container image with the `de-CH` locale and `de-CH-janneural` voice.|
+| `de-ch-lenineural`| Container image with the `de-CH` locale and `de-CH-lenineural` voice.|
+| `de-de-conradneural`| Container image with the `de-DE` locale and `de-DE-conradneural` voice.|
+| `de-de-katjaneural`| Container image with the `de-DE` locale and `de-DE-katjaneural` voice.|
+| `en-au-natashaneural`| Container image with the `en-AU` locale and `en-AU-natashaneural` voice.|
+| `en-au-williamneural`| Container image with the `en-AU` locale and `en-AU-williamneural` voice.|
+| `en-ca-claraneural`| Container image with the `en-CA` locale and `en-CA-claraneural` voice.|
+| `en-ca-liamneural`| Container image with the `en-CA` locale and `en-CA-liamneural` voice.|
+| `en-gb-libbyneural`| Container image with the `en-GB` locale and `en-GB-libbyneural` voice.|
+| `en-gb-ryanneural`| Container image with the `en-GB` locale and `en-GB-ryanneural` voice.|
+| `en-gb-sonianeural`| Container image with the `en-GB` locale and `en-GB-sonianeural` voice.|
+| `en-us-arianeural`| Container image with the `en-US` locale and `en-US-arianeural` voice.|
+| `en-us-guyneural`| Container image with the `en-US` locale and `en-US-guyneural` voice.|
+| `en-us-jennyneural`| Container image with the `en-US` locale and `en-US-jennyneural` voice.|
+| `es-es-alvaroneural`| Container image with the `es-ES` locale and `es-ES-alvaroneural` voice.|
+| `es-es-elviraneural`| Container image with the `es-ES` locale and `es-ES-elviraneural` voice.|
+| `es-mx-dalianeural`| Container image with the `es-MX` locale and `es-MX-dalianeural` voice.|
+| `es-mx-jorgeneural`| Container image with the `es-MX` locale and `es-MX-jorgeneural` voice.|
+| `fr-ca-antoineneural`| Container image with the `fr-CA` locale and `fr-CA-antoineneural` voice.|
+| `fr-ca-jeanneural`| Container image with the `fr-CA` locale and `fr-CA-jeanneural` voice.|
+| `fr-ca-sylvieneural`| Container image with the `fr-CA` locale and `fr-CA-sylvieneural` voice.|
+| `fr-fr-deniseneural`| Container image with the `fr-FR` locale and `fr-FR-deniseneural` voice.|
+| `fr-fr-henrineural`| Container image with the `fr-FR` locale and `fr-FR-henrineural` voice.|
+| `hi-in-madhurneural`| Container image with the `hi-IN` locale and `hi-IN-madhurneural` voice.|
+| `hi-in-swaraneural`| Container image with the `hi-IN` locale and `hi-IN-swaraneural` voice.|
+| `it-it-diegoneural`| Container image with the `it-IT` locale and `it-IT-diegoneural` voice.|
+| `it-it-elsaneural`| Container image with the `it-IT` locale and `it-IT-elsaneural` voice.|
+| `it-it-isabellaneural`| Container image with the `it-IT` locale and `it-IT-isabellaneural` voice.|
+| `ja-jp-keitaneural`| Container image with the `ja-JP` locale and `ja-JP-keitaneural` voice.|
+| `ja-jp-nanamineural`| Container image with the `ja-JP` locale and `ja-JP-nanamineural` voice.|
+| `ko-kr-injoonneural`| Container image with the `ko-KR` locale and `ko-KR-injoonneural` voice.|
+| `ko-kr-sunhineural`| Container image with the `ko-KR` locale and `ko-KR-sunhineural` voice.|
+| `pt-br-antonioneural`| Container image with the `pt-BR` locale and `pt-BR-antonioneural` voice.|
+| `pt-br-franciscaneural`| Container image with the `pt-BR` locale and `pt-BR-franciscaneural` voice.|
+| `so-so-muuseneural`| Container image with the `so-SO` locale and `so-SO-muuseneural` voice.|
+| `so-so-ubaxneural`| Container image with the `so-SO` locale and `so-SO-ubaxneural` voice.|
+| `sv-se-hillevineural`| Container image with the `sv-SE` locale and `sv-SE-hillevineural` voice.|
+| `sv-se-mattiasneural`| Container image with the `sv-SE` locale and `sv-SE-mattiasneural` voice.|
+| `sv-se-sofieneural`| Container image with the `sv-SE` locale and `sv-SE-sofieneural` voice.|
+| `tr-tr-ahmetneural`| Container image with the `tr-TR` locale and `tr-TR-ahmetneural` voice.|
+| `tr-tr-emelneural`| Container image with the `tr-TR` locale and `tr-TR-emelneural` voice.|
+| `zh-cn-xiaochenneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaochenneural` voice.|
+| `zh-cn-xiaohanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaohanneural` voice.|
+| `zh-cn-xiaomoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaomoneural` voice.|
+| `zh-cn-xiaoqiuneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoqiuneural` voice.|
+| `zh-cn-xiaoruineural`| Container image with the `zh-CN` locale and `zh-CN-xiaoruineural` voice.|
+| `zh-cn-xiaoshuangneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoshuangneural` voice.|
+| `zh-cn-xiaoxiaoneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxiaoneural` voice.|
+| `zh-cn-xiaoxuanneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoxuanneural` voice.|
+| `zh-cn-xiaoyanneural-preview`| Container image with the `zh-CN` locale and `zh-CN-xiaoyanneural` voice.|
+| `zh-cn-xiaoyouneural`| Container image with the `zh-CN` locale and `zh-CN-xiaoyouneural` voice.|
+| `zh-cn-yunxineural`| Container image with the `zh-CN` locale and `zh-CN-yunxineural` voice.|
+| `zh-cn-yunyangneural`| Container image with the `zh-CN` locale and `zh-CN-yunyangneural` voice.|
+| `zh-cn-yunyeneural`| Container image with the `zh-CN` locale and `zh-CN-yunyeneural` voice.|
# [Previous version](#tab/previous)
+Release notes for `v2.1.0`:
+
+**Features**
+* Security upgrade.
++ Release notes for `v2.0.0`: **Features**
cognitive-services What Is Personalizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/what-is-personalizer.md
This documentation contains the following article types:
* [**Concepts**](how-personalizer-works.md) provide in-depth explanations of the service functionality and features. * [**Tutorials**](tutorial-use-personalizer-web-app.md) are longer guides that show you how to use the service as a component in broader business solutions.
-Before you get started, try out [Personalizer with this interactive demo](https://personalizationdemo.azurewebsites.net/).
+Before you get started, try out [Personalizer with this interactive demo](https://personalizerdevdemo.azurewebsites.net/).
## How does Personalizer select the best content item?
cognitive-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/personalizer/whats-new.md
The following preview features were released at the Build 2019 Conference:
## Next steps * [Quickstart: Create a feedback loop in C#](./quickstart-personalizer-sdk.md?pivots=programming-language-csharp%253fpivots%253dprogramming-language-csharp)
-* [Use the interactive demo](https://personalizationdemo.azurewebsites.net/)
+* [Use the interactive demo](https://personalizerdevdemo.azurewebsites.net/)
communication-services Call Recording https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-recording.md
Call Recording provides a set of APIs to start, stop, pause and resume recording
![Call recording concept diagram](../media/call-recording-concept.png) ## Media output types
-Call recording currently supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats. The mixed audio+video output media matches meeting recordings produced via Microsoft Teams recording.
+Call recording currently supports mixed audio+video MP4 and mixed audio-only MP3/WAV output formats in Public Preview. The mixed audio+video output media matches meeting recordings produced via Microsoft Teams recording.
-| Channel Type | Content Format | Video | Audio |
-| :-- | :- | :- | : |
-| audioVideo | mp4 | 1920x1080 8 FPS video of all participants in default tile arrangement | 16kHz mp4a mixed audio of all participants |
-| audioOnly| mp3/wav | N/A | 16kHz mp3/wav mixed audio of all participants |
+| Content Type | Content Format | Channel Type | Video | Audio |
+| :-- | :- | :-- | :- | : |
+| audioVideo | mp4 | mixed | 1920x1080 8 FPS video of all participants in default tile arrangement | 16kHz mp4a mixed audio of all participants |
+| audioOnly| mp3/wav | mixed | N/A | 16kHz mp3/wav mixed audio of all participants |
+| audioOnly| wav | unmixed | N/A | 16kHz wav, 0-5 channels for each participant |
+## Channel types
+> [!NOTE]
+> **Unmixed audio-only** is still in a **Private Preview** and NOT enabled for Teams Interop meetings.
+
+| Channel type | Content format | Output | Scenario |
+|--|--|--|--|
+| Mixed audio-video | MP4 | Single file, single channel | Keeping records and meeting notes; coaching and training |
+| Mixed audio-only | MP3 (lossy) / WAV (lossless) | Single file, single channel | Compliance and adherence; coaching and training |
+| **Unmixed audio-only** | MP3/WAV | Single file, multiple channels (maximum of 6 channels for MP3 and 50 for WAV) | Quality assurance; analytics |
## Run-time Control APIs Run-time control APIs can be used to manage recording via internal business logic triggers, such as an application creating a group call and recording the conversation, or from a user-triggered action that tells the server application to start recording. Call Recording APIs are [Out-of-Call APIs](./call-automation-apis.md#out-of-call-apis), using the `serverCallId` to initiate recording. When creating a call, a `serverCallId` is returned via the `Microsoft.Communication.CallLegStateChanged` event after a call has been established. The `serverCallId` can be found in the `data.serverCallId` field. See our [Call Recording Quickstart Sample](../../quickstarts/voice-video-calling/call-recording-sample.md) to learn about retrieving the `serverCallId` from the Calling Client SDK. A `recordingOperationId` is returned when recording is started, which is then used for follow-on operations like pause and resume.
communication-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/overview.md
Azure Communication Services are cloud-based services with REST APIs and client library SDKs available to help you integrate communication into your applications. You can add communication to your applications without being an expert in underlying technologies such as media encoding or telephony. Azure Communication Service is available in multiple [Azure geographies](concepts/privacy.md) and Azure for government.
-> [!VIDEO https://www.youtube.com/embed/chMHVHLFcao]
+>[!VIDEO https://www.youtube.com/embed/chMHVHLFcao]
Azure Communication Services supports various communication formats:
communication-services Call Recording Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/call-recording-sample.md
This quickstart gets you started recording voice and video calls. This quickstart assumes you've already used the [Calling client SDK](get-started-with-video-calling.md) to build the end-user calling experience. Using the **Calling Server APIs and SDKs** you can enable and manage recordings.
+> [!NOTE]
+> **Unmixed audio-only** is still in a **Private Preview** and NOT enabled for Teams Interop meetings.
+ ::: zone pivot="programming-language-csharp" [!INCLUDE [Build Call Recording server sample with C#](./includes/call-recording-samples/recording-server-csharp.md)] ::: zone-end
container-apps Compare Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/compare-options.md
# Comparing Container Apps with other Azure container options There are many options for teams to build and deploy cloud native and containerized applications on Azure. This article will help you understand which scenarios and use cases are best suited for Azure Container Apps and how it compares to other container options on Azure including:
+- [Azure Container Apps](#azure-container-apps)
- [Azure App Service](#azure-app-service) - [Azure Container Instances](#azure-container-instances) - [Azure Kubernetes Service](#azure-kubernetes-service)
You can get started building your first container app [using the quickstarts](ge
### Azure Functions [Azure Functions](../azure-functions/functions-overview.md) is a serverless Functions-as-a-Service (FaaS) solution. It's optimized for running event-driven applications using the functions programming model. It shares many characteristics with Azure Container Apps around scale and integration with events, but optimized for ephemeral functions deployed as either code or containers. The Azure Functions programming model provides productivity benefits for teams looking to trigger the execution of your functions on events and bind to other data sources. When building FaaS-style functions, Azure Functions is the ideal option. The Azure Functions programming model is available as a base container image, making it portable to other container based compute platforms allowing teams to reuse code as environment requirements change.
-### Azure Spring Cloud
-[Azure Spring Cloud](../spring-cloud/overview.md) makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring Cloud applications so developers can focus on their code. Azure Spring Cloud provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization is predominantly Spring, Azure Spring Cloud is an ideal option.
+### Azure Spring Apps
+[Azure Spring Apps](../spring-cloud/overview.md) makes it easy to deploy Spring Boot microservice applications to Azure without any code changes. The service manages the infrastructure of Spring applications so developers can focus on their code. Azure Spring Apps provides lifecycle management using comprehensive monitoring and diagnostics, configuration management, service discovery, CI/CD integration, blue-green deployments, and more. If your team or organization is predominantly Spring, Azure Spring Apps is an ideal option.
### Azure Red Hat OpenShift [Azure Red Hat OpenShift](../openshift/intro-openshift.md) is jointly engineered, operated, and supported by Red Hat and Microsoft to provide an integrated product and support experience for running Kubernetes-powered OpenShift. With Azure Red Hat OpenShift, teams can choose their own registry, networking, storage, and CI/CD solutions, or use the built-in solutions for automated source code management, container and application builds, deployments, scaling, health management, and more from OpenShift. If your team or organization is using OpenShift, Azure Red Hat OpenShift is an ideal option.
container-apps Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/containers.md
The following example shows how to configure Azure Container Registry credential
} ```
+> [!NOTE]
+> Docker Hub [limits](https://docs.docker.com/docker-hub/download-rate-limit/) the number of Docker image downloads. When the limit is reached, containers in your app will fail to start. We recommend using a registry with sufficient limits, such as [Azure Container Registry](../container-registry/container-registry-intro.md).
+ ### Managed identity with Azure Container Registry You can use an Azure managed identity to authenticate with Azure Container Registry instead of using a username and password. To use a managed identity:
You can use an Azure managed identity to authenticate with Azure Container Regis
- Assign a system-assigned or user-assigned managed identity to your container app. - Specify the managed identity you want to use for each registry.
+> [!NOTE]
+> You will need to [enable an admin user account](../container-registry/container-registry-authentication.md) in your Azure
+> Container Registry even when you use an Azure managed identity. You won't need to use the ACR admin credentials to pull images into Azure
+> Container Apps; however, it is a prerequisite that the ACR admin user account is enabled in the registry Azure Container Apps pulls from.
+ When assigning a managed identity to a registry, use the managed identity resource ID for a user-assigned identity, or "system" for the system-assigned identity. For more information about using managed identities, see [Managed identities in Azure Container Apps Preview](managed-identity.md). ```json
container-apps Get Started Existing Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/get-started-existing-container-image.md
The example shown in this article demonstrates how to use a custom container ima
- Define environment variables - Set container CPU or memory requirements - Enable and configure Dapr-- Enable internal or internal ingress
+- Enable external or internal ingress
- Provide minimum and maximum replica values or scale rules For details on how to provide values for any of these parameters to the `create` command, run `az containerapp create --help`.
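For instance, a minimal sketch of a `create` call that enables external ingress and sets replica bounds might look like the following; the app, resource group, environment, and image names are placeholders rather than values from this article.

```azurecli
# Placeholder names throughout; adjust to your own resource group, environment, and image.
az containerapp create \
  --name my-container-app \
  --resource-group my-resource-group \
  --environment my-containerapps-environment \
  --image mcr.microsoft.com/azuredocs/containerapps-helloworld:latest \
  --ingress external \
  --target-port 80 \
  --min-replicas 1 \
  --max-replicas 5
```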
container-apps Github Actions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/github-actions-cli.md
When adding or removing a GitHub Actions integration, you can authenticate by ei
- To pass a personal access token, use the `--token` parameter and provide a token value. - If you choose to use interactive login, use the `--login-with-github` parameter with no value.
+> [!Note]
+> Your GitHub personal access token needs to have the `workflow` scope selected.
+ ## Add The `containerapp github-action add` command creates a GitHub Actions integration with your container app.
+> [!Note]
+> Before you proceed with the example below, you must have your first container app already deployed.
+ The first time you attach GitHub Actions to your container app, you need to provide a service principal context. The following command shows you how to create a service principal. # [Bash](#tab/bash)
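As a rough sketch (not the article's exact command), a service principal scoped to the resource group that holds your container app can be created like this; the name and IDs are placeholders:

```azurecli
# Placeholder name and scope; the output includes the client ID, client secret, and tenant ID
# values that the github-action add command's service principal parameters expect.
az ad sp create-for-rbac \
  --name my-containerapp-github-sp \
  --role contributor \
  --scopes /subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>
```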
The following example shows you how to add an integration while using a personal
```azurecli az containerapp github-action add \ --repo-url "https://github.com/<OWNER>/<REPOSITORY_NAME>" \
- --docker-file-path "./dockerfile" \
+ --context-path "./dockerfile" \
--branch <BRANCH_NAME> \ --name <CONTAINER_APP_NAME> \ --resource-group <RESOURCE_GROUP> \
az containerapp github-action add \
```azurecli az containerapp github-action add ` --repo-url "https://github.com/<OWNER>/<REPOSITORY_NAME>" `
- --docker-file-path "./dockerfile" `
+ --context-path "./dockerfile" `
--branch <BRANCH_NAME> ` --name <CONTAINER_APP_NAME> ` --resource-group <RESOURCE_GROUP> `
container-apps Microservices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/microservices.md
-# Microservices with Azure Containers Apps
+# Microservices with Azure Container Apps
[Microservice architectures](https://azure.microsoft.com/solutions/microservice-applications/#overview) allow you to independently develop, upgrade, version, and scale core areas of functionality in an overall system. Azure Container Apps provides the foundation for deploying microservices featuring:
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
First, extract identifiable information from the environment.
# [Bash](#tab/bash) ```bash
-ENVIRONMENT_DEFAULT_DOMAIN=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query defaultDomain --out json | tr -d '"'`
+ENVIRONMENT_DEFAULT_DOMAIN=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query properties.defaultDomain --out json | tr -d '"'`
``` ```bash
-ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query staticIp --out json | tr -d '"'`
+ENVIRONMENT_STATIC_IP=`az containerapp env show --name ${CONTAINERAPPS_ENVIRONMENT} --resource-group ${RESOURCE_GROUP} --query properties.staticIp --out json | tr -d '"'`
``` ```bash
VNET_ID=`az network vnet show --resource-group ${RESOURCE_GROUP} --name ${VNET_N
# [PowerShell](#tab/powershell) ```powershell
-$ENVIRONMENT_DEFAULT_DOMAIN=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query defaultDomain -o tsv)
+$ENVIRONMENT_DEFAULT_DOMAIN=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.defaultDomain -o tsv)
``` ```powershell
-$ENVIRONMENT_STATIC_IP=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query staticIp -o tsv)
+$ENVIRONMENT_STATIC_IP=(az containerapp env show --name $CONTAINERAPPS_ENVIRONMENT --resource-group $RESOURCE_GROUP --query properties.staticIp -o tsv)
``` ```powershell
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)] > [!NOTE]
-> Network address prefixes requires a CDIR range of `/23`.
+> Network address prefixes require a CIDR range of `/23`.
7. Select the **Networking** tab to create a VNET. 8. Select **Yes** next to *Use your own virtual network*.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
az login
az account set --subscription <your subscription ID> ``` 7. Enable the RBAC capability on your existing API for MongoDB database account.
+Get your existing capabilities. Capabilities are account features. Some are optional and some can't be changed.
```powershell
-az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongoRoleBasedAccessControl
+az cosmosdb show -n <account_name> -g <azure_resource_group>
```
-or create a new database account with the RBAC capability set to true. Your subscription must be allow-listed in order to create an account with the EnableMongoRoleBasedAccessControl capability.
+You should see a capability section similar to this:
+```json
+"capabilities": [
+  {
+    "name": "EnableMongo"
+  },
+  {
+    "name": "DisableRateLimitingResponses"
+  }
+]
+```
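If you only want the capability names, a standard JMESPath query on the same `show` command should also work (a convenience sketch, not one of the article's numbered steps):

```powershell
az cosmosdb show -n <account_name> -g <azure_resource_group> --query "capabilities[].name" -o tsv
```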
+Copy the existing capabilities and add the RBAC capability (EnableMongoRoleBasedAccessControl) to the list:
+```powershell
+az cosmosdb update -n <account_name> -g <azure_resource_group> --capabilities EnableMongoRoleBasedAccessControl EnableMongo DisableRateLimitingResponses
+```
+If you prefer a new database account instead, create a new database account with the RBAC capability set to true. Your subscription must be allow-listed in order to create an account with the EnableMongoRoleBasedAccessControl capability.
```powershell az cosmosdb create -n <account_name> -g <azure_resource_group> --kind MongoDB --capabilities EnableMongoRoleBasedAccessControl ```
data-factory Concepts Nested Activities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-nested-activities.md
+
+ Title: Nested activities
+
+description: Learn about nested activities in Azure Data Factory and Azure Synapse Analytics.
+Last updated : 06/30/2021
+# Nested activities in Azure Data Factory and Azure Synapse Analytics
++
+This article helps you understand nested activities in Azure Data Factory and Azure Synapse Analytics and how to use them, limitations, and best practices.
+
+## Overview
+A Data Factory or Synapse Workspace pipeline can contain control flow activities that allow for other activities to be contained inside of them. Think of these nested activities as containers that hold one or more other activities that can execute depending on the top level control flow activity.
+
+See the following example of an If Condition activity that contains one inner activity.
++
+## Control flow activities
+The following control flow activities support nested activities:
+
+Control activity | Description
+- | --
+[For Each](control-flow-for-each-activity.md) | ForEach Activity defines a repeating control flow in your pipeline. This activity is used to iterate over a collection and execute specified activities in a loop. The loop implementation of this activity is similar to the Foreach looping structure in programming languages.
+[If Condition Activity](control-flow-if-condition-activity.md) | The If Condition activity can be used to branch based on a condition that evaluates to true or false. The If Condition activity provides the same functionality that an if statement provides in programming languages. It evaluates a set of activities when the condition evaluates to `true` and another set of activities when the condition evaluates to `false`.
+[Until Activity](control-flow-until-activity.md) | Implements a Do-Until loop that is similar to the Do-Until looping structure in programming languages. It executes a set of activities in a loop until the condition associated with the activity evaluates to true. You can specify a timeout value for the Until activity.
+[Switch Activity](control-flow-switch-activity.md) | The Switch activity provides the same functionality that a switch statement provides in programming languages. It evaluates a set of activities corresponding to a case that matches the condition evaluation.
+
+## Navigating nested activities
+There are two primary ways to navigate to the contained activities in a nested activity.
+
+1. Each control flow activity that supports nested activities has an activity tab. Selecting the activity tab will then give you a pencil icon you can select to drill down into the inner activities panel.
+
+2. From the activity on the pipeline canvas, you can select the pencil icon to drill down into the inner activities panel. Additionally, the ForEach and Until activities support double-clicking on the activity to drill down to the inner activities panel.
+
+Your pipeline canvas will then switch to the context of the inner activity container that you selected. There will also be a breadcrumb trail at the top you can select to navigate back to the parent pipeline.
+
+## Nested activity embedding limitations
+Activities that support nesting (ForEach, Until, Switch, and If Condition) can't be embedded inside of another nested activity. Essentially, the current support for nesting is one level deep. See the best practices section below on how to use other pipeline activities to enable this scenario. In addition, the
+[Validation Activity](control-flow-validation-activity.md) can't be placed inside of a nested activity.
+
+## Best practices for multiple levels of nested activities
+In order to have logic that supports nesting more than one level deep, you can use the [Execute Pipeline Activity](control-flow-execute-pipeline-activity.md) inside of your nested activity to call another pipeline that can then have another level of nested activities. A common use case for this pattern is with the ForEach loop, where you need to loop further based on logic in the inner activities.
+
+An example of this pattern would be a file system that has a list of folders, where each folder contains multiple files you want to process. You would generally accomplish this pattern by performing the following steps.
+1. Using a [Get Metadata Activity](control-flow-get-metadata-activity.md) first to get a list of just the folders.
+2. Pass the result of the Get Metadata activity into the Items list of a ForEach activity. Each iteration then represents a single folder to process.
+3. In the inner activities panel of the ForEach activity, use another Get Metadata activity to get a list of files inside of the folder.
+4. Call an Execute Pipeline activity that has an array parameter and pass it an array of those filenames.
+5. In the child pipeline, you could then use another nested activity (such as ForEach) with the passed in array list to iterate over the files and perform one or more sets of inner activities.
+
+The parent pipeline would look similar to the below example.
+[ ![Screenshot showing an example parent pipeline calling a child pipeline in a ForEach loop.](media/concepts-pipelines-activities/nested-activity-execute-pipeline.png) ](media/concepts-pipelines-activities/nested-activity-execute-pipeline.png#lightbox)
+
+The child pipeline would look similar to the below example.
+ :::image type="content" source="media/concepts-pipelines-activities/nested-activity-execute-child-pipeline.png" alt-text="Screenshot showing an example child pipeline with a ForEach loop.":::
+
+## Next steps
+
+See the following tutorials for step-by-step instructions for creating pipelines and datasets.
+
+- [Tutorial: Copy multiple tables in bulk by using Azure Data Factory in the Azure portal](tutorial-bulk-copy-portal.md)
+- [Tutorial: Incrementally load data from a source data store to a destination data store](tutorial-incremental-copy-overview.md)
data-factory Sap Change Data Capture Data Partitioning Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-partitioning-template.md
To auto-generate ADF pipeline from SAP data partitioning template, complete the
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-pipeline-from-template.png" alt-text="Screenshot of the Azure Data Factory resources tab with the Pipeline from template menu highlighted.":::
-1. Select SAP data partitioning template.
+1. Select the **Partition SAP data to extract and load into Azure Data Lake Store Gen2 in parallel** template.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-template-selection.png" alt-text="Screenshot of the template gallery with the SAP data partitioning template highlighted.":::
To auto-generate ADF pipeline from SAP data partitioning template, complete the
## Next steps
-[Auto-generate a pipeline from the SAP data replication template](sap-change-data-capture-data-replication-template.md).
+[Auto-generate a pipeline from the SAP data replication template](sap-change-data-capture-data-replication-template.md).
data-factory Sap Change Data Capture Data Replication Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-data-replication-template.md
This topic describes how to use the SAP data replication template for SAP change
1. Create a new pipeline from template.
-1. Select SAP data replication template.
+1. Select the **Replicate SAP data to Azure Synapse Analytics and persist raw data in Azure Data Lake Store Gen2** template.
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-data-replication-template.png" alt-text="Screenshot of the template gallery with the SAP data replication template highlighted.":::
This topic describes how to use the SAP data replication template for SAP change
1. Select the **Save all** button and you can now run SAP data replication pipeline.
-1. If you want to replicate SAP data to ADLS Gen2 in Delta format, complete the same steps as above, except using the Gen2 template.
+1. If you want to replicate SAP data to ADLS Gen2 in Delta format, complete the same steps as above, except using the **Replicate SAP data to Azure Data Lake Store Gen2 in Delta format and persist raw data in CSV format** template.
ADF copy activity runs on SHIR to extract raw data (full + deltas) from SAP systems and load it into ADLS Gen2 where it's persisted as CSV files, archiving/preserving historical changes. The files can be found in the _sapcdc_ container under the _deltachange/&lt;your pipeline name&gt;/&lt;your pipeline run timestamp&gt;_ folder path. The **Extraction mode** property of ADF copy activity is set to _Delta_. The **Subscriber process** property of ADF copy activity is parameterized.
data-factory Sap Change Data Capture Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/sap-change-data-capture-management.md
To monitor data extractions on SAP systems, complete the following steps:
:::image type="content" source="media/sap-change-data-capture-solution/sap-cdc-delete-queue-subscriptions.png" alt-text="Screenshot of the SAP ODQMON tool with the delete button highlighted for a particular queue subscription.":::
-## Current limitations
-
-The following are the current limitations of SAP CDC solution in ADF:
--- Resetting and deleting ODQ subscriptions from ADF aren't supported for now.-- SAP hierarchies aren't supported for now.- ## Troubleshooting delta change The Azure Data Factory ODP connector reads delta changes from the ODP framework, which itself provides them in tables called Operational Delta Queues (ODQs).
Based on the timestamp in the first row, find the line corresponding to the copy
In this case, we recommend consulting with the team responsible for your SAP system.
+## Current limitations
+
+The following are the current limitations of SAP CDC solution in ADF:
+
+- Resetting and deleting ODQ subscriptions from ADF aren't supported for now.
+- SAP hierarchies aren't supported for now.
+
databox-online Azure Stack Edge Gpu Deploy Gpu Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-gpu-virtual-machine.md
Previously updated : 05/26/2022 Last updated : 06/24/2022 #Customer intent: As an IT admin, I want the flexibility to deploy a single GPU virtual machine (VM) quickly in the portal or use templates to deploy and manage multiple GPU VMs efficiently on my Azure Stack Edge Pro GPU device.
# Deploy GPU VMs on your Azure Stack Edge Pro GPU device
-This article how to create a GPU VM in the Azure portal or by using the Azure Resource Manager templates.
+This article describes how to create a GPU VM in the Azure portal or by using the Azure Resource Manager templates.
Use the Azure portal to quickly deploy a single GPU VM. You can install the GPU extension during or after VM creation. Or use Azure Resource Manager templates to efficiently deploy and manage multiple GPU VMs.
Follow these steps when deploying GPU VMs on your device via the Azure portal:
1. To create GPU VMs, follow all the steps in [Deploy VM on your Azure Stack Edge using Azure portal](azure-stack-edge-gpu-deploy-virtual-machine-portal.md), with these configuration requirements:
- - On the **Basics** tab, select a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+ - On the **Basics** tab, select a [VM size from N-series, optimized for GPUs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). Based on the GPU model on your device, Nvidia T4 or Nvidia A2, the dropdown list will display the corresponding supported GPU VM sizes.
![Screenshot of Basics tab for "Add a virtual machine" in Azure Stack Edge. Size option, with a supported VM size for GPU VMs, is highlighted.](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/basics-vm-size-for-gpu.png)
- - To install the GPU extension during deployment, on the **Advanced** tab, choose **Select an extension to install**. Then select a GPU extension to install. GPU extensions are only available for a virtual machine with a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+ - To install the GPU extension during deployment, on the **Advanced** tab, choose **Select an extension to install**. Then select a GPU extension to install. GPU extensions are only available for a virtual machine with a [VM size from N-series](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized).
> [!NOTE] > If you're using a Red Hat image, you'll need to install the GPU extension after VM deployment. Follow the steps in [Install GPU extension](azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md).
Follow these steps when deploying GPU VMs on your device using Azure Resource Ma
1. To create GPU VMs, follow all the steps in [Deploy VM on your Azure Stack Edge using templates](azure-stack-edge-gpu-deploy-virtual-machine-templates.md), with these configuration requirements:
- - When specifying GPU VM sizes, make sure to use the NCasT4-v3-series in the `CreateVM.parameters.json`, which are supported for GPU VMs. For more information, see [Supported VM sizes for GPU VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+ - When specifying GPU VM sizes, make sure to use the NCasT4-v3-series in the `CreateVM.parameters.json`, which are supported for GPU VMs. For more information, see [Supported VM sizes for GPU VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized).
```json "vmSize": {
If you didn't install the GPU extension when you created the VM, follow these st
1. In **Details**, select **+ Add extension**. Then select a GPU extension to install.
- GPU extensions are only available for a virtual machine with a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview). If you prefer, you can [install the GPU extension after deployment](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#install-gpu-extension-after-deployment).
+ GPU extensions are only available for a virtual machine with a [VM size from N-series](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). If you prefer, you can [install the GPU extension after deployment](azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#install-gpu-extension-after-deployment).
![Illustration showing 2 steps to use the "Plus Add Extension" button on the virtual machine "Details" pane to add a GPU extension to a VM on an Azure Stack Edge device.](media/azure-stack-edge-gpu-deploy-gpu-virtual-machine/add-extension-after-deployment-02.png)
databox-online Azure Stack Edge Gpu Deploy Sample Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-sample-module.md
Previously updated : 02/22/2021 Last updated : 06/28/2022 # Deploy a GPU enabled IoT module on Azure Stack Edge Pro GPU device This article describes how to deploy a GPU enabled IoT Edge module on your Azure Stack Edge Pro GPU device.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Install Gpu Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-install-gpu-extension.md
Previously updated : 05/26/2022 Last updated : 06/28/2022 #Customer intent: As an IT admin, I need to understand how install GPU extension on GPU virtual machines (VMs) on my Azure Stack Edge Pro GPU device. # Install GPU extension on VMs for your Azure Stack Edge Pro GPU device This article describes how to install GPU driver extension to install appropriate Nvidia drivers on the GPU VMs running on your Azure Stack Edge device. The article covers installation steps for installing a GPU extension using Azure Resource Manager templates on both Windows and Linux VMs.
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-portal.md
Follow these steps to create a VM after you've created a VM image.
|Virtual machine name | Enter a name for the new virtual machine. | |Edge resource group | Create a new resource group for all the resources associated with the VM. | |Image | Select from the VM images available on the device. |
- |Size | Choose from the [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md).<br>For a GPU VM, select a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview). |
+ |Size | Choose from the [Supported VM sizes](azure-stack-edge-gpu-virtual-machine-sizes.md).<br>For a GPU VM, select a [VM size from NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). |
|Username | Use the default username **azureuser** for the admin to sign in to the VM. | |Authentication type | Choose from an SSH public key or a user-defined password. | |SSH public key | Displayed when you select the **SSH public key** authentication type. Paste in the SSH public key. |
databox-online Azure Stack Edge Gpu Overview Gpu Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-overview-gpu-virtual-machines.md
Previously updated : 07/13/2021 Last updated : 06/28/2022 #Customer intent: As an IT admin, I need to understand how to deploy and manage GPU-accelerated VM workloads on my Azure Stack Edge Pro GPU devices. # GPU virtual machines for Azure Stack Edge Pro GPU devices GPU-accelerated workloads on an Azure Stack Edge Pro GPU device require a GPU virtual machine. This article provides an overview of GPU VMs, including supported OSs, GPU drivers, and VM sizes. Deployment options for GPU VMs used with Kubernetes clusters also are discussed.
databox-online Azure Stack Edge Gpu Troubleshoot Virtual Machine Gpu Extension Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-troubleshoot-virtual-machine-gpu-extension-installation.md
Previously updated : 05/26/2022 Last updated : 06/28/2022 # Troubleshoot GPU extension issues for GPU VMs on Azure Stack Edge Pro GPU This article gives guidance for resolving the most common issues that cause installation of the GPU extension on a GPU VM to fail on an Azure Stack Edge Pro GPU device.
If the installation failed during the package download, that error indicates the
**Error description:** A GPU VM must be either Standard_NC4as_T4_v3 or Standard_NC8as_T4_v3 size. If any other VM size is used, the GPU extension will fail to be attached.
-**Suggested solution:** Create a VM with the Standard_NC4as_T4_v3 or Standard_NC8as_T4_v3 VM size. For more information, see [Supported VM sizes for GPU VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview). For information about specifying the size, see [Create GPU VMs](./azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms).
+**Suggested solution:** Create a VM with the Standard_NC4as_T4_v3 or Standard_NC8as_T4_v3 VM size. For more information, see [Supported VM sizes for GPU VMs](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized). For information about specifying the size, see [Create GPU VMs](./azure-stack-edge-gpu-deploy-gpu-virtual-machine.md#create-gpu-vms).
## Image OS is not supported
databox-online Azure Stack Edge Gpu Virtual Machine Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-virtual-machine-overview.md
To figure out the size and the number of VMs that you can deploy on your device,
For the usable compute and memory on your device, see the [Compute and memory specifications](azure-stack-edge-gpu-technical-specifications-compliance.md#compute-and-memory-specifications) for your device model.
-For a GPU virtual machine, you must use a [VM size from the NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#ncast4_v3-series-preview).
+For a GPU virtual machine, you must use a [VM size from the NCasT4-v3-series](azure-stack-edge-gpu-virtual-machine-sizes.md#n-series-gpu-optimized).
### VM limits
databox-online Azure Stack Edge Pro 2 Deploy Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md
Before you start cabling your device, you need the following things:
- Your Azure Stack Edge Pro 2 physical device, unpacked, and rack mounted. - One power cable (included in the device package). - At least one 1-GbE RJ-45 network cable to connect to the Port 1. Port 1 and Port 2 the two 10/1-GbE network interfaces on your device.-- One 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. Here is an example of the QSFP28 DAC connector:
+- One 100-GbE QSFP28 passive direct attached cable (Microsoft validated) for each data network interface Port 3 and Port 4 to be configured. Here is an example of the QSFP28 DAC connector:
![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
Before you start cabling your device, you need the following things:
- One power cable for each device node (included in the device package). - Access to one power distribution unit for each device node. - At least two 1-GbE RJ-45 network cables per device to connect to Port 1 and Port2. These are the two 10/1-GbE network interfaces on your device. -- A 100-GbE QSFP28 passive direct attached cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured on each device. The total number needed would depend on the network topology you will deploy. Here is an example QSFP28 DAC connector:
+- A 100-GbE QSFP28 passive direct attached cable (Microsoft validated) for each data network interface Port 3 and Port 4 to be configured on each device. The total number needed would depend on the network topology you will deploy. Here is an example QSFP28 DAC connector:
![Example of a QSFP28 DAC connector](./media/azure-stack-edge-pro-2-deploy-install/qsfp28-dac-connector.png)
Before you start cabling your device, you need the following things:
### Device front panel
-The front panel on Azure Stack Edge Pro 2 device:
+On your device:
-- The front panel has disk drives and a power button.
+- The front panel has disk drives and a power button. The front panel has:
- Has six disk slots in the front of your device.
- - Slots 0 to Slot 3 contain data disks. Slots 4 and 5 are empty.
+ - Has 2, 4, or 6 data disks in the 6 available slots depending on the specific hardware configuration.
![Disks and power button on the front plane of a device](./media/azure-stack-edge-pro-2-deploy-install/front-plane-labeled-1.png) ### Device back plane -- The back plane of Azure Stack Edge Pro 2 device has:
+On your device:
- ![Ports on the back plane of a device](./media/azure-stack-edge-pro-2-deploy-install/backplane-ports-1.png)
+- The back plane has:
- Four network interfaces: - Two 10/1-Gbps interfaces, Port 1 and Port 2.
- - Two 100-Gbps interfaces, PORT 3 and PORT 4.
+ - Two 100-Gbps interfaces, Port 3 and Port 4.
- A baseboard management controller (BMC).
The front panel on Azure Stack Edge Pro 2 device:
- Two Wi-Fi Sub miniature version A (SMA) connectors located on the faceplate of PCIe card slot located below Port 3 and Port 4. The Wi-Fi antennas are installed on these connectors.
+ - Two, one, or no Graphical Processing Units (GPUs).
+
+ ![Diagram that shows ports on the back plane of a device.](./media/azure-stack-edge-pro-2-deploy-install/backplane-ports-1.png)
### Power cabling
databox-online Azure Stack Edge Pro 2 Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-overview.md
Previously updated : 03/04/2022 Last updated : 06/24/2022 #Customer intent: As an IT admin, I need to understand what Azure Stack Edge Pro 2 is and how it works so I can use it to process and transform data before sending to Azure.
Azure Stack Edge Pro 2 has the following capabilities:
|Capability |Description | |||
-|Accelerated AI inferencing| Enabled by the compute acceleration card. Depending on your compute needs, you may choose a model that comes with or without Graphical Processing Units (GPUs). <br> For more information, see [GPU sharing on your Azure Stack Edge device](azure-stack-edge-gpu-sharing.md).|
+|Accelerated AI inferencing| Enabled by the compute acceleration card. Depending on your compute needs, you may choose a model that comes with one, two or no Graphical Processing Units (GPUs). <br> For more information, see [Technical specifications for Azure Stack Edge Pro 2](azure-stack-edge-pro-2-technical-specifications-compliance.md).|
|Edge computing |Supports VM and containerized workloads to allow analysis, processing, and filtering of data. <br>For information on VM workloads, see [VM overview on Azure Stack Edge](azure-stack-edge-gpu-virtual-machine-overview.md).<br>For containerized workloads, see [Kubernetes overview on Azure Stack Edge](azure-stack-edge-gpu-kubernetes-overview.md)</li></ul> | |Data access | Direct data access from Azure Storage Blobs and Azure Files using cloud APIs for additional data processing in the cloud. Local cache on the device is used for fast access of most recently used files.| |Cloud-managed |Device and service are managed via the Azure portal.|
databox-online Azure Stack Edge Pro 2 Safety https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-safety.md
Previously updated : 03/02/2022 Last updated : 06/24/2022
This equipment is designed to operate in the following environment:
* Relative humidity specifications * Storage: 5% to 95% relative humidity * Operating: 5% to 85% relative humidity
+ * For models with GPU(s), derate allowable max operating temperature by 1°C/210m (2.6°F/1000ft) above 950m (3,117ft).
* Maximum altitude specifications * Operating: 3,050 meters (10,000 feet) * Storage: 9,150 meters (30,000 feet)
databox-online Azure Stack Edge Pro 2 Technical Specifications Compliance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-technical-specifications-compliance.md
Previously updated : 03/06/2022 Last updated : 06/17/2022
The hardware components of your Azure Stack Edge Pro 2 adhere to the technical s
## Compute and memory specifications
+# [Model 64G2T](#tab/sku-a)
The Azure Stack Edge Pro 2 device has the following specifications for compute and memory: | Specification | Value |
The Azure Stack Edge Pro 2 device has the following specifications for compute a
| CPU type | Intel® Xeon ® Gold 6209U CPU @ 2.10 GHz (Cascade Lake) CPU| | CPU: raw | 20 total cores, 40 total vCPUs | | CPU: usable | 32 vCPUs |
-| Memory type | Model 64G2T: 64 GB |
-| Memory: raw | Model 64G2T: 64 GB RAM |
-| Memory: usable | Model 64G2T: 51 GB RAM |
+| Memory type | 2 x 32 GB DDR4-2933 RDIMM |
+| Memory: raw | 64 GB RAM |
+| Memory: usable | 51 GB RAM |
+
+# [Model 128G4T1GPU](#tab/sku-b)
+
+| Specification | Value |
+|-|--|
+| CPU type | Intel® Xeon ® Gold 6209U CPU @ 2.10 GHz (Cascade Lake) CPU|
+| CPU: raw | 20 total cores, 40 total vCPUs |
+| CPU: usable | 32 vCPUs |
+| Memory type | 4 x 32 GB DDR4-2933 RDIMM |
+| Memory: raw | 128 GB RAM |
+| Memory: usable | 102 GB RAM |
+
+# [Model 256G6T2GPU](#tab/sku-c)
+
+| Specification | Value |
+|-|--|
+| CPU type | Intel® Xeon ® Gold 6209U CPU @ 2.10 GHz (Cascade Lake) CPU|
+| CPU: raw | 20 total cores, 40 total vCPUs |
+| CPU: usable | 32 vCPUs |
+| Memory type | 4 x 64 GB DDR4-2933 RDIMM |
+| Memory: raw | 256 GB RAM |
+| Memory: usable | 204 GB RAM |
++ ## Power supply unit specifications
This device has one power supply unit (PSU) with high-performance fans. The foll
| Voltage range selection | 200-240V AC, 47-63 Hz, 3.4 A | | Hot pluggable | No | - ## Network interface specifications Your Azure Stack Edge Pro 2 device has four network interfaces, Port 1 - Port 4.
Here are the details for the Mellanox card:
## Storage specifications
+# [Model 64G2T](#tab/sku-a)
+ The following table lists the storage capacity of the device. | Specification | Value |
The following table lists the storage capacity of the device.
| Boot disk capacity | 960 GB | | Number of data disks | 2 SATA SSDs | | Single data disk capacity | 960 GB |
-| Total capacity | Model 64G2T: 2 TB |
-| Total usable capacity | Model 64G2T: 720 GB |
+| Total capacity | 2 TB |
+| Total usable capacity | 720 GB |
+| RAID configuration | [Storage Spaces Direct with mirroring](/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance#mirroring) |
+
+# [Model 128G4T1GPU](#tab/sku-b)
+
+| Specification | Value |
+|-|--|
+| Boot disk | 1 NVMe SSD |
+| Boot disk capacity | 960 GB |
+| Number of data disks | 4 SATA SSDs |
+| Single data disk capacity | 960 GB |
+| Total capacity | 4 TB |
+| Total usable capacity | 1.6 TB |
| RAID configuration | [Storage Spaces Direct with mirroring](/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance#mirroring) |
+# [Model 256G6T2GPU](#tab/sku-c)
+
+| Specification | Value |
+|-|--|
+| Boot disk | 1 NVMe SSD |
+| Boot disk capacity | 960 GB |
+| Number of data disks | 6 SATA SSDs |
+| Single data disk capacity | 960 GB |
+| Total capacity | 6 TB |
+| Total usable capacity | 2.5 TB |
+| RAID configuration | [Storage Spaces Direct with mirroring](/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance#mirroring) |
++ ## Enclosure dimensions and weight specifications
-The following tables list the various enclosure specifications for dimensions and weight.
+The following tables list the various enclosure specifications for dimensions and weight.
### Enclosure dimensions
-The following table lists the dimensions of the 2U device enclosure in millimeters and inches.
+The Azure Stack Edge Pro 2 is designed to fit in a standard 19" equipment rack and is two rack units high (2U).
+
+The enclosure dimensions are identical across all models of Azure Stack Edge Pro 2.
+
+The following table lists the dimensions of the 2U device enclosure in millimeters and inches.
| Enclosure | Millimeters | Inches | |-||-|
-| Height | 87.0 | 3.425 |
+| Height | 87.0 | 3.43 |
| Width | 482.6 | 19.00 | | Depth | 430.5 | 16.95 | + The following table lists the dimensions of the shipping package in millimeters and inches. | Package | Millimeters | Inches |
The following table lists the dimensions of the shipping package in millimeters
| Width | 768.4 | 30.25 | | Depth | 616.0 | 24.25 | + ### Enclosure weight
+# [Model 642GT](#tab/sku-a)
+ | Line # | Hardware | Weight lbs | |--|||
-| 1 | Model 642GT | 21 |
+| 1 | Model 642GT | 21.0 |
| | | | | 2 | Shipping weight, with 4-post mount | 35.3 | | 3 | Model 642GT install handling, 4-post (without bezel and with inner rails attached) | 20.4 |
-| 4 | 4-post in box | 6.28 |
| | | |
-| 5 | Shipping weight, with 2-post mount | 32.1 |
-| 6 | Model 642GT install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
+| 4 | Shipping weight, with 2-post mount | 32.1 |
+| 5 | Model 642GT install handling, 2-post (without bezel and with inner rails attached) | 20.4 |
+| | | |
+| 6 | Shipping weight with wall mount | 31.1 |
+| 7 | Model 642GT install handling without bezel | 19.8 |
+| | | |
+| 4 | 4-post in box | 6.28 |
| 7 | 2-post in box | 3.08 |
+| 10 | Wallmount as packaged | 2.16 |
+
+# [Model 128G4T1GPU](#tab/sku-b)
+
+| Line # | Hardware | Weight lbs |
+|--|||
+| 1 | Model 128G4T1GPU | 21.9 |
+| | | |
+| 2 | Shipping weight, with 4-post mount | 36.2 |
+| 3 | Model 128G4T1GPU install handling, 4-post (without bezel and with inner rails attached) | 21.3 |
| | | |
-| 8 | Shipping weight with wall mount | 31.1 |
-| 9 | Model 642GT install handling without bezel | 19.8 |
-| 10 | Wallmount as packaged | 2.16 |
+| 4 | Shipping weight, with 2-post mount | 33.0 |
+| 5 | Model 128G4T1GPU install handling, 2-post (without bezel and with inner rails attached) | 21.3 |
+| | | |
+| 6 | Shipping weight with wall mount | 32.0 |
+| 7 | Model 128G4T1GPU install handling without bezel | 20.7 |
+| | | |
+| 8 | 4-post in box | 6.28 |
+| 9 | 2-post in box | 3.08 |
+| 10 | Wallmount as packaged | 2.16 |
+
+# [Model 256G6T2GPU](#tab/sku-c)
+
+| Line # | Hardware | Weight lbs |
+|--|--||
+| 1 | Model 256G6T2GPU | 22.9 |
+| | | |
+| 2 | Shipping weight, with 4-post mount | 37.1 |
+| 3 | Model 256G6T2GPU install handling, 4-post (without bezel and with inner rails attached)|22.3 |
+| | | |
+| 4 | Shipping weight, with 2-post mount | 33.9 |
+| 5 | Model 256G6T2GPU install handling, 2-post (without bezel and with inner rails attached) | 22.3 |
+| | | |
+| 6 | Shipping weight with wall mount | 33.0 |
+| 7 | Model 256G6T2GPU install handling without bezel | 21.7 |
+| | | |
+| 8 | 4-post in box | 6.28 |
+| 9 | 2-post in box | 3.08 |
+| 10 | Wallmount as packaged | 2.16 |
+ ## Next steps
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-data-ingress-egress.md
description: Learn about the data ingress and egress requirements for integrating Azure Digital Twins with other services. Previously updated : 06/01/2022 Last updated : 07/01/2022
You can also learn how to connect Azure Digital Twins to a Logic Apps trigger in
You may want to send Azure Digital Twins data to other downstream services for storage or additional processing.
-Digital twin data can be sent to most Azure services using *endpoints*. If your destination is [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), you can use *data history* instead to automatically historize twin property updates to an Azure Data Explorer cluster, where they can be queried as time series data. The rest of this section describes these capabilities in more detail.
+Digital twin data can be sent to most Azure services using *endpoints*. If your destination is [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), you can use *data history* instead to automatically send twin property updates to an Azure Data Explorer cluster, where they are stored as historical data and can be queried as such. The rest of this section describes these capabilities in more detail.
>[!NOTE] >Azure Digital Twins implements *at least once* delivery for data emitted to egress services.
For detailed instructions on how to send Azure Digital Twins data to Azure Maps,
### Data history
-To send twin data to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), set up a [data history connection](concepts-data-history.md) that automatically historizes digital twin property updates from your Azure Digital Twins instance to an Azure Data Explorer cluster. The data history connection requires an [event hub](../event-hubs/event-hubs-about.md), but doesn't require an explicit endpoint.
+To send twin data to [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), set up a [data history connection](concepts-data-history.md) that automatically stores digital twin property updates from your Azure Digital Twins instance in an Azure Data Explorer cluster as historical data. The data history connection requires an [event hub](../event-hubs/event-hubs-about.md), but doesn't require an explicit endpoint.
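A data history connection can also be scripted with the Azure CLI IoT extension. The following is a minimal sketch, not the authoritative procedure: it assumes the `az dt data-history connection create adx` command is available in your CLI version, the parameter names shown are from memory and may differ, and all resource names are placeholders. Check `az dt data-history connection create adx --help` before running it.

```azurecli-interactive
# Sketch: create a data history connection from an Azure Digital Twins instance
# to an Azure Data Explorer cluster. All names below are placeholders, and the
# parameter names are assumptions to verify against --help.
az dt data-history connection create adx \
    --dt-name <digital-twins-instance-name> \
    --cn <connection-name> \
    --adx-cluster-name <adx-cluster-name> \
    --adx-database-name <adx-database-name> \
    --eventhub <event-hub-name> \
    --eventhub-namespace <event-hub-namespace>
```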
-Once the data has been historized, you can query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md).
+Once historical data is being collected, you can query this data in Azure Data Explorer using the [Azure Digital Twins query plugin for Azure Data Explorer](concepts-data-explorer-plugin.md).
-You can also use data history in combination with [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) to aggregate data from disparate sources. One useful application of this is to combine information technology (IT) data from ERP or CRM systems (like Dynamics 365, SAP, or Salesforce) with operational technology (OT) data from IoT devices and production management systems. For an example that illustrates how a company might combine this data, see the following blog post: [Integrating IT and OT Data with Azure Digital Twins, Azure Data Explorer, and Azure Synapse](https://techcommunity.microsoft.com/t5/internet-of-things-blog/integrating-it-and-ot-data-with-azure-digital-twins-azure-data/ba-p/3401981).
+You can also use data history in combination with [Azure Synapse Analytics](../synapse-analytics/overview-what-is.md) to aggregate data from disparate sources. This can be useful in many scenarios. Here are two examples:
+* Combine information technology (IT) data from ERP or CRM systems (like Dynamics 365, SAP, or Salesforce) with operational technology (OT) data from IoT devices and production management systems. For an example that illustrates how a company might combine this data, see the following blog post: [Integrating IT and OT Data with Azure Digital Twins, Azure Data Explorer, and Azure Synapse](https://techcommunity.microsoft.com/t5/internet-of-things-blog/integrating-it-and-ot-data-with-azure-digital-twins-azure-data/ba-p/3401981).
+* Integrate with the Azure AI and Cognitive Services [Multivariate Anomaly Detector](/azure/cognitive-services/anomaly-detector/overview-multivariate), to quickly connect your Azure Digital Twins data with a downstream AI/machine learning solution that specializes in anomaly detection. The [Azure Digital Twins Multivariate Anomaly Detection Toolkit](/samples/azure-samples/digital-twins-mvad-integration/adt-mvad-integration/) is a sample project that provides a workflow for training multiple Multivariate Anomaly Detector models for several scenario analyses, based on historical digital twin data. It then leverages the trained models to detect abnormal operations and anomalies in modeled Azure Digital Twins environments, in near real-time.
## Next steps
event-hubs Event Hubs Dedicated Cluster Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-cluster-create-portal.md
Title: Create an Event Hubs dedicated cluster using the Azure portal description: In this quickstart, you learn how to create an Azure Event Hubs cluster using Azure portal. Previously updated : 02/10/2022 Last updated : 06/14/2022 # Quickstart: Create a dedicated Event Hubs cluster using Azure portal Event Hubs clusters offer single-tenant deployments for customers with the most demanding streaming needs. This offering has a guaranteed 99.99% SLA and is available only on our Dedicated pricing tier. An [Event Hubs cluster](event-hubs-dedicated-overview.md) can ingress millions of events per second with guaranteed capacity and subsecond latency. Namespaces and event hubs created within a cluster include all features of the premium offering and more, but without any ingress limits. The Dedicated offering also includes the popular [Event Hubs Capture](event-hubs-capture-overview.md) feature at no additional cost, allowing you to automatically batch and log data streams to [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md) or [Azure Data Lake Storage Gen 1](../data-lake-store/data-lake-store-overview.md).
-Dedicated clusters are provisioned and billed by **Capacity Units (CUs)**, a pre-allocated amount of CPU and memory resources. You can purchase 1, 2, 4, 8, 12, 16 or 20 CUs for each cluster. In this quickstart, we will walk you through creating a 1 CU Event Hubs cluster through the Azure portal.
+Dedicated clusters are provisioned and billed by **Capacity Units (CUs)**, a pre-allocated amount of CPU and memory resources. You can purchase 1, 2, 4, 8, 12, 16 or 20 CUs for each cluster. In this quickstart, we'll walk you through creating a 1 CU Event Hubs cluster through the Azure portal.
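If you'd rather script the cluster creation than use the portal, the Azure CLI has an equivalent command. The following is a hedged sketch: it assumes the `az eventhubs cluster` command group and its `--capacity` parameter are available in your CLI version, and the resource names are placeholders.

```azurecli-interactive
# Sketch: create a 1 CU dedicated Event Hubs cluster.
# Resource names are placeholders; verify parameters with
# "az eventhubs cluster create --help" before running.
az eventhubs cluster create \
    --resource-group <resource-group> \
    --name <cluster-name> \
    --location <region> \
    --capacity 1
```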
> [!NOTE] > This self-serve experience is currently available in preview on [Azure Portal](https://aka.ms/eventhubsclusterquickstart). If you have any questions about the Dedicated offering, please reach out to the [Event Hubs team](mailto:askeventhubs@microsoft.com).
To create a cluster in your resource group using the Azure portal, complete the
1. Enter a **name for the cluster**. The system immediately checks to see if the name is available. 2. Select the **subscription** in which you want to create the cluster. 3. Select the **resource group** in which you want to create the cluster.
- 4. Select a **location** for the cluster. If your preferred region is grayed out, it is temporarily out of capacity and you can submit a [support request](#submit-a-support-request) to the Event Hubs team.
- 5. Select the **Next: Tags** button at the bottom of the page. You may have to wait a few minutes for the system to fully provision the resources.
+ 1. Select the **Support Scaling** option to create a cluster that you can scale out or scale in yourself. For more information, see the [Scale Event Hubs dedicated cluster](#scale-event-hubs-dedicated-cluster) section later in this article.
+ 1. Select a **location** for the cluster. If your preferred region is grayed out or it's temporarily out of capacity, you can submit a [support request](#submit-a-support-request) to the Event Hubs team.
+ 1. Select the **Next: Tags** button at the bottom of the page. You may have to wait a few minutes for the system to fully provision the resources.
:::image type="content" source="./media/event-hubs-dedicated-cluster-create-portal/create-event-hubs-clusters-basics-page.png" alt-text="Image showing the Create Event Hubs Cluster - Basics page."::: 3. On the **Tags** page, configure the following:
To create a cluster in your resource group using the Azure portal, complete the
:::image type="content" source="./media/event-hubs-dedicated-cluster-create-portal/create-namespace-cluster-page.png" alt-text="Image showing the Create namespace in the cluster page."::: 3. Once your namespace is created, you can [create an event hub](event-hubs-create.md#create-an-event-hub) as you would normally create one within a namespace.
+## Scale Event Hubs dedicated cluster
-## Submit a support request
+For clusters created with the **Support Scaling** option set, use the following steps to scale out or scale in your cluster.
-If you wish to change the size of your cluster after creation or if your preferred region isn't available, submit a support request by following these steps:
+1. On the **Event Hubs Cluster** page for your dedicated cluster, select **Scale** on the left menu.
+
+ :::image type="content" source="./media/event-hubs-dedicated-cluster-create-portal/scale-page.png" alt-text="Screenshot showing the Scale tab of the Event Hubs Cluster page.":::
+1. Use the slider to increase (scale out) or decrease (scale in) capacity units assigned to the cluster.
+1. Then, select **Save** on the command bar.
+
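If you prefer to make the capacity change from the command line, a CLI equivalent might look like the following sketch. It assumes `az eventhubs cluster update` accepts a `--capacity` parameter in your CLI version, so verify with `az eventhubs cluster update --help` first; the resource names are placeholders.

```azurecli-interactive
# Sketch: change the number of capacity units on a scalable cluster.
# Names are placeholders; the --capacity parameter is an assumption to verify.
az eventhubs cluster update \
    --resource-group <resource-group> \
    --name <cluster-name> \
    --capacity 2
```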
+The **Scale** tab is available only for Event Hubs clusters created with the **Support Scaling** option selected. You don't see the **Scale** tab for clusters that were created before this feature was released or for clusters created without the **Support Scaling** option. If you want to change the size of a cluster that you can't scale yourself, or if your preferred region isn't available, submit a support request by using the following steps.
+
+### Submit a support request
1. In [Azure portal](https://portal.azure.com), select **Help + support** from the left menu. 2. Select **+ New support request** from the Support menu.
If you wish to change the size of your cluster after creation or if your preferr
2. For **Subscription**, select your subscription. 3. For **Service**, select **My services**, and then select **Event Hubs**. 4. For **Resource**, select your cluster if it exists already, otherwise select **General Question/Resource Not Available**.
- 5. For **Problem type**, select **Quota**.
+ 5. For **Problem type**, select **Quota or Configuration changes**.
6. For **Problem subtype**, select one of the following values from the drop-down list:
- 1. Select **Request for Dedicated SKU** to request for the feature to be supported in your region.
- 2. Select **Request to Scale Up or Scale Down Dedicated Cluster** if you want to scale up or scale down your dedicated cluster.
+ 1. Select **Dedicated Cluster SKU requests** to request that the feature be supported in your region.
+ 2. Select **Scale up or down a dedicated Cluster** if you want to scale up or scale down your dedicated cluster.
7. For **Subject**, describe the issue. ![Support ticket page](./media/event-hubs-dedicated-cluster-create-portal/support-ticket.png)
event-hubs Event Hubs Dedicated Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dedicated-overview.md
Title: Overview of Azure Event Hubs dedicated tier description: This article provides an overview of dedicated Azure Event Hubs, which offers single-tenant deployments of event hubs. Previously updated : 01/26/2022 Last updated : 06/29/2022
-# Overview of Event Hubs Dedicated
+# Overview of Azure Event Hubs dedicated tier
-*Event Hubs clusters* offer single-tenant deployments for customers with the most demanding streaming needs. This single-tenant offering has a guaranteed 99.99% SLA and is available only on our Dedicated pricing tier. An Event Hubs cluster can ingress millions of events per second with guaranteed capacity and subsecond latency. Namespaces and event hubs created within the Dedicated cluster include all features of the premium offering and more, but without any ingress limits. It also includes the popular [Event Hubs Capture](event-hubs-capture-overview.md) feature at no additional cost. This feature allows you to automatically batch and log data streams to Azure Storage or Azure Data Lake.
+*Event Hubs clusters* offer **single-tenant** deployments for customers with the most demanding streaming needs. This single-tenant offering has a guaranteed 99.99% SLA and is available only on our dedicated pricing tier. An Event Hubs cluster can ingress millions of events per second with guaranteed capacity and subsecond latency. Namespaces and event hubs created within the dedicated cluster include all features of the premium offering and more, but without any ingress limits. It also includes the popular [Event Hubs Capture](event-hubs-capture-overview.md) feature at no additional cost. The Event Hubs Capture feature allows you to automatically batch and log data streams to Azure Storage or Azure Data Lake Storage.
-Clusters are provisioned and billed by **Capacity Units (CUs)**, a pre-allocated amount of CPU and memory resources. You can purchase 1, 2, 4, 8, 12, 16 or 20 CUs for each cluster. How much you can ingest and stream per CU depends on a variety of factors, such as the following ones:
+Clusters are provisioned and billed by **capacity units (CUs)**, a pre-allocated amount of CPU and memory resources. You can purchase 1, 2, 4, 8, 12, 16 or 20 CUs for each cluster. How much you can ingest and stream per CU depends on various factors, such as the following ones:
- Number of producers and consumers - Payload shape - Egress rate > [!NOTE]
-> All Event Hubs clusters are Kafka-enabled by default and support Kafka endpoints that can be used by your existing Kafka based applications. Having Kafka enabled on your cluster does not affect your non-Kafka use cases; there is no option or need to disable Kafka on a cluster.
+> All Event Hubs clusters are Kafka-enabled by default and support Kafka endpoints that can be used by your existing Kafka based applications. Having Kafka enabled on your cluster does not affect your non-Kafka use cases. There is no option or need to disable Kafka on a cluster.
-## Why Dedicated?
-
-Dedicated Event Hubs offers three compelling benefits for customers who need enterprise-level capacity:
-
-#### Single-tenancy guarantees capacity for better performance
+## Why dedicated tier?
+The dedicated tier of Event Hubs offers three compelling benefits for customers who need enterprise-level capacity:
+### Single-tenancy guarantees capacity for better performance
A dedicated cluster guarantees capacity at full scale. It can ingress up to gigabytes of streaming data with fully durable storage and subsecond latency to accommodate any burst in traffic.
-#### Inclusive and exclusive access to features
+### Inclusive and exclusive access to features
+The dedicated offering includes features like [Event Hubs Capture](event-hubs-capture-overview.md) at no additional cost and exclusive access to features like [Bring Your Own Key (BYOK)](configure-customer-managed-key.md). The service also manages load balancing, operating system updates, security patches, and partitioning. So, you can spend less time on infrastructure maintenance and more time on building client-side features.
-The Dedicated offering includes features like Capture at no additional cost and exclusive access to features like Bring Your Own Key (BYOK). The service also manages load balancing, OS updates, security patches, and partitioning. So, you can spend less time on infrastructure maintenance and more time on building client-side features.
+### Self-serve scaling capabilities
+The dedicated tier offers self-serve scaling capabilities that allow you to adjust the capacity of the cluster according to dynamic loads and to facilitate business operations. You can scale out during spikes in usage and scale in when the usage is low. To learn how to scale your dedicated cluster, see [Scale Event Hubs dedicated clusters](event-hubs-dedicated-cluster-create-portal.md).
-## Event Hubs Dedicated quotas and limits
-The Event Hubs Dedicated offering is billed at a fixed monthly price, with a minimum of 4 hours of usage. The dedicated tier offers all the features of the premium plan, but with enterprise-scale capacity and limits for customers with demanding workloads.
+## Quotas and limits
+The Event Hubs dedicated offering is billed at a fixed monthly price, with a **minimum of 4 hours of usage**. The dedicated tier offers all the features of the premium plan, but with enterprise-scale capacity and limits for customers with demanding workloads.
-For more quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md)
+For more information about quotas and limits, see [Event Hubs quotas and limits](event-hubs-quotas.md).
## How to onboard- Event Hubs dedicated tier is generally available (GA). The self-serve experience to create an Event Hubs cluster through the [Azure portal](event-hubs-dedicated-cluster-create-portal.md) is currently in Preview. You can also request for the cluster to be created by contacting the [Event Hubs team](mailto:askeventhubs@microsoft.com). ## FAQs
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
Run [az appservice plan create](/cli/azure/appservice/plan#az-appservice-plan-cr
az appservice plan create \ --name myAppServicePlanCentralUS \ --resource-group myRGFD-
+```
+```azurecli-interactive
az appservice plan create \ --name myAppServicePlanEastUS \ --resource-group myRGFD
az webapp create \
--name WebAppContoso-01 \ --resource-group myRGFD \ --plan myAppServicePlanCentralUS-
+```
+```azurecli-interactive
az webapp create \ --name WebAppContoso-02 \ --resource-group myRGFD \
az afd endpoint create \
--enabled-state Enabled ```
-## Create an origin group
+### Create an origin group
Run [az afd origin-group create](/cli/azure/afd/origin-group#az-afd-origin-group-create) to create an origin group that contains your two web apps.
az afd route create \
--supported-protocols Http Https \ --link-to-default-domain Enabled ```
+Your Front Door profile is fully functional once you complete this last step.
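To confirm the configuration, you can retrieve the endpoint hostname from the command line. The sketch below assumes the profile is named `contosoafd` and the endpoint `contosofrontend`, matching the naming used in the related Front Door quickstarts; adjust the names to whatever you used when creating these resources.

```azurecli-interactive
# Retrieve the endpoint hostname. The profile and endpoint names are assumptions.
az afd endpoint show \
    --resource-group myRGFD \
    --profile-name contosoafd \
    --endpoint-name contosofrontend \
    --query hostName \
    --output tsv
```

You can then browse to the returned hostname over HTTPS to check that the route responds.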
## Create a new security policy
frontdoor Create Front Door Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-powershell.md
+
+ Title: 'Create an Azure Front Door Standard/Premium with Azure PowerShell'
+description: Learn how to create an Azure Front Door Standard/Premium with Azure PowerShell. Use Azure Front Door to deliver content to your global user base and protect your web apps against vulnerabilities.
+
+documentationcenter: na
+++ Last updated : 06/28/2022+++
+ na
++++
+# Quickstart: Create an Azure Front Door Standard/Premium - Azure PowerShell
+
+In this quickstart, you'll learn how to create an Azure Front Door Standard/Premium profile using Azure PowerShell. You'll create this profile using two Web Apps as your origin. You can then verify connectivity to your Web Apps using the Azure Front Door endpoint hostname.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Azure PowerShell installed locally or Azure Cloud Shell
+++
+## Create resource group
+
+In Azure, you allocate related resources to a resource group. You can either use an existing resource group or create a new one.
+
+Create a resource group with [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup):
+
+```azurepowershell-interactive
+New-AzResourceGroup -Name myRGFD -Location centralus
+```
+
+## Create two instances of a web app
+
+This quickstart requires two instances of a web application that run in different Azure regions. Both the web application instances run in Active/Active mode, so either one can take traffic. This configuration differs from an Active/Stand-By configuration, where one acts as a failover.
+
+If you don't already have a web app, use [New-AzWebApp](/powershell/module/az.websites/new-azwebapp) to set up two example web apps.
+
+```azurepowershell-interactive
+# Create first web app in Central US region.
+
+$webapp1 = New-AzWebApp `
+ -Name "WebAppContoso-01" `
+ -Location centralus `
+ -ResourceGroupName myRGFD `
+ -AppServicePlan myAppServicePlanCentralUS
+
+# Create second web app in East US region.
+
+$webapp2 = New-AzWebApp `
+ -Name "WebAppContoso-02" `
+ -Location EastUS `
+ -ResourceGroupName myRGFD `
+ -AppServicePlan myAppServicePlanEastUS
+```
+
+## Create a Front Door
+
+This section details how you can create and configure the components of a Front Door.
+
+### Create a Front Door profile
+
+Run [New-AzFrontDoorCdnProfile](/powershell/module/az.cdn/new-azfrontdoorcdnprofile) to create an Azure Front Door profile.
+
+> [!NOTE]
+> If you want to deploy Azure Front Door Standard instead of Premium, substitute the value of the sku parameter with `Standard_AzureFrontDoor`. You won't be able to deploy managed rules with a WAF policy if you choose the Standard SKU. For a detailed comparison, see [Azure Front Door tier comparison](standard-premium/tier-comparison.md).
+
+```azurepowershell-interactive
+#Create the profile
+
+$fdprofile = New-AzFrontDoorCdnProfile `
+ -ResourceGroupName myRGFD `
+ -Name contosoAFD `
+ -SkuName Premium_AzureFrontDoor `
+ -Location Global
+```
+### Add an endpoint
+
+Run [New-AzFrontDoorCdnEndpoint](/powershell/module/az.cdn/new-azfrontdoorcdnendpoint) to create an endpoint in your profile. You can create more endpoints in your profile after you complete the initial setup.
+
+```azurepowershell-interactive
+#Create the endpoint
+
+$FDendpoint = New-AzFrontDoorCdnEndpoint `
+ -EndpointName contosofrontend `
+ -ProfileName contosoAFD `
+ -ResourceGroupName myRGFD `
+ -Location Global
+```
+
+### Create an origin group
+
+Use [New-AzFrontDoorCdnOriginGroupHealthProbeSettingObject](/powershell/module/az.cdn/new-azfrontdoorcdnorigingrouphealthprobesettingobject) and [New-AzFrontDoorCdnOriginGroupLoadBalancingSettingObject](/powershell/module/az.cdn/new-azfrontdoorcdnorigingrouploadbalancingsettingobject) to create in-memory objects for storing health probe and load balancing settings.
+
+Run [New-AzFrontDoorCdnOriginGroup](/powershell/module/az.cdn/new-azfrontdoorcdnorigingroup) to create an origin group that will contain your two web apps.
+
+```azurepowershell-interactive
+# Create health probe settings
+
+$HealthProbeSetting = New-AzFrontDoorCdnOriginGroupHealthProbeSettingObject `
+ -ProbeIntervalInSecond 60 `
+ -ProbePath "/" `
+ -ProbeRequestType GET `
+ -ProbeProtocol Http
+
+# Create load balancing settings
+
+$LoadBalancingSetting = New-AzFrontDoorCdnOriginGroupLoadBalancingSettingObject `
+ -AdditionalLatencyInMillisecond 50 `
+ -SampleSize 4 `
+ -SuccessfulSamplesRequired 3
+
+# Create origin group
+
+$originpool = New-AzFrontDoorCdnOriginGroup `
+ -OriginGroupName og `
+ -ProfileName contosoAFD `
+ -ResourceGroupName myRGFD `
+ -HealthProbeSetting $HealthProbeSetting `
+ -LoadBalancingSetting $LoadBalancingSetting
+```
+
+### Add an origin to the group
+
+Run [New-AzFrontDoorCdnOrigin](/powershell/module/az.cdn/new-azfrontdoorcdnorigin) to add your Web App origins to your origin group.
+
+```azurepowershell-interactive
+# Add first web app origin to origin group.
+
+$origin1 = New-AzFrontDoorCdnOrigin `
+ -OriginGroupName og `
+ -OriginName contoso1 `
+ -ProfileName contosoAFD `
+ -ResourceGroupName myRGFD `
+ -HostName webappcontoso-01.azurewebsites.net `
+ -OriginHostHeader webappcontoso-01.azurewebsites.net `
+ -HttpPort 80 `
+ -HttpsPort 443 `
+ -Priority 1 `
+ -Weight 1000
+
+# Add second web app origin to origin group.
+
+$origin2 = New-AzFrontDoorCdnOrigin `
+ -OriginGroupName og `
+ -OriginName contoso2 `
+ -ProfileName contosoAFD `
+ -ResourceGroupName myRGFD `
+ -HostName webappcontoso-02.azurewebsites.net `
+ -OriginHostHeader webappcontoso-02.azurewebsites.net `
+ -HttpPort 80 `
+ -HttpsPort 443 `
+ -Priority 1 `
+ -Weight 1000
+```
+### Add a route
+
+Run [New-AzFrontDoorCdnRoute](/powershell/module/az.cdn/new-azfrontdoorcdnroute) to map your endpoint to the origin group. This route forwards requests from the endpoint to your origin group.
++
+```azurepowershell-interactive
+# Create a route to map the endpoint to the origin group
+
+$Route = New-AzFrontDoorCdnRoute `
+ -EndpointName contosofrontend `
+ -Name defaultroute `
+ -ProfileName contosoAFD `
+ -ResourceGroupName myRGFD `
+ -ForwardingProtocol MatchRequest `
+ -HttpsRedirect Enabled `
+ -LinkToDefaultDomain Enabled `
+ -OriginGroupId $originpool.Id `
+ -SupportedProtocol Http,Https
+```
+Your Front Door profile is fully functional after you complete this last step.
+
+## Test the Front Door
+When you create the Azure Front Door Standard/Premium profile, it takes a few minutes for the configuration to be deployed globally. Once completed, you can access the frontend host you created.
+
+Run [Get-AzFrontDoorCdnEndpoint](/powershell/module/az.cdn/get-azfrontdoorcdnendpoint) to get the hostname of the Front Door endpoint.
+
+```azurepowershell-interactive
+$fd = Get-AzFrontDoorCdnEndpoint `
+    -EndpointName contosofrontend `
+ -ProfileName contosoafd `
+ -ResourceGroupName myRGFD
+
+$fd.hostname
+```
+In a browser, go to the endpoint hostname: `contosofrontend-<hash>.z01.azurefd.net`. Your request will automatically get routed to the web app with the lowest latency in the origin group.
++
+To test instant global failover, we'll use the following steps:
+
+1. Open a browser, as described above, and go to the endpoint hostname: `contosofrontend-<hash>.z01.azurefd.net`.
+
+1. Stop one of the Web Apps by running [Stop-AzWebApp](/powershell/module/az.websites/stop-azwebapp)
+
+ ```azurepowershell-interactive
+ Stop-AzWebApp -ResourceGroupName myRGFD -Name "WebAppContoso-01"
+ ```
+
+1. Refresh your browser. You should see the same information page.
+
+ > [!TIP]
+ > There is a little bit of delay for these actions. You might need to refresh again.
+
+1. Find the other web app, and stop it as well.
+
+ ```azurepowershell-interactive
+ Stop-AzWebApp -ResourceGroupName myRGFD -Name "WebAppContoso-02"
+ ```
+
+1. Refresh your browser. This time, you should see an error message.
+
+ :::image type="content" source="./media/create-front-door-portal/web-app-stopped-message.png" alt-text="Screenshot of the message: Both instances of the web app stopped.":::
++
+1. Restart one of the Web Apps by running [Start-AzWebApp](/powershell/module/az.websites/start-azwebapp). Refresh your browser and the page will go back to normal.
+
+ ```azurepowershell-interactive
+ Start-AzWebApp -ResourceGroupName myRGFD -Name "WebAppContoso-01"
+ ```
+
+## Clean up resources
+
+When you no longer need the resources that you created with the Front Door, delete the resource group. When you delete the resource group, you also delete the Front Door and all its related resources.
+
+To delete the resource group, run [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup):
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name myRGFD
+```
+
+## Next steps
+
+To learn how to add a custom domain to your Front Door, continue to the Front Door tutorials.
+
+> [!div class="nextstepaction"]
+> [Add a custom domain](front-door-custom-domain.md)
iot-dps Libraries Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/libraries-sdks.md
Title: IoT Hub Device Provisioning Service libraries and SDKs
description: Information about the device and service libraries available for developing solutions with Device Provisioning Service (DPS). Previously updated : 01/26/2022 Last updated : 06/30/2022
# Microsoft SDKs for IoT Hub Device Provisioning Service
-The Device Provisioning Service (DPS) libraries and SDKs help developers build IoT solutions using various programming languages on multiple platforms. The following tables include links to samples and quickstarts to help you get started.
+Azure IoT Hub Device Provisioning Service (DPS) SDKs help you build backend and device applications that leverage DPS to provide zero-touch, just-in-time provisioning to one or more IoT hubs. The SDKs are published in a variety of popular languages and handle the underlying transport and security protocols between your devices or backend apps and DPS, freeing developers to focus on application development. Additionally, using the SDKs provides you with support for future updates to DPS, including security updates.
+
+There are three categories of software development kits (SDKs) for working with DPS:
+
+- [DPS service SDKs](#service-sdks) provide data plane operations for backend apps. You can use the service SDKs to create and manage individual enrollments and enrollment groups, and to query and manage device registration records.
+
+- [DPS management SDKs](#management-sdks) provide control plane operations for backend apps. You can use the management SDKs to create and manage DPS instances and metadata. For example, to create and manage DPS instances in your subscription, to upload and verify certificates with a DPS instance, or to create and manage authorization policies or allocation policies in a DPS instance.
+
+- [DPS device SDKs](#device-sdks) provide data plane operations for devices. You use the device SDK to provision a device through DPS.
+
+Azure IoT SDKs are also available for the following:
+
+- [IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md): To help you build devices and backend apps that communicate with Azure IoT Hub.
+
+- [Device Update for IoT Hub SDKs](../iot-hub-device-update/understand-device-update.md): To help you deploy over-the-air (OTA) updates for IoT devices.
+
+- [IoT Plug and Play SDKs](../iot-develop/libraries-sdks.md): To help you build IoT Plug and Play solutions.
## Device SDKs
+The DPS device SDKs provide code that runs on your IoT devices and simplifies provisioning with DPS.
+ | Platform | Package | Code repository | Samples | Quickstart | Reference | | --|--|--|--|--|--| | .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Client/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/device)|[Quickstart](./quick-create-simulated-device-x509.md?pivots=programming-language-csharp&tabs=windows)| [Reference](/dotnet/api/microsoft.azure.devices.provisioning.client) |
Microsoft also provides embedded device SDKs to facilitate development on resour
## Service SDKs
+The DPS service SDKs help you build backend applications to manage enrollments and registration records in DPS instances.
+ | Platform | Package | Code repository | Samples | Quickstart | Reference | | --|--|--|--|--|--| | .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/) |[GitHub](https://github.com/Azure/azure-iot-sdk-csharp/)|[Samples](https://github.com/Azure-Samples/azure-iot-samples-csharp/tree/main/provisioning/Samples/service)|[Quickstart](./quick-enroll-device-tpm.md?pivots=programming-language-csharp&tabs=symmetrickey)|[Reference](/dotnet/api/microsoft.azure.devices.provisioning.service) |
Microsoft also provides embedded device SDKs to facilitate development on resour
## Management SDKs
+The DPS management SDKs help you build backend applications that manage the DPS instances and their metadata in your Azure subscription.
+ | Platform | Package | Code repository | Reference | | --|--|--|--|
-| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.DeviceProvisioningServices) |[GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/deviceprovisioningservices/Microsoft.Azure.Management.DeviceProvisioningServices)| -- |
+| .NET|[NuGet](https://www.nuget.org/packages/Microsoft.Azure.Management.DeviceProvisioningServices) |[GitHub](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/deviceprovisioningservices/Microsoft.Azure.Management.DeviceProvisioningServices)| [Reference](/dotnet/api/overview/azure/deviceprovisioningservice/management) |
+| Java|[Maven](https://mvnrepository.com/artifact/com.azure.resourcemanager/azure-resourcemanager-deviceprovisioningservices) |[GitHub](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/deviceprovisioningservices/azure-resourcemanager-deviceprovisioningservices)| [Reference](/java/api/com.azure.resourcemanager.deviceprovisioningservices) |
| Node.js|[npm](https://www.npmjs.com/package/@azure/arm-deviceprovisioningservices)|[GitHub](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/deviceprovisioningservices/arm-deviceprovisioningservices)|[Reference](/javascript/api/@azure/arm-deviceprovisioningservices) | | Python|[pip](https://pypi.org/project/azure-mgmt-iothubprovisioningservices/) |[GitHub](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/iothub/azure-mgmt-iothubprovisioningservices)|[Reference](/python/api/azure-mgmt-iothubprovisioningservices) | ## Next steps
-The Device Provisioning Service documentation also provides [tutorials](how-to-legacy-device-symm-key.md) and [additional samples](quick-create-simulated-device-tpm.md) that you can use to try out the SDKs and libraries.
+The Device Provisioning Service documentation provides [tutorials](how-to-legacy-device-symm-key.md) and [additional samples](quick-create-simulated-device-tpm.md) that you can use to try out the SDKs and libraries.
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
Title: Tutorial - Provision X.509 devices to Azure IoT Hub using a custom Hardwa
description: This tutorial uses enrollment groups. In this tutorial, you learn how to provision X.509 devices using a custom Hardware Security Module (HSM) and the C device SDK for Azure IoT Hub Device Provisioning Service (DPS). Previously updated : 05/24/2021 Last updated : 06/20/2022
# Tutorial: Provision multiple X.509 devices using enrollment groups
-In this tutorial, you will learn how to provision groups of IoT devices that use X.509 certificates for authentication. Sample device code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) will be executed on your development machine to simulate provisioning of X.509 devices. On real devices, device code would be deployed and run from the IoT device.
-
-Make sure you've at least completed the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](quick-setup-auto-provision.md) before continuing with this tutorial. Also, if you're unfamiliar with the process of autoprovisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
+In this tutorial, you'll learn how to provision groups of IoT devices that use X.509 certificates for authentication. Sample device code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) will be executed on your development machine to simulate provisioning of X.509 devices. On real devices, device code would be deployed and run from the IoT device.
The Azure IoT Device Provisioning Service supports two types of enrollments for provisioning devices: * [Enrollment groups](concepts-service.md#enrollment-group): Used to enroll multiple related devices. * [Individual Enrollments](concepts-service.md#individual-enrollment): Used to enroll a single device.
-This tutorial is similar to the previous tutorials demonstrating how to use enrollment groups to provision sets of devices. However, X.509 certificates will be used in this tutorial instead of symmetric keys. Review the previous tutorials in this section for a simple approach using [symmetric keys](./concepts-symmetric-key-attestation.md).
-
-This tutorial will demonstrate the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example) that provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets is not required, but strongly recommended to help protect sensitive information like your device certificate's private key.
+You'll use an enrollment group to provision a set of devices that authenticate using X.509 certificates. To learn how to provision a set of devices using [symmetric keys](./concepts-symmetric-key-attestation.md), see [How to provision devices using symmetric key enrollment groups](how-to-legacy-device-symm-key.md). If you're unfamiliar with the process of autoprovisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
+This tutorial uses the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example), which provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but strongly recommended to help protect sensitive information like your device certificate's private key.
-In this tutorial you will complete the following objectives:
+In this tutorial you'll complete the following objectives:
> [!div class="checklist"]
+>
> * Create a certificate chain of trust to organize a set of devices using X.509 certificates. > * Complete proof of possession with a signing certificate used with the certificate chain. > * Create a new group enrollment that uses the certificate chain > * Set up the development environment for provisioning a device using code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) > * Provision a device using the certificate chain with the custom Hardware Security Module (HSM) sample in the SDK. - ## Prerequisites
+* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio) before you begin.
+
+* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md).
+ The following prerequisites are for a Windows development environment used to simulate the devices. For Linux or macOS, see the appropriate section in [Prepare your development environment](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md) in the SDK documentation.
-* [Visual Studio](https://visualstudio.microsoft.com/vs/) 2019 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015 and Visual Studio 2017 are also supported.
+* Install [Visual Studio](https://visualstudio.microsoft.com/vs/) 2022 with the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload enabled. Visual Studio 2015, Visual Studio 2017, and Visual Studio 2019 are also supported.
+
+* Install the latest [CMake build system](https://cmake.org/download/). Make sure you check the option that adds the CMake executable to your path.
+
+ >[!IMPORTANT]
+ >Confirm that the Visual Studio prerequisites (Visual Studio and the 'Desktop development with C++' workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system. Also, be aware that older versions of the CMake build system fail to generate the solution file used in this article. Make sure to use the latest version of CMake.
+
+* Install the latest version of [Git](https://git-scm.com/download/). Make sure that Git is added to the environment variables accessible to the command window. See [Software Freedom Conservancy's Git client tools](https://git-scm.com/download/) for the latest version of `git` tools to install, which includes *Git Bash*, the command-line app that you can use to interact with your local Git repository.
+
+* Make sure [OpenSSL](https://www.openssl.org/) is installed on your machine. On Windows, your installation of Git includes an installation of OpenSSL. You can access OpenSSL from the Git Bash prompt. To verify that OpenSSL is installed, open a Git Bash prompt and enter `openssl version`.
- Visual Studio is used in this article to build the device sample code that would be deployed to IoT devices. This does not imply that Visual Studio is required on the device itself.
+ >[!NOTE]
+ > Unless you're familiar with OpenSSL and already have it installed on your Windows machine, we recommend using OpenSSL from the Git Bash prompt. Alternatively, you can choose to download the source code and build OpenSSL. To learn more, see the [OpenSSL Downloads](https://www.openssl.org/source/) page. Or, you can download OpenSSL pre-built from a third-party. To learn more, see the [OpenSSL wiki](https://wiki.openssl.org/index.php/Binaries). Microsoft makes no guarantees about the validity of packages downloaded from third-parties. If you do choose to build or download OpenSSL make sure that the OpenSSL binary is accessible in your path and that the `OPENSSL_CNF` environment variable is set to the path of your *openssl.cnf* file.
-* Latest version of [Git](https://git-scm.com/download/) installed.
+* Open both a Windows command prompt and a Git Bash prompt.
+
+    The steps in this tutorial assume that you're using a Windows machine and the OpenSSL installation that's included with Git. You'll use the Git Bash prompt to issue OpenSSL commands and the Windows command prompt for everything else. If you're using Linux, you can issue all commands from a Bash shell.
## Prepare the Azure IoT C SDK development environment
-In this section, you will prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used by X.509 devices provisioning with DPS.
+In this section, you'll prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used by X.509 devices provisioning with DPS.
-1. Download the [CMake build system](https://cmake.org/download/).
+1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
- It is important that the Visual Studio prerequisites ([Visual Studio](https://visualstudio.microsoft.com/vs/) and the ['Desktop development with C++'](/cpp/ide/using-the-visual-studio-ide-for-cpp-desktop-development) workload) are installed on your machine, **before** starting the `CMake` installation. Once the prerequisites are in place, and the download is verified, install the CMake build system.
+2. Select the **Tags** tab at the top of the page.
-2. Find the tag name for the [latest release](https://github.com/Azure/azure-iot-sdk-c/releases/latest) of the Azure IoT C SDK.
+3. Copy the tag name for the latest release of the Azure IoT C SDK.
-3. Open a command prompt or Git Bash shell. Run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Use the tag you found in the previous step as the value for the `-b` parameter:
+4. In your Windows command prompt, run the following commands to clone the latest release of the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) GitHub repository. Replace `<release-tag>` with the tag you copied in the previous step.
- ```cmd/sh
+ ```cmd
git clone -b <release-tag> https://github.com/Azure/azure-iot-sdk-c.git cd azure-iot-sdk-c git submodule update --init ```
- You should expect this operation to take several minutes to complete.
+ This operation could take several minutes to complete.
-4. Create a `cmake` subdirectory in the root directory of the git repository, and navigate to that folder.
+5. When the operation is complete, run the following commands from the `azure-iot-sdk-c` directory:
- ```cmd/sh
+ ```cmd
mkdir cmake cd cmake ```
-5. The `cmake` directory you created will contain the custom HSM sample, and the sample device provisioning code that uses the custom HSM to provide X.509 authentication.
+6. The code sample uses an X.509 certificate to provide attestation via X.509 authentication. Run the following command to build a version of the SDK specific to your development platform that includes the device provisioning client. A Visual Studio solution for the simulated device is generated in the `cmake` directory.
- Run the following command in your `cmake` directory to build a version of the SDK specific to your development platform. The build will include a reference to the custom HSM sample.
-
- When specifying the path used with `-Dhsm_custom_lib` below, make sure to use the path relative to the `cmake` directory you previously created. The relative path shown below is only an example.
+ When specifying the path used with `-Dhsm_custom_lib` in the command below, make sure to use the absolute path to the library in the `cmake` directory you previously created. The path shown below assumes that you cloned the C SDK in the root directory of the C drive. If you used another directory, adjust the path accordingly.
```cmd
- $ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=/d/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
```
- If `cmake` does not find your C++ compiler, you might get build errors while running the above command. If that happens, try running this command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
-
- Once the build succeeds, a Visual Studio solution will be generated in your `cmake` directory. The last few output lines look similar to the following output:
+ >[!TIP]
+ >If `cmake` doesn't find your C++ compiler, you may get build errors while running the above command. If that happens, try running the command in the [Visual Studio command prompt](/dotnet/framework/tools/developer-command-prompt-for-vs).
- ```cmd/sh
- $ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=/d/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
- -- Building for: Visual Studio 16 2019
- -- The C compiler identification is MSVC 19.23.28107.0
- -- The CXX compiler identification is MSVC 19.23.28107.0
+7. When the build succeeds, the last few output lines look similar to the following output:
+ ```output
+ cmake -Duse_prov_client:BOOL=ON -Dhsm_custom_lib=c:/azure-iot-sdk-c/cmake/provisioning_client/samples/custom_hsm_example/Debug/custom_hsm_example.lib ..
+ -- Building for: Visual Studio 17 2022
+ -- Selecting Windows SDK version 10.0.19041.0 to target Windows 10.0.22000.
+ -- The C compiler identification is MSVC 19.32.31329.0
+ -- The CXX compiler identification is MSVC 19.32.31329.0
+
... -- Configuring done -- Generating done
- -- Build files have been written to: D:/azure-iot-sdk-c/cmake
+ -- Build files have been written to: C:/azure-iot-sdk-c/cmake
``` ## Create an X.509 certificate chain In this section, you'll generate an X.509 certificate chain of three certificates for testing each device with this tutorial. The certificates will have the following hierarchy.
-![Tutorial device certificate chain](./media/tutorial-custom-hsm-enrollment-group-x509/example-device-cert-chain.png#lightbox)
-[Root certificate](concepts-x509-attestation.md#root-certificate): You will complete [proof of possession](how-to-verify-certificates.md) to verify the root certificate. This verification will enable DPS to trust that certificate and verify certificates signed by it.
+[Root certificate](concepts-x509-attestation.md#root-certificate): You'll complete [proof of possession](how-to-verify-certificates.md) to verify the root certificate. This verification will enable DPS to trust that certificate and verify certificates signed by it.
[Intermediate Certificate](concepts-x509-attestation.md#intermediate-certificate): It's common for intermediate certificates to be used to group devices logically by product lines, company divisions, or other criteria. This tutorial will use a certificate chain composed of one intermediate certificate. The intermediate certificate will be signed by the root certificate. This certificate will also be used on the enrollment group created in DPS to logically group a set of devices. This configuration allows managing a whole group of devices that have device certificates signed by the same intermediate certificate. You can create enrollment groups for enabling or disabling a group of devices. For more information on disabling a group of devices, see [Disallow an X.509 intermediate or root CA certificate by using an enrollment group](how-to-revoke-device-access-portal.md#disallow-an-x509-intermediate-or-root-ca-certificate-by-using-an-enrollment-group)
-[Device certificates](concepts-x509-attestation.md#end-entity-leaf-certificate): The device (leaf) certificates will be signed by the intermediate certificate and stored on the device along with its private key. Ideally these sensitive items would be stored securely with an HSM. Each device will present its certificate and private key, along with the certificate chain when attempting provisioning.
+[Device certificates](concepts-x509-attestation.md#end-entity-leaf-certificate): The device (leaf) certificates will be signed by the intermediate certificate and stored on the device along with its private key. Ideally these sensitive items would be stored securely with an HSM. Each device will present its certificate and private key, along with the certificate chain when attempting provisioning.
+
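Once you've created all three certificates in the sections that follow, you can sanity-check the chain locally with OpenSSL before using it with DPS. This is an optional sketch: the root and intermediate file names match the paths used in this tutorial's configuration files, while the device certificate path is a placeholder for the leaf certificate you'll create later.

```bash
# Optional: verify that a device (leaf) certificate chains up to the root
# through the intermediate. The device certificate path is a placeholder.
openssl verify \
    -CAfile ./certs/azure-iot-test-only.root.ca.cert.pem \
    -untrusted ./certs/azure-iot-test-only.intermediate.cert.pem \
    ./certs/<device-cert-name>.cert.pem
```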
+### Set up the X.509 OpenSSL environment
+
+In this section, you'll create the OpenSSL configuration files, directory structure, and other files used by the OpenSSL commands in this tutorial.
+
+1. In your Git Bash command prompt, navigate to a folder where you want to generate the X.509 certificates and keys you'll use in this tutorial.
+
+1. Create an OpenSSL configuration file for your root CA certificate. OpenSSL configuration files contain policies and definitions that are consumed by OpenSSL commands. Copy and paste the following text into a file named *openssl_root_ca.cnf*:
+
+ ```text
+ # OpenSSL root CA configuration file.
+
+ [ ca ]
+ default_ca = CA_default
+
+ [ CA_default ]
+ # Directory and file locations.
+ dir = .
+ certs = $dir/certs
+ crl_dir = $dir/crl
+ new_certs_dir = $dir/newcerts
+ database = $dir/index.txt
+ serial = $dir/serial
+ RANDFILE = $dir/private/.rand
+
+ # The root key and root certificate.
+ private_key = $dir/private/azure-iot-test-only.root.ca.key.pem
+ certificate = $dir/certs/azure-iot-test-only.root.ca.cert.pem
+
+ # For certificate revocation lists.
+ crlnumber = $dir/crlnumber
+ crl = $dir/crl/azure-iot-test-only.intermediate.crl.pem
+ crl_extensions = crl_ext
+ default_crl_days = 30
+
+ # SHA-1 is deprecated, so use SHA-2 instead.
+ default_md = sha256
+
+ name_opt = ca_default
+ cert_opt = ca_default
+ default_days = 375
+ preserve = no
+ policy = policy_loose
+
+ [ policy_strict ]
+ # The root CA should only sign intermediate certificates that match.
+ countryName = optional
+ stateOrProvinceName = optional
+ organizationName = optional
+ organizationalUnitName = optional
+ commonName = supplied
+ emailAddress = optional
+
+ [ policy_loose ]
+ # Allow the intermediate CA to sign a more diverse range of certificates.
+ countryName = optional
+ stateOrProvinceName = optional
+ localityName = optional
+ organizationName = optional
+ organizationalUnitName = optional
+ commonName = supplied
+ emailAddress = optional
+
+ [ req ]
+ default_bits = 2048
+ distinguished_name = req_distinguished_name
+ string_mask = utf8only
+
+ # SHA-1 is deprecated, so use SHA-2 instead.
+ default_md = sha256
+
+ # Extension to add when the -x509 option is used.
+ x509_extensions = v3_ca
+
+ [ req_distinguished_name ]
+ # See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
+ countryName = Country Name (2 letter code)
+ stateOrProvinceName = State or Province Name
+ localityName = Locality Name
+ 0.organizationName = Organization Name
+ organizationalUnitName = Organizational Unit Name
+ commonName = Common Name
+ emailAddress = Email Address
+
+ # Optionally, specify some defaults.
+ countryName_default = US
+ stateOrProvinceName_default = WA
+ localityName_default =
+ 0.organizationName_default = My Organization
+ organizationalUnitName_default =
+ emailAddress_default =
+
+ [ v3_ca ]
+ # Extensions for a typical CA.
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid:always,issuer
+ basicConstraints = critical, CA:true
+ keyUsage = critical, digitalSignature, cRLSign, keyCertSign
+
+ [ v3_intermediate_ca ]
+ # Extensions for a typical intermediate CA.
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid:always,issuer
+ basicConstraints = critical, CA:true
+ keyUsage = critical, digitalSignature, cRLSign, keyCertSign
+
+ [ usr_cert ]
+ # Extensions for client certificates.
+ basicConstraints = CA:FALSE
+ nsComment = "OpenSSL Generated Client Certificate"
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid,issuer
+ keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
+ extendedKeyUsage = clientAuth
+
+ [ server_cert ]
+ # Extensions for server certificates.
+ basicConstraints = CA:FALSE
+ nsComment = "OpenSSL Generated Server Certificate"
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid,issuer:always
+ keyUsage = critical, digitalSignature, keyEncipherment
+ extendedKeyUsage = serverAuth
+
+ [ crl_ext ]
+ # Extension for CRLs.
+ authorityKeyIdentifier=keyid:always
+
+ [ ocsp ]
+ # Extension for OCSP signing certificates.
+ basicConstraints = CA:FALSE
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid,issuer
+ keyUsage = critical, digitalSignature
+ extendedKeyUsage = critical, OCSPSigning
+ ```
+
+1. Create an OpenSSL configuration file to use for intermediate and device certificates. Copy and paste the following text into a file named *openssl_device_intermediate_ca.cnf*:
+
+ ```text
+    # OpenSSL intermediate CA configuration file.
+
+ [ ca ]
+ default_ca = CA_default
+
+ [ CA_default ]
+ # Directory and file locations.
+ dir = .
+ certs = $dir/certs
+ crl_dir = $dir/crl
+ new_certs_dir = $dir/newcerts
+ database = $dir/index.txt
+ serial = $dir/serial
+ RANDFILE = $dir/private/.rand
+
+    # The intermediate CA key and certificate.
+ private_key = $dir/private/azure-iot-test-only.intermediate.key.pem
+ certificate = $dir/certs/azure-iot-test-only.intermediate.cert.pem
+
+ # For certificate revocation lists.
+ crlnumber = $dir/crlnumber
+ crl = $dir/crl/azure-iot-test-only.intermediate.crl.pem
+ crl_extensions = crl_ext
+ default_crl_days = 30
+
+ # SHA-1 is deprecated, so use SHA-2 instead.
+ default_md = sha256
+
+ name_opt = ca_default
+ cert_opt = ca_default
+ default_days = 375
+ preserve = no
+ policy = policy_loose
+
+ [ policy_strict ]
+ # The root CA should only sign intermediate certificates that match.
+ countryName = optional
+ stateOrProvinceName = optional
+ organizationName = optional
+ organizationalUnitName = optional
+ commonName = supplied
+ emailAddress = optional
+
+ [ policy_loose ]
+ # Allow the intermediate CA to sign a more diverse range of certificates.
+ countryName = optional
+ stateOrProvinceName = optional
+ localityName = optional
+ organizationName = optional
+ organizationalUnitName = optional
+ commonName = supplied
+ emailAddress = optional
+
+ [ req ]
+ default_bits = 2048
+ distinguished_name = req_distinguished_name
+ string_mask = utf8only
+
+ # SHA-1 is deprecated, so use SHA-2 instead.
+ default_md = sha256
+
+ # Extension to add when the -x509 option is used.
+ x509_extensions = v3_ca
+
+ [ req_distinguished_name ]
+ # See <https://en.wikipedia.org/wiki/Certificate_signing_request>.
+ countryName = Country Name (2 letter code)
+ stateOrProvinceName = State or Province Name
+ localityName = Locality Name
+ 0.organizationName = Organization Name
+ organizationalUnitName = Organizational Unit Name
+ commonName = Common Name
+ emailAddress = Email Address
+
+ # Optionally, specify some defaults.
+ countryName_default = US
+ stateOrProvinceName_default = WA
+ localityName_default =
+ 0.organizationName_default = My Organization
+ organizationalUnitName_default =
+ emailAddress_default =
+
+ [ v3_ca ]
+ # Extensions for a typical CA.
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid:always,issuer
+ basicConstraints = critical, CA:true
+ keyUsage = critical, digitalSignature, cRLSign, keyCertSign
+
+ [ v3_intermediate_ca ]
+ # Extensions for a typical intermediate CA.
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid:always,issuer
+ basicConstraints = critical, CA:true
+ keyUsage = critical, digitalSignature, cRLSign, keyCertSign
+
+ [ usr_cert ]
+ # Extensions for client certificates.
+ basicConstraints = CA:FALSE
+ nsComment = "OpenSSL Generated Client Certificate"
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid,issuer
+ keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment
+ extendedKeyUsage = clientAuth
+
+ [ server_cert ]
+ # Extensions for server certificates.
+ basicConstraints = CA:FALSE
+ nsComment = "OpenSSL Generated Server Certificate"
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid,issuer:always
+ keyUsage = critical, digitalSignature, keyEncipherment
+ extendedKeyUsage = serverAuth
+
+ [ crl_ext ]
+ # Extension for CRLs.
+ authorityKeyIdentifier=keyid:always
+
+ [ ocsp ]
+ # Extension for OCSP signing certificates.
+ basicConstraints = CA:FALSE
+ subjectKeyIdentifier = hash
+ authorityKeyIdentifier = keyid,issuer
+ keyUsage = critical, digitalSignature
+ extendedKeyUsage = critical, OCSPSigning
+ ```
+
+1. Create the directory structure, the database file (index.txt), and the serial number file (serial) used by OpenSSL commands in this article:
+
+ ```bash
+ mkdir certs csr newcerts private
+ touch index.txt
+ openssl rand -hex 16 > serial
+ ```
+
+### Create the root CA certificate
-#### Create root and intermediate certificates
+Run the following commands to create the root CA private key and the root CA certificate. You'll use this certificate and key to sign your intermediate certificate.
-To create the root and intermediate portions of the certificate chain:
+1. Create the root CA private key:
-> [!IMPORTANT]
-> Only use the Bash shell approach with this article. Using PowerShell is possible but, it is not covered in this article.
+ ```bash
+ openssl genrsa -aes256 -passout pass:1234 -out ./private/azure-iot-test-only.root.ca.key.pem 4096
+ ```
+1. Create the root CA certificate:
-1. Open a Git Bash command prompt. Complete steps 1 and 2 using the Bash shell instructions that are located in [Managing test CA certificates for samples and tutorials](https://github.com/Azure/azure-iot-sdk-c/blob/master/tools/CACertificates/CACertificateOverview.md#managing-test-ca-certificates-for-samples-and-tutorials).
+ # [Windows](#tab/windows)
- This creates a working directory for the certificate scripts, and generates the example root and intermediate certificate for the certificate chain using openssl.
-
-2. Notice in the output showing the location of the self-signed root certificate. This certificate will go through [proof of possession](how-to-verify-certificates.md) to verify ownership later.
+ ```bash
+ openssl req -new -x509 -config ./openssl_root_ca.cnf -passin pass:1234 -key ./private/azure-iot-test-only.root.ca.key.pem -subj '//CN=Azure IoT Hub CA Cert Test Only' -days 30 -sha256 -extensions v3_ca -out ./certs/azure-iot-test-only.root.ca.cert.pem
+ ```
+
+ > [!IMPORTANT]
+ > The extra forward slash given for the subject name (`//CN=Azure IoT Hub CA Cert Test Only`) is only required to escape the string with Git on Windows platforms.
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl req -new -x509 -config ./openssl_root_ca.cnf -passin pass:1234 -key ./private/azure-iot-test-only.root.ca.key.pem -subj '/CN=Azure IoT Hub CA Cert Test Only' -days 30 -sha256 -extensions v3_ca -out ./certs/azure-iot-test-only.root.ca.cert.pem
+ ```
+
+
+
+1. Examine the root CA certificate:
+
+ ```bash
+ openssl x509 -noout -text -in ./certs/azure-iot-test-only.root.ca.cert.pem
+ ```
+
+ Observe that the **Issuer** and the **Subject** are both the root CA.
```output
- Creating the Root CA Certificate
- CA Root Certificate Generated At:
-
- ./certs/azure-iot-test-only.root.ca.cert.pem
-
Certificate: Data: Version: 3 (0x2) Serial Number:
- fc:cc:6b:ab:3b:9a:3e:fe
- Signature Algorithm: sha256WithRSAEncryption
- Issuer: CN=Azure IoT Hub CA Cert Test Only
+ 1d:93:13:0e:54:07:95:1d:8c:57:4f:12:14:b9:5e:5f:15:c3:a9:d4
+ Signature Algorithm: sha256WithRSAEncryption
+ Issuer: CN = Azure IoT Hub CA Cert Test Only
Validity
- Not Before: Oct 23 21:30:30 2020 GMT
- Not After : Nov 22 21:30:30 2020 GMT
- Subject: CN=Azure IoT Hub CA Cert Test Only
- ```
-
-3. Notice in the output showing the location of the intermediate certificate that is signed/issued by the root certificate. This certificate will be used with the enrollment group you will create later.
+ Not Before: Jun 20 22:52:23 2022 GMT
+ Not After : Jul 20 22:52:23 2022 GMT
+ Subject: CN = Azure IoT Hub CA Cert Test Only
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ RSA Public-Key: (4096 bit)
+ ```
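+
+    Optionally, you can also confirm that the root CA certificate is self-signed. This is a quick sanity check, not a required step:
+
+    ```bash
+    # Prints "OK" because a self-signed CA certificate validates against itself.
+    openssl verify -CAfile ./certs/azure-iot-test-only.root.ca.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem
+    ```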
+
+### Create the intermediate CA certificate
+
+Run the following commands to create the intermediate CA private key and the intermediate CA certificate. You'll use this certificate and key to sign your device certificate(s).
+
+1. Create the intermediate CA private key:
+
+ ```bash
+ openssl genrsa -aes256 -passout pass:1234 -out ./private/azure-iot-test-only.intermediate.key.pem 4096
+ ```
+
+1. Create the intermediate CA certificate signing request (CSR):
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ openssl req -new -sha256 -passin pass:1234 -config ./openssl_device_intermediate_ca.cnf -subj '//CN=Azure IoT Hub Intermediate Cert Test Only' -key ./private/azure-iot-test-only.intermediate.key.pem -out ./csr/azure-iot-test-only.intermediate.csr.pem
+ ```
+
+ > [!IMPORTANT]
+ > The extra forward slash given for the subject name (`//CN=Azure IoT Hub Intermediate Cert Test Only`) is only required to escape the string with Git on Windows platforms.
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl req -new -sha256 -passin pass:1234 -config ./openssl_device_intermediate_ca.cnf -subj '/CN=Azure IoT Hub Intermediate Cert Test Only' -key ./private/azure-iot-test-only.intermediate.key.pem -out ./csr/azure-iot-test-only.intermediate.csr.pem
+ ```
+
+
+
+1. Sign the intermediate certificate with the root CA certificate:
+
+ ```bash
+ openssl ca -batch -config ./openssl_root_ca.cnf -passin pass:1234 -extensions v3_intermediate_ca -days 30 -notext -md sha256 -in ./csr/azure-iot-test-only.intermediate.csr.pem -out ./certs/azure-iot-test-only.intermediate.cert.pem
+ ```
+
+1. Examine the intermediate CA certificate:
+
+ ```bash
+ openssl x509 -noout -text -in ./certs/azure-iot-test-only.intermediate.cert.pem
+ ```
+
+ Observe that the **Issuer** is the root CA, and the **Subject** is the intermediate CA.
```output
- Intermediate CA Certificate Generated At:
- --
- ./certs/azure-iot-test-only.intermediate.cert.pem
-
Certificate: Data: Version: 3 (0x2)
- Serial Number: 1 (0x1)
- Signature Algorithm: sha256WithRSAEncryption
- Issuer: CN=Azure IoT Hub CA Cert Test Only
+ Serial Number:
+ d9:55:87:57:41:c8:4c:47:6c:ee:ba:83:5d:ae:db:39
+ Signature Algorithm: sha256WithRSAEncryption
+ Issuer: CN = Azure IoT Hub CA Cert Test Only
Validity
- Not Before: Oct 23 21:30:33 2020 GMT
- Not After : Nov 22 21:30:33 2020 GMT
- Subject: CN=Azure IoT Hub Intermediate Cert Test Only
- ```
-
-#### Create device certificates
+ Not Before: Jun 20 22:54:01 2022 GMT
+ Not After : Jul 20 22:54:01 2022 GMT
+ Subject: CN = Azure IoT Hub Intermediate Cert Test Only
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ RSA Public-Key: (4096 bit)
+ ```
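+
+    Optionally, you can also confirm that the intermediate CA certificate chains to the root CA. This is a quick sanity check, not a required step:
+
+    ```bash
+    # Prints "OK" if the intermediate CA certificate was signed by the root CA.
+    openssl verify -CAfile ./certs/azure-iot-test-only.root.ca.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem
+    ```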
-To create the device certificates signed by the intermediate certificate in the chain:
+### Create the device certificates
-1. Run the following command to create a new device/leaf certificate with a subject name you give as a parameter. Use the example subject name given for this tutorial, `custom-hsm-device-01`. This subject name will be the device ID for your IoT device.
+In this section, you create the device certificates and the full chain device certificates. The full chain certificate contains the device certificate, the intermediate CA certificate, and the root CA certificate. The device must present its full chain certificate when it registers with DPS.
- > [!WARNING]
- > Don't use a subject name with spaces in it. This subject name is the device ID for the IoT device being provisioned.
- > It must follow the rules for a device ID. For more information, see [Device identity properties](../iot-hub/iot-hub-devguide-identity-registry.md#device-identity-properties).
+1. Create the device private key.
- ```cmd
- ./certGen.sh create_device_certificate_from_intermediate "custom-hsm-device-01"
+ ```bash
+ openssl genrsa -out ./private/device-01.key.pem 4096
```
- Notice the following output showing where the new device certificate is located. The device certificate is signed (issued) by the intermediate certificate.
+1. Create the device certificate CSR.
+
+    The subject common name (CN) of the device certificate must be set to the [Registration ID](./concepts-service.md#registration-id) that your device will use to register with DPS. The registration ID is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. For group enrollments, the registration ID is also used as the device ID in IoT Hub. The subject common name is set in the `-subj` parameter in the following command. A quick way to check a registration ID against these rules is sketched after the following commands.
+
+ # [Windows](#tab/windows)
+
+ ```bash
+ openssl req -config ./openssl_device_intermediate_ca.cnf -key ./private/device-01.key.pem -subj '//CN=device-01' -new -sha256 -out ./csr/device-01.csr.pem
+ ```
+
+ > [!IMPORTANT]
+ > The extra forward slash given for the subject name (`//CN=device-01`) is only required to escape the string with Git on Windows platforms.
+
+ # [Linux](#tab/linux)
+
+ ```bash
+ openssl req -config ./openssl_device_intermediate_ca.cnf -key ./private/device-01.key.pem -subj '/CN=device-01' -new -sha256 -out ./csr/device-01.csr.pem
+ ```
+
+
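+
+    If you want to check a registration ID against these rules before you generate a certificate, you can use a pattern match. The following sketch derives the pattern from the rules described above; it isn't an official validation tool:
+
+    ```bash
+    registration_id="device-01"
+    # Up to 128 characters; alphanumeric plus '-', '.', '_', ':'; last character alphanumeric or dash.
+    if [[ ${#registration_id} -le 128 && "$registration_id" =~ ^[A-Za-z0-9._:-]*[A-Za-z0-9-]$ ]]; then
+      echo "valid registration ID"
+    else
+      echo "invalid registration ID"
+    fi
+    ```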
+
+1. Sign the device certificate.
+
+ ```bash
+ openssl ca -batch -config ./openssl_device_intermediate_ca.cnf -passin pass:1234 -extensions usr_cert -days 30 -notext -md sha256 -in ./csr/device-01.csr.pem -out ./certs/device-01.cert.pem
+ ```
+
+1. Examine the device certificate:
+
+ ```bash
+ openssl x509 -noout -text -in ./certs/device-01.cert.pem
+ ```
+
+ Observe that the **Issuer** is the intermediate CA, and the **Subject** is the device registration ID, `device-01`.
```output
- --
- ./certs/new-device.cert.pem: OK
- Leaf Device Certificate Generated At:
- -
- ./certs/new-device.cert.pem
-
Certificate: Data: Version: 3 (0x2)
- Serial Number: 9 (0x9)
- Signature Algorithm: sha256WithRSAEncryption
- Issuer: CN=Azure IoT Hub Intermediate Cert Test Only
+ Serial Number:
+ d9:55:87:57:41:c8:4c:47:6c:ee:ba:83:5d:ae:db:3a
+ Signature Algorithm: sha256WithRSAEncryption
+ Issuer: CN = Azure IoT Hub Intermediate Cert Test Only
Validity
- Not Before: Nov 10 09:20:33 2020 GMT
- Not After : Dec 10 09:20:33 2020 GMT
- Subject: CN=custom-hsm-device-01
- ```
-
-2. Run the following command to create a full certificate chain .pem file that includes the new device certificate for `custom-hsm-device-01`.
+ Not Before: Jun 20 22:55:39 2022 GMT
+ Not After : Jul 20 22:55:39 2022 GMT
+ Subject: CN = device-01
+ Subject Public Key Info:
+ Public Key Algorithm: rsaEncryption
+ RSA Public-Key: (4096 bit)
+ ```
- ```Bash
- cd ./certs && cat new-device.cert.pem azure-iot-test-only.intermediate.cert.pem azure-iot-test-only.root.ca.cert.pem > new-device-01-full-chain.cert.pem && cd ..
- ```
+1. Create the full chain certificate file. The device must present its full certificate chain when it authenticates with DPS.
+
+ ```bash
+ cat ./certs/device-01.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/device-01-full-chain.cert.pem
+ ```
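+
+    Optionally, you can check that the device certificate validates against the root and intermediate CA certificates. This is a quick sanity check, not a required step:
+
+    ```bash
+    # Prints "OK" if device-01 chains to the root CA through the (untrusted) intermediate CA.
+    openssl verify -CAfile ./certs/azure-iot-test-only.root.ca.cert.pem -untrusted ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/device-01.cert.pem
+    ```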
- Use a text editor and open the certificate chain file, *./certs/new-device-01-full-chain.cert.pem*. The certificate chain text contains the full chain of all three certificates. You will use this text as the certificate chain with in the custom HSM device code later in this tutorial for `custom-hsm-device-01`.
+    Use a text editor and open the certificate chain file, *./certs/device-01-full-chain.cert.pem*. The certificate chain text contains the full chain of all three certificates. You'll use this text as the certificate chain in the custom HSM device code later in this tutorial for `device-01`.
The full chain text has the following format:
-
- ```output
+
+ ```output
--BEGIN CERTIFICATE-- <Text for the device certificate includes public key> --END CERTIFICATE--
To create the device certificates signed by the intermediate certificate in the
--END CERTIFICATE-- ```
-3. Notice the private key for the new device certificate is written to *./private/new-device.key.pem*. Rename this key file *./private/new-device-01.key.pem* for the `custom-hsm-device-01` device. The text for this key will be needed by the device during provisioning. The text will be added to the custom HSM example later.
+1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your Git Bash command prompt. To create additional devices, you can modify the `registration_id` variable declared at the beginning of the script, or use a loop like the sketch that follows the note below.
```bash
- $ mv private/new-device.key.pem private/new-device-01.key.pem
+ registration_id=device-02
+ echo $registration_id
+ openssl genrsa -out ./private/${registration_id}.key.pem 4096
+ openssl req -config ./openssl_device_intermediate_ca.cnf -key ./private/${registration_id}.key.pem -subj "//CN=$registration_id" -new -sha256 -out ./csr/${registration_id}.csr.pem
+ openssl ca -batch -config ./openssl_device_intermediate_ca.cnf -passin pass:1234 -extensions usr_cert -days 30 -notext -md sha256 -in ./csr/${registration_id}.csr.pem -out ./certs/${registration_id}.cert.pem
+ cat ./certs/${registration_id}.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/${registration_id}-full-chain.cert.pem
```
+ >[!NOTE]
+ > This script uses the registration ID as the base filename for the private key and certificate files. If your registration ID contains characters that aren't valid filename characters, you'll need to modify the script accordingly.
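+
+    If you need more than two test devices, you can wrap the same commands in a loop. The following sketch assumes a Git Bash prompt on Windows (hence the `//CN=` escape) and uses hypothetical registration IDs `device-03` through `device-05`:
+
+    ```bash
+    for registration_id in device-03 device-04 device-05; do
+      openssl genrsa -out ./private/${registration_id}.key.pem 4096
+      openssl req -config ./openssl_device_intermediate_ca.cnf -key ./private/${registration_id}.key.pem -subj "//CN=$registration_id" -new -sha256 -out ./csr/${registration_id}.csr.pem
+      openssl ca -batch -config ./openssl_device_intermediate_ca.cnf -passin pass:1234 -extensions usr_cert -days 30 -notext -md sha256 -in ./csr/${registration_id}.csr.pem -out ./certs/${registration_id}.cert.pem
+      cat ./certs/${registration_id}.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/${registration_id}-full-chain.cert.pem
+    done
+    ```
+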
+ > [!WARNING]
- > The text for the certificates only contains public key information.
+ > The text for the certificates only contains public key information.
>
- > However, the device must also have access to the private key for the device certificate. This is necessary because the device must perform verification using that key at runtime when attempting provisioning. The sensitivity of this key is one of the main reasons it is recommended to use hardware-based storage in a real HSM to help secure private keys.
+ > However, the device must also have access to the private key for the device certificate. This is necessary because the device must perform verification using that key at runtime when it attempts to provision. The sensitivity of this key is one of the main reasons it is recommended to use hardware-based storage in a real HSM to help secure private keys.
-4. Delete *./certs/new-device.cert.pem*, and repeat steps 1-3 for a second device with device ID `custom-hsm-device-02`. You must delete *./certs/new-device.cert.pem* or certificate generation will fail for the second device. Only the full chain certificate files will be used by this article. Use the following values for the second device:
+You'll use the following files in the rest of this tutorial:
- | Description | Value |
- | :- | : |
- | Subject Name | `custom-hsm-device-02` |
- | Full certificate chain file | *./certs/new-device-02-full-chain.cert.pem* |
- | Private key filename | *private/new-device-02.key.pem* |
-
+| Certificate | File | Description |
+| - | | - |
+| root CA certificate | *certs/azure-iot-test-only.root.ca.cert.pem* | Will be uploaded to DPS and verified. |
+| intermediate CA certificate | *certs/azure-iot-test-only.intermediate.cert.pem* | Will be used to create an enrollment group in DPS. |
+| device-01 private key | *private/device-01.key.pem* | Used by the device to verify ownership of the device certificate during authentication with DPS. |
+| device-01 full chain certificate | *certs/device-01-full-chain.cert.pem* | Presented by the device to authenticate and register with DPS. |
+| device-02 private key | *private/device-02.key.pem* | Used by the device to verify ownership of the device certificate during authentication with DPS. |
+| device-02 full chain certificate | *certs/device-02-full-chain.cert.pem* | Presented by the device to authenticate and register with DPS. |
## Verify ownership of the root certificate
-> [!NOTE]
-> As of July 1st, 2021, you can perform automatic verification of certificate via [automatic verification](how-to-verify-certificates.md#automatic-verification-of-intermediate-or-root-ca-through-self-attestation)
->
+For DPS to be able to validate the device's certificate chain during authentication, you must upload and verify ownership of the root CA certificate. Because you created the root CA certificate in the last section, you'll auto-verify that it's valid when you upload it. Alternatively, you can do manual verification of the certificate if you're using a CA certificate from a third party. To learn more about verifying CA certificates, see [How to do proof-of-possession for X.509 CA certificates with your Device Provisioning Service](how-to-verify-certificates.md).
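+
+If you prefer to script this step, the Azure CLI includes an `az iot dps certificate create` command that uploads a CA certificate to a DPS instance. The following is a sketch with placeholder resource names; the `--verified` flag, which marks the certificate as verified on upload, may not be available in older CLI versions, so check `az iot dps certificate create --help` before relying on it:
+
+```bash
+az iot dps certificate create \
+  --dps-name my-dps-instance \
+  --resource-group my-resource-group \
+  --certificate-name azure-iot-test-only-root \
+  --path ./certs/azure-iot-test-only.root.ca.cert.pem \
+  --verified true
+```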
-1. Using the directions from [Register the public part of an X.509 certificate and get a verification code](how-to-verify-certificates.md#register-the-public-part-of-an-x509-certificate-and-get-a-verification-code), upload the root certificate (`./certs/azure-iot-test-only.root.ca.cert.pem`) and get a verification code from DPS.
+To add the root CA certificate, follow these steps:
-2. Once you have a verification code from DPS for the root certificate, run the following command from your certificate script working directory to generate a verification certificate.
-
- The verification code given here is only an example. Use the code you generated from DPS.
+1. Sign in to the Azure portal, select the **All resources** button on the left-hand menu and open your Device Provisioning Service.
- ```Bash
- ./certGen.sh create_verification_certificate 1B1F84DE79B9BD5F16D71E92709917C2A1CA19D5A156CB9F
- ```
+1. Open **Certificates** from the left-hand menu and then select **+ Add** at the top of the panel to add a new certificate.
- This script creates a certificate signed by the root certificate with subject name set to the verification code. This certificate allows DPS to verify you have access to the private key of the root certificate. Notice the location of the verification certificate in the output of the script. This certificate is generated in `.pfx` format.
+1. Enter a friendly display name for your certificate. Browse to the location of the root CA certificate file `certs/azure-iot-test-only.root.ca.cert.pem`. Select **Upload**.
- ```output
- Leaf Device PFX Certificate Generated At:
- --
- ./certs/verification-code.cert.pfx
- ```
+1. Select the box next to **Set certificate status to verified on upload**.
+
+    :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/add-root-certificate.png" alt-text="Screenshot that shows adding the root C A certificate and the set certificate status to verified on upload box selected.":::
-3. As mentioned in [Upload the signed verification certificate](how-to-verify-certificates.md#upload-the-signed-verification-certificate), upload the verification certificate, and click **Verify** in DPS to complete proof of possession for the root certificate.
+1. Select **Save**.
+1. Make sure your certificate is shown in the certificate tab with a status of *Verified*.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/verify-root-certificate.png" alt-text="Screenshot that shows the verified root C A certificate in the list of certificates.":::
## Update the certificate store on Windows-based devices
On non-Windows devices, you can pass the certificate chain from the code as the
On Windows-based devices, you must add the signing certificates (root and intermediate) to a Windows [certificate store](/windows/win32/secauthn/certificate-stores). Otherwise, the signing certificates won't be transported to DPS by a secure channel with Transport Layer Security (TLS). > [!TIP]
-> It is also possible to use OpenSSL instead of secure channel (Schannel) with the C SDK. For more information on using OpenSSL, see [Using OpenSSL in the SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md#using-openssl-in-the-sdk).
+> It's also possible to use OpenSSL instead of secure channel (Schannel) with the C SDK. For more information on using OpenSSL, see [Using OpenSSL in the SDK](https://github.com/Azure/azure-iot-sdk-c/blob/master/doc/devbox_setup.md#using-openssl-in-the-sdk).
To add the signing certificates to the certificate store in Windows-based devices:
-1. In a Git bash prompt, navigate to the `certs` subdirectory that contains your signing certificates and convert them to `.pfx` as follows.
+1. In a Git bash prompt, convert your signing certificates to `.pfx` as follows.
- root certificate:
+ root CA certificate:
```bash
- winpty openssl pkcs12 -inkey ../private/azure-iot-test-only.root.ca.key.pem -in ./azure-iot-test-only.root.ca.cert.pem -export -out ./root.pfx
+ openssl pkcs12 -inkey ./private/azure-iot-test-only.root.ca.key.pem -in ./certs/azure-iot-test-only.root.ca.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/root.pfx
```
-
- intermediate certificate:
+
+ intermediate CA certificate:
```bash
- winpty openssl pkcs12 -inkey ../private/azure-iot-test-only.intermediate.key.pem -in ./azure-iot-test-only.intermediate.cert.pem -export -out ./intermediate.pfx
+ openssl pkcs12 -inkey ./private/azure-iot-test-only.intermediate.key.pem -in ./certs/azure-iot-test-only.intermediate.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/intermediate.pfx
``` 2. Right-click the Windows **Start** button. Then left-click **Run**. Enter *certmgr.msc* and click **Ok** to start certificate manager MMC snap-in.
To add the signing certificates to the certificate store in Windows-based device
Your signing certificates are now trusted on the Windows-based device and the full chain can be transported to DPS. -- ## Create an enrollment group
-1. Sign in to the Azure portal, select the **All resources** button on the left-hand menu and open your Device Provisioning Service.
+1. From your DPS instance in the Azure portal, select the **Manage enrollments** tab, then select the **Add enrollment group** button at the top.
-2. Select the **Manage enrollments** tab, then select the **Add enrollment group** button at the top.
+1. In the **Add Enrollment Group** panel, enter the following information, then select **Save**.
-3. In the **Add Enrollment Group** panel, enter the following information, then press the **Save** button.
-
- ![Add enrollment group for X.509 attestation in the portal](./media/tutorial-custom-hsm-enrollment-group-x509/custom-hsm-enrollment-group-x509.png#lightbox)
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/custom-hsm-enrollment-group-x509.png" alt-text="Screenshot that shows adding an enrollment group in the portal.":::
| Field | Value | | :-- | :-- |
Your signing certificates are now trusted on the Windows-based device and the fu
| **Certificate Type** | Select **Intermediate Certificate** | | **Primary certificate .pem or .cer file** | Navigate to the intermediate you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*). This intermediate certificate is signed by the root certificate that you already uploaded and verified. DPS trusts that root once it is verified. DPS can verify the intermediate provided with this enrollment group is truly signed by the trusted root. DPS will trust each intermediate truly signed by that root certificate, and therefore be able to verify and trust leaf certificates signed by the intermediate. | - ## Configure the provisioning device code In this section, you update the sample code with your Device Provisioning Service instance information. If a device is authenticated, it will be assigned to an IoT hub linked to the Device Provisioning Service instance configured in this section. 1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service and note the **_ID Scope_** value.
- ![Extract Device Provisioning Service endpoint information from the portal blade](./media/quick-create-simulated-device-x509/copy-id-scope.png)
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot that shows the ID scope on the DPS overview pane.":::
2. Launch Visual Studio and open the new solution file that was created in the `cmake` directory you created in the root of the azure-iot-sdk-c git repository. The solution file is named `azure_iot_sdks.sln`.
-3. In Solution Explorer for Visual Studio, navigate to **Provisioning_Samples > prov_dev_client_sample > Source Files** and open *prov_dev_client_sample.c*.
+3. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > prov_dev_client_sample > Source Files** and open *prov_dev_client_sample.c*.
4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
In this section, you update the sample code with your Device Provisioning Servic
//hsm_type = SECURE_DEVICE_TYPE_SYMMETRIC_KEY; ```
-6. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
+6. Save your changes.
+7. Right-click the **prov\_dev\_client\_sample** project and select **Set as Startup Project**.
## Configure the custom HSM stub code
-The specifics of interacting with actual secure hardware-based storage vary depending on the hardware. As a result, the certificate chains used by the simulated devices in this tutorial will be hardcoded in the custom HSM stub code. In a real-world scenario, the certificate chain would be stored in the actual HSM hardware to provide better security for sensitive information. Methods similar to the stub methods used in this sample would then be implemented to read the secrets from that hardware-based storage.
+The specifics of interacting with actual secure hardware-based storage vary depending on the device hardware. The certificate chains used by the simulated devices in this tutorial will be hardcoded in the custom HSM stub code. In a real-world scenario, the certificate chain would be stored in the actual HSM hardware to provide better security for sensitive information. Methods similar to the stub methods used in this sample would then be implemented to read the secrets from that hardware-based storage.
-While HSM hardware is not required, it is recommended to protect sensitive information, like the certificate's private key. If an actual HSM was being called by the sample, the private key would not be present in the source code. Having the key in the source code exposes the key to anyone that can view the code. This is only done in this article to assist with learning.
+While HSM hardware isn't required, it is recommended to protect sensitive information, like the certificate's private key. If an actual HSM was being called by the sample, the private key wouldn't be present in the source code. Having the key in the source code exposes the key to anyone that can view the code. This is only done in this article to assist with learning.
-To update the custom HSM stub code to simulate the identity of the device with ID `custom-hsm-device-01`, perform the following steps:
+To update the custom HSM stub code to simulate the identity of the device with ID `device-01`, perform the following steps:
-1. In Solution Explorer for Visual Studio, navigate to **Provisioning_Samples > custom_hsm_example > Source Files** and open *custom_hsm_example.c*.
+1. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > custom_hsm_example > Source Files** and open *custom_hsm_example.c*.
2. Update the string value of the `COMMON_NAME` string constant using the common name you used when generating the device certificate. ```c
- static const char* const COMMON_NAME = "custom-hsm-device-01";
+ static const char* const COMMON_NAME = "device-01";
```
-3. In the same file, you need to update the string value of the `CERTIFICATE` constant string using your certificate chain text you saved in *./certs/new-device-01-full-chain.cert.pem* after generating your certificates.
+3. Update the string value of the `CERTIFICATE` constant string using the certificate chain you saved in *./certs/device-01-full-chain.cert.pem* after generating your certificates.
The syntax of certificate text must follow the pattern below with no extra spaces or parsing done by Visual Studio.
To update the custom HSM stub code to simulate the identity of the device with I
"--END CERTIFICATE--"; ```
- Updating this string value correctly in this step can be very tedious and subject to error. To generate the proper syntax in your Git Bash prompt, copy and paste the following bash shell commands into your Git Bash command prompt, and press **ENTER**. These commands will generate the syntax for the `CERTIFICATE` string constant value.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `CERTIFICATE` string constant value and write it to the output.
```Bash
- input="./certs/new-device-01-full-chain.cert.pem"
- bContinue=true
- prev=
- while $bContinue; do
- if read -r next; then
- if [ -n "$prev" ]; then
- echo "\"$prev\\n\""
- fi
- prev=$next
- else
- echo "\"$prev\";"
- bContinue=false
- fi
- done < "$input"
+ sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' ./certs/device-01-full-chain.cert.pem
```
- Copy and paste the output certificate text for the new constant value.
-
+ Copy and paste the output certificate text for the constant value.
-4. In the same file, the string value of the `PRIVATE_KEY` constant must also be updated with the private key for your device certificate.
+4. Update the string value of the `PRIVATE_KEY` constant with the private key for your device certificate.
The syntax of the private key text must follow the pattern below with no extra spaces or parsing done by Visual Studio.
To update the custom HSM stub code to simulate the identity of the device with I
"--END RSA PRIVATE KEY--"; ```
- Updating this string value correctly in this step can also be very tedious and subject to error. To generate the proper syntax in your Git Bash prompt, copy and paste the following bash shell commands, and press **ENTER**. These commands will generate the syntax for the `PRIVATE_KEY` string constant value.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `PRIVATE_KEY` string constant value and write it to the output.
```Bash
- input="./private/new-device-01.key.pem"
- bContinue=true
- prev=
- while $bContinue; do
- if read -r next; then
- if [ -n "$prev" ]; then
- echo "\"$prev\\n\""
- fi
- prev=$next
- else
- echo "\"$prev\";"
- bContinue=false
- fi
- done < "$input"
+ sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' ./private/device-01.key.pem
```
- Copy and paste the output private key text for the new constant value.
+ Copy and paste the output private key text for the constant value.
-5. Save *custom_hsm_example.c*.
+5. Save your changes.
-6. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. When prompted to rebuild the project, select **Yes** to rebuild the project before running.
+6. Right-click the **custom_hsm_example** project and select **Build**.
- The following output is an example of simulated device `custom-hsm-device-01` successfully booting up, and connecting to the provisioning service. The device was assigned to an IoT hub and registered:
+ > [!IMPORTANT]
+ > You must build the **custom_hsm_example** project before you build the rest of the solution in the next section.
+
+### Run the sample
+
+1. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. When prompted to rebuild the project, select **Yes** to rebuild the project before running.
+
+ The following output is an example of simulated device `device-01` successfully booting up, and connecting to the provisioning service. The device was assigned to an IoT hub and registered:
+
+ ```output
+ Provisioning API Version: 1.8.0
- ```cmd
- Provisioning API Version: 1.3.9
-
Registering Device
-
+    Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
+    Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
-
- Registration Information received from service: test-docs-hub.azure-devices.net, deviceId: custom-hsm-device-01
+
+ Registration Information received from service: contoso-hub-2.azure-devices.net, deviceId: device-01
Press enter key to exit: ```
-7. In the portal, navigate to the IoT hub linked to your provisioning service and select the **IoT devices** tab. On successful provisioning of the X.509 device to the hub, its device ID appears on the **IoT devices** blade, with *STATUS* as **enabled**. You might need to press the **Refresh** button at the top.
-
- ![Custom HSM device is registered with the IoT hub](./media/tutorial-custom-hsm-enrollment-group-x509/hub-provisioned-custom-hsm-x509-device.png)
-
-8. Repeat steps 1-7 for a second device with device ID `custom-hsm-device-02`. Use the following values for that device:
+1. Repeat the steps in [Configure the custom HSM stub code](#configure-the-custom-hsm-stub-code) for your second device (`device-02`) and run the sample again. Use the following values for that device:
| Description | Value | | :- | : |
- | `COMMON_NAME` | `"custom-hsm-device-02"` |
- | Full certificate chain | Generate the text using `input="./certs/new-device-02-full-chain.cert.pem"` |
- | Private key | Generate the text using `input="./private/new-device-02.key.pem"` |
+ | Common name | `"device-02"` |
+ | Full certificate chain | Generate the text using *./certs/device-02-full-chain.cert.pem* |
+ | Private key | Generate the text using *./private/device-02.key.pem* |
- The following output is an example of simulated device `custom-hsm-device-02` successfully booting up, and connecting to the provisioning service. The device was assigned to an IoT hub and registered:
+ The following output is an example of simulated device `device-02` successfully booting up, and connecting to the provisioning service. The device was assigned to an IoT hub and registered:
+
+ ```output
+ Provisioning API Version: 1.8.0
- ```cmd
- Provisioning API Version: 1.3.9
-
Registering Device
-
+    Provisioning Status: PROV_DEVICE_REG_STATUS_CONNECTED
+    Provisioning Status: PROV_DEVICE_REG_STATUS_ASSIGNING
-
- Registration Information received from service: test-docs-hub.azure-devices.net, deviceId: custom-hsm-device-02
+
+ Registration Information received from service: contoso-hub-2.azure-devices.net, deviceId: device-02
Press enter key to exit: ```
+## Confirm your device provisioning registration
+
+Examine the registration records of the enrollment group to see the registration details for your devices:
+
+1. In Azure portal, go to your Device Provisioning Service.
+
+1. In the **Settings** menu, select **Manage enrollments**.
+
+1. Select **Enrollment Groups**. The X.509 enrollment group entry that you created previously, *custom-hsm-devices*, should appear in the list.
+
+1. Select the enrollment entry. Then select the **Registration Records** tab to see the devices that have been registered through the enrollment group. The IoT hub that each of your devices was assigned to, their device IDs, and the dates and times they were registered appear in the list.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/enrollment-group-registration-records.png" alt-text="Screenshot that shows the registration records tab for the enrollment group on Azure portal.":::
+
+1. You can select one of the devices to see further details for that device.
+
+To verify the devices on your IoT hub:
+
+1. In Azure portal, go to the IoT hub that your device was assigned to.
+
+1. In the **Device management** menu, select **Devices**.
+
+1. If your devices were provisioned successfully, their device IDs, *device-01* and *device-02*, should appear in the list, with **Status** set as *enabled*. If you don't see your devices, select **Refresh**.
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/hub-provisioned-custom-hsm-x509-device.png" alt-text="Screenshot that shows the devices are registered with the I o T hub in Azure portal.":::
## Clean up resources When you're finished testing and exploring this device client sample, use the following steps to delete all resources created by this tutorial. 1. Close the device client sample output window on your machine.
-1. From the left-hand menu in the Azure portal, select **All resources** and then select your Device Provisioning Service. Open **Manage Enrollments** for your service, and then select the **Enrollment Groups** tab. Select the check box next to the *Group Name* of the device group you created in this tutorial, and press the **Delete** button at the top of the pane.
+
+1. From the left-hand menu in the Azure portal, select **All resources** and then select your Device Provisioning Service. Open **Manage Enrollments** for your service, and then select the **Enrollment Groups** tab. Select the check box next to the *Group Name* of the device group you created in this tutorial, and press the **Delete** button at the top of the pane.
+ 1. Click **Certificates** in DPS. For each certificate you uploaded and verified in this tutorial, click the certificate and click the **Delete** button to remove it.+ 1. From the left-hand menu in the Azure portal, select **All resources** and then select your IoT hub. Open **IoT devices** for your hub. Select the check box next to the *DEVICE ID* of the device that you registered in this tutorial. Click the **Delete** button at the top of the pane. ## Next steps
iot-edge How To Provision Single Device Windows Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-windows-symmetric.md
A Windows device.
IoT Edge with Windows containers requires Windows version 1809/build 17763, which is the latest [Windows long term support build](/windows/release-information/). Be sure to review the [supported systems list](support.md#operating-systems) for a list of supported SKUs.
+Note that the Windows versions on both the container and host must match. For more information, see [Could not start module due to OS mismatch](troubleshoot-common-errors.md#could-not-start-module-due-to-os-mismatch).
+ <!-- Register your device and View provisioning information H2s and content --> [!INCLUDE [iot-edge-register-device-symmetric.md](../../includes/iot-edge-register-device-symmetric.md)]
iot-edge Troubleshoot Common Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/troubleshoot-common-errors.md
You can set DNS server for each module's *createOptions* in the IoT Edge deploym
Be sure to set this configuration for the *edgeAgent* and *edgeHub* modules as well.
+<!-- 1.1 -->
+## Could not start module due to OS mismatch
+
+ **Observed behavior:**
+
+The edgeHub module fails to start in IoT Edge version 1.1.
+
+**Root cause:**
+
+The Windows module's base image uses a version of Windows that is incompatible with the version of Windows on the host. Windows version 1809 (build 17763) is needed as the base layer for the module image, but a different version is in use.
+
+**Resolution:**
+
+Check the Windows version on the host and in the module's base image by following [Troubleshoot host and container image mismatches](/virtualization/windowscontainers/deploy-containers/update-containers#troubleshoot-host-and-container-image-mismatches). If the versions differ, update them to Windows version 1809 (build 17763) and rebuild the Docker image used for that module.
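+
+One way to compare the two versions is to check the Windows build on the host and the Windows build recorded in the module's base image metadata. The following is a sketch; the image name is an example and may differ from the module image you deploy:
+
+```bash
+# On the host, run "ver" from a command prompt to see the Windows build (for example, 10.0.17763.xxxx).
+# Then read the Windows build baked into a locally pulled module image:
+docker image inspect --format '{{.OsVersion}}' mcr.microsoft.com/azureiotedge-hub:1.1
+```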
+
+<!-- end 1.1 -->
+ ## IoT Edge hub fails to start **Observed behavior:**
iot-hub-device-update Configure Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/configure-private-endpoints.md
The following sections show you how to approve or reject a private endpoint conn
5. If there are any connections that are pending, you'll see a connection listed with **Pending** in the provisioning state. :::image type="content" source="./media/configure-private-endpoints/device-update-approval.png" alt-text="Screenshot showing a Pending Connection in the Networking tab in Device Update account.":::
-## Use Azure CLI
-
-### Create a private endpoint
-
-To create a private endpoint, use the [az network private-endpoint create](/cli/azure/network/private-endpoint?#az-network-private-endpoint-create) method as shown in the following example:
-
-```azurecli-interactive
-az network private-endpoint create \
- -g <RESOURCE GROUP NAME> \
- -n <PRIVATE ENDPOINT NAME> \
- --vnet-name <VIRTUAL NETWORK NAME> \
- --subnet <SUBNET NAME> \
- --private-connection-resource-id "/subscriptions/<SUBSCRIPTION ID>/resourceGroups/<RESOURCE GROUP NAME>/providers/Microsoft.DeviceUpdate/account/<ACCOUNT NAME> \
- --connection-name <PRIVATE LINK SERVICE CONNECTION NAME> \
- --location <LOCATION> \
- --group-id DeviceUpdate
- --request-message "Optional message"
- --manual-request
-```
-
-For descriptions of the parameters used in the example, see documentation for [az network private-endpoint create](/cli/azure/network/private-endpoint?#az-network-private-endpoint-create). A few points to note in this example are:
--- For `private-connection-resource-id`, specify the resource ID of the **account**. -- For `group-id`, specify `DeviceUpdate`.-
-To delete a private endpoint, use the [az network private-endpoint delete](/cli/azure/network/private-endpoint?#az-network-private-endpoint-delete) method as shown in the following example:
-
-```azurecli-interactive
-az network private-endpoint delete -g <RESOURCE GROUP NAME> -n <PRIVATE ENDPOINT NAME>
-```
-
-### Approve/reject a private endpoint connection
-
-```azurecli-interactive
-az iot device-update account private-endpoint-connection set \
- -n <ACCOUNT NAME> \
- --cn <PRIVATE LINK SERVICE CONNECTION NAME> \
- --status <Approved/Rejected> \
- --desc 'Optional description'
-```
-- ## Next steps * [Learn about network security concepts](network-security.md).
iot-hub Iot Hub Devguide Device Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-device-twins.md
The solution back end operates on the device twin using the following atomic ope
* **Replace tags**. This operation enables the solution back end to completely overwrite all existing tags and substitute a new JSON document for `tags`.
-* **Receive twin notifications**. This operation allows the solution back end to be notified when the twin is modified. To do so, your IoT solution needs to create a route and to set the Data Source equal to *twinChangeEvents*. By default, no such routes pre-exist, so no twin notifications are sent. If the rate of change is too high, or for other reasons such as internal failures, the IoT Hub might send only one notification that contains all changes. Therefore, if your application needs reliable auditing and logging of all intermediate states, you should use device-to-cloud messages. The twin notification message includes properties and body.
-
- - Properties
-
- | Name | Value |
- | | |
- $content-type | application/json |
- $iothub-enqueuedtime | Time when the notification was sent |
- $iothub-message-source | twinChangeEvents |
- $content-encoding | utf-8 |
- deviceId | ID of the device |
- hubName | Name of IoT Hub |
- operationTimestamp | [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) timestamp of operation |
- iothub-message-schema | twinChangeNotification |
- opType | "replaceTwin" or "updateTwin" |
-
- Message system properties are prefixed with the `$` symbol.
-
- - Body
-
- This section includes all the twin changes in a JSON format. It uses the same format as a patch, with the difference that it can contain all twin sections: tags, properties.reported, properties.desired, and that it contains the "$metadata" elements. For example,
-
- ```json
- {
- "properties": {
- "desired": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- },
- "reported": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- }
- }
- }
- ```
+* **Receive twin notifications**. This operation allows the solution back end to be notified when the twin is modified. To do so, your IoT solution needs to create a route and to set the Data Source equal to *twinChangeEvents*. By default, no such route exists, so no twin notifications are sent. If the rate of change is too high, or for other reasons such as internal failures, the IoT Hub might send only one notification that contains all changes. Therefore, if your application needs reliable auditing and logging of all intermediate states, you should use device-to-cloud messages. To learn more about the properties and body returned in the twin notification message, see [Non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md).
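+
+For example, one way to create such a route is with the Azure CLI. The following is a sketch that assumes the built-in `events` endpoint and uses a hypothetical hub and route name:
+
+```bash
+az iot hub route create --hub-name my-hub --route-name twin-change-route \
+  --source twinchangeevents --endpoint-name events --enabled true
+```
+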
All the preceding operations support [Optimistic concurrency](iot-hub-devguide-device-twins.md#optimistic-concurrency) and require the **ServiceConnect** permission, as defined in [Control access to IoT Hub](iot-hub-dev-guide-sas.md).
iot-hub Iot Hub Devguide Identity Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-identity-registry.md
A more complex implementation could include the information from [Azure Monitor]
## Device and module lifecycle notifications
-IoT Hub can notify your IoT solution when a device identity is created or deleted by sending lifecycle notifications. To do so, your IoT solution needs to create a route and to set the Data Source equal to *DeviceLifecycleEvents*. By default, no lifecycle notifications are sent, that is, no such routes pre-exist. By creating a route with Data Source equal to *DeviceLifecycleEvents*, lifecycle events will be sent for both device identities and module identities; however, the message contents will differ depending on whether the events are generated for module identities or device identities. It should be noted that for IoT Edge modules, the module identity creation flow is different than for other modules, as a result for IoT Edge modules the create notification is only sent if the corresponding IoT Edge Device for the updated IoT Edge module identity is running. For all other modules, lifecycle notifications are sent whenever the module identity is updated on the IoT Hub side. The notification message includes properties, and body.
-
-Properties: Message system properties are prefixed with the `$` symbol.
-
-Notification message for device:
-
-| Name | Value |
-| | |
-|$content-type | application/json |
-|$iothub-enqueuedtime | Time when the notification was sent |
-|$iothub-message-source | deviceLifecycleEvents |
-|$content-encoding | utf-8 |
-|opType | **createDeviceIdentity** or **deleteDeviceIdentity** |
-|hubName | Name of IoT Hub |
-|deviceId | ID of the device |
-|operationTimestamp | ISO8601 timestamp of operation |
-|iothub-message-schema | deviceLifecycleNotification |
-
-Body: This section is in JSON format and represents the twin of the created device identity. For example,
-
-```json
-{
- "deviceId":"11576-ailn-test-0-67333793211",
- "etag":"AAAAAAAAAAE=",
- "properties": {
- "desired": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- },
- "reported": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- }
- }
-}
-```
-Notification message for module:
-
-| Name | Value |
-| | |
-$content-type | application/json |
-$iothub-enqueuedtime | Time when the notification was sent |
-$iothub-message-source | moduleLifecycleEvents |
-$content-encoding | utf-8 |
-opType | **createModuleIdentity** or **deleteModuleIdentity** |
-hubName | Name of IoT Hub |
-moduleId | ID of the module |
-operationTimestamp | ISO8601 timestamp of operation |
-iothub-message-schema | moduleLifecycleNotification |
-
-Body: This section is in JSON format and represents the twin of the created module identity. For example,
-
-```json
-{
- "deviceId":"11576-ailn-test-0-67333793211",
- "moduleId":"tempSensor",
- "etag":"AAAAAAAAAAE=",
- "properties": {
- "desired": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- },
- "reported": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- }
- }
-}
-```
+IoT Hub can notify your IoT solution when a device identity is created or deleted by sending lifecycle notifications. To do so, your IoT solution needs to create a route and to set the Data Source equal to *DeviceLifecycleEvents*. By default, no such routes exist, so no lifecycle notifications are sent. By creating a route with Data Source equal to *DeviceLifecycleEvents*, lifecycle events are sent for both device identities and module identities; however, the message contents differ depending on whether the events are generated for module identities or device identities. Note that for IoT Edge modules, the module identity creation flow is different than for other modules. As a result, for IoT Edge modules the create notification is only sent if the corresponding IoT Edge device for the updated IoT Edge module identity is running. For all other modules, lifecycle notifications are sent whenever the module identity is updated on the IoT Hub side. To learn more about the properties and body returned in the notification message, see [Non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md).
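+
+For example, one way to create a route for lifecycle events is with the Azure CLI. The following is a sketch that assumes the built-in `events` endpoint and uses a hypothetical hub and route name; the optional condition shows filtering on the `opType` application property:
+
+```bash
+az iot hub route create --hub-name my-hub --route-name device-lifecycle-route \
+  --source devicelifecycleevents --endpoint-name events --enabled true \
+  --condition 'opType = "createDeviceIdentity"'
+```
+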
## Device identity properties
iot-hub Iot Hub Devguide Messages Construct https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-messages-construct.md
The **iothub-connection-auth-method** property contains a JSON serialized object
* For information about message size limits in IoT Hub, see [IoT Hub quotas and throttling](iot-hub-devguide-quotas-throttling.md).
-* To learn how to create and read IoT Hub messages in various programming languages, see the [Quickstarts](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs).
+* To learn how to create and read IoT Hub messages in various programming languages, see the [Quickstarts](../iot-develop/quickstart-send-telemetry-iot-hub.md?pivots=programming-language-nodejs).
+
+* To learn about the structure of non-telemetry events generated by IoT Hub, see [IoT Hub non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md).
iot-hub Iot Hub Devguide Module Twins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-devguide-module-twins.md
The solution back end operates on the module twin using the following atomic ope
* **Replace tags**. This operation enables the solution back end to completely overwrite all existing tags and substitute a new JSON document for `tags`.
-* **Receive twin notifications**. This operation allows the solution back end to be notified when the twin is modified. To do so, your IoT solution needs to create a route and to set the Data Source equal to *twinChangeEvents*. By default, no twin notifications are sent, that is, no such routes pre-exist. If the rate of change is too high, or for other reasons such as internal failures, the IoT Hub might send only one notification that contains all changes. Therefore, if your application needs reliable auditing and logging of all intermediate states, you should use device-to-cloud messages. The twin notification message includes properties and body.
-
- - Properties
-
- | Name | Value |
- | | |
- $content-type | application/json |
- $iothub-enqueuedtime | Time when the notification was sent |
- $iothub-message-source | twinChangeEvents |
- $content-encoding | utf-8 |
- deviceId | ID of the device |
- moduleId | ID of the module |
- hubName | Name of IoT Hub |
- operationTimestamp | [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) timestamp of operation |
- iothub-message-schema | twinChangeNotification |
- opType | "replaceTwin" or "updateTwin" |
-
- Message system properties are prefixed with the `$` symbol.
-
- - Body
-
- This section includes all the twin changes in a JSON format. It uses the same format as a patch, with the difference that it can contain all twin sections: tags, properties.reported, properties.desired, and that it contains the ΓÇ£$metadataΓÇ¥ elements. For example,
-
- ```json
- {
- "properties": {
- "desired": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- },
- "reported": {
- "$metadata": {
- "$lastUpdated": "2016-02-30T16:24:48.789Z"
- },
- "$version": 1
- }
- }
- }
- ```
+* **Receive twin notifications**. This operation allows the solution back end to be notified when the twin is modified. To do so, your IoT solution needs to create a route and to set the Data Source equal to *twinChangeEvents*. By default, no such route exists, so no twin notifications are sent. If the rate of change is too high, or for other reasons such as internal failures, the IoT Hub might send only one notification that contains all changes. Therefore, if your application needs reliable auditing and logging of all intermediate states, you should use device-to-cloud messages. To learn more about the properties and body returned in the twin notification message, see [Non-telemetry event schemas](iot-hub-non-telemetry-event-schema.md).
All the preceding operations support [Optimistic concurrency](iot-hub-devguide-device-twins.md#optimistic-concurrency) and require the **ServiceConnect** permission, as defined in the [Control Access to IoT Hub](iot-hub-devguide-security.md) article.
iot-hub Iot Hub Non Telemetry Event Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-non-telemetry-event-schema.md
+
+ Title: Azure IoT Hub non-telemetry event schemas
+description: This article provides the properties and schema for Azure IoT Hub non-telemetry events. It lists the available event types, an example event, and event properties.
+++ Last updated : 07/01/2022++++
+# Azure IoT Hub non-telemetry event schemas
+
+This article provides the properties and schemas for non-telemetry events emitted by Azure IoT Hub. Non-telemetry events are different from device-to-cloud and cloud-to-device messages in that they are emitted directly by IoT Hub in response to specific kinds of state changes associated with your devices. For example, lifecycle changes like a device or module being created or deleted, or connection state changes like a device or module connecting or disconnecting. To observe non-telemetry events, you must have an appropriate message route configured. To learn more about IoT Hub message routing, see [IoT Hub message routing](iot-hub-devguide-messages-d2c.md).
+
+## Available event types
+
+Azure IoT Hub emits non-telemetry events in the following categories:
+
+| Event category | Description |
+| - | -- |
+| Device connection state events | Emitted when a device connects to or disconnects from an IoT hub. |
+| Device lifecycle events | Emitted when a device or module is created on or deleted from an IoT hub. |
+| Device twin change events | Emitted when a device or module twin is changed or replaced. |
+| Digital twin change events | Emitted when a device or module's digital twin is changed or replaced. |
+
+## Common event properties
+
+Non-telemetry events share several common properties.
+
+### System properties
+
+The following system properties are set by IoT Hub on each event.
+
+| Property | Type |Description | Keyword for routing query |
+| -- | - | - | - |
+| content-encoding | string | utf-8 | $contentEncoding |
+| content-type | string | application/json | $contentType |
+| correlation-id | string | A unique ID that identifies the event. | $correlationId |
+| user-id | string | The name of IoT Hub that generated the event. | $userId |
+| iothub-connection-device-id | string | The device ID. | $connectionDeviceId |
+| iothub-connection-module-id | string | The module ID. This property is output only for module lifecycle and twin change events. | $connectionModuleId |
+| iothub-enqueuedtime | number | Date and time when the notification was sent. In routing queries, use an [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) timestamp; for example, `$enqueuedTime > "2022-06-06T22:56:06Z"` | $enqueuedTime |
+| iothub-message-source | string | The event category that identifies the message source. For example, *deviceLifecycleEvents*. | N/A |
+
+### Application properties
+
+The following application properties are set by IoT Hub on each event.
+
+| Property | Type |Description |
+| -- | - | - |
+| deviceId | string | The device ID. |
+| hubName | string | The name of the IoT Hub that generated the event. |
+| iothub-message-schema | string | The message schema associated with the event category; for example, *deviceLifecycleNotification*. |
+| moduleId | string | The module ID. This property is output only for module lifecycle and twin change events. |
+| operationTimestamp | string | The [ISO8601](https://en.wikipedia.org/wiki/ISO_8601) timestamp of the operation. |
+| opType | string | The identifier for the operation that generated the event. For example, *createDeviceIdentity* or *deleteDeviceIdentity*. |
+
+In routing queries, use the property name. For example, `deviceId = "my-device"`.
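+
+These application properties are also available to back-end code that consumes routed events. As a purely illustrative sketch (not part of the upstream article), the following Python snippet shows how a consumer might branch on them once an event's application properties and body have been parsed into dictionaries; the handler behavior (print statements) is a hypothetical placeholder for your own back-end logic.
+
+```python
+# Illustrative sketch only: branch on the application properties of a routed
+# non-telemetry event. The property names match the table above; the actions
+# taken here are placeholders.
+def dispatch_event(app_properties: dict, payload: dict) -> None:
+    schema = app_properties.get("iothub-message-schema")
+    op_type = app_properties.get("opType")
+    device_id = app_properties.get("deviceId")
+
+    if schema in ("deviceLifecycleNotification", "moduleLifecycleNotification"):
+        print(f"{op_type} for {device_id}")
+    elif schema == "twinChangeNotification":
+        print(f"{op_type} for {device_id}, twin version {payload.get('version')}")
+    elif schema == "deviceConnectionStateNotification":
+        print(f"{op_type} for {device_id}")
+```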
+
+## Connection state events
+
+Connection state events are emitted whenever a device or module connects to or disconnects from the IoT hub.
+
+**Application properties**: The following table shows how application properties are set for connection state events:
+
+| Property | Value |
+| - | -- |
+| iothub-message-schema | deviceConnectionStateNotification |
+| opType | One of the following values: deviceConnected, deviceDisconnected, moduleConnected, or moduleDisconnected. |
+
+**System properties**: The following table shows how system properties are set for connection state events:
+
+| Property | Value |
+| - | -- |
+| iothub-message-source | deviceConnectionStateEvents |
+
+**Body**: The body contains a sequence number. The sequence number is a string representation of a hexadecimal number, so you can use a string comparison to identify the larger of two values; if you convert the string to a number, it's a 256-bit number. The sequence number is strictly increasing, so the latest event has a higher number than earlier events. This ordering is useful if a device connects and disconnects frequently and you want to ensure that only the latest event triggers a downstream action.
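+
+As a minimal illustration (an editorial sketch, not part of the upstream article), the following Python snippet assumes you've already extracted the `sequenceNumber` string from the event payload, as shown in the example that follows, and uses it to decide whether a newly received connection state event is more recent than the last one processed for that device.
+
+```python
+# Minimal sketch: keep only the most recent connection state event per device.
+# The sequence number is a fixed-length hexadecimal string, so parsing it as an
+# integer (or comparing the strings directly) orders events correctly.
+latest_sequence = {}
+
+def is_latest(device_id, sequence_number):
+    """Return True if this event is newer than any previously seen for the device."""
+    value = int(sequence_number, 16)  # a 256-bit number
+    if value > latest_sequence.get(device_id, -1):
+        latest_sequence[device_id] = value
+        return True
+    return False  # stale or duplicate event; safe to ignore
+```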
+
+### Example
+
+The following JSON shows a device connection state event emitted when a device disconnects.
+
+```json
+{
+ "event": {
+ "origin": "contoso-device-1",
+ "module": "",
+ "interface": "",
+ "component": "",
+ "properties": {
+ "system": {
+ "content_encoding": "utf-8",
+ "content_type": "application/json",
+ "correlation_id": "98dcbcf6-3398-c488-c62c-06330e65ea98",
+ "user_id": "contoso-routing-hub"
+ },
+ "application": {
+ "hubName": "contoso-routing-hub",
+ "deviceId": "contoso-device-1",
+ "opType": "deviceDisconnected",
+ "iothub-message-schema": "deviceConnectionStateNotification",
+ "operationTimestamp": "2022-06-01T18:43:04.5561024Z"
+ }
+ },
+ "annotations": {
+ "iothub-connection-device-id": "contoso-device-1",
+ "iothub-enqueuedtime": 1654109018051,
+ "iothub-message-source": "deviceConnectionStateEvents",
+ "x-opt-sequence-number": 72,
+ "x-opt-offset": "37344",
+ "x-opt-enqueued-time": 1654109018176
+ },
+ "payload": {
+ "sequenceNumber": "000000000000000001D8713FF7E0851400000002000000000000000000000007"
+ }
+ }
+}
+```
+
+## Device lifecycle events
+
+Device lifecycle events are emitted whenever a device or module is created in or deleted from the identity registry. For more detail about when device lifecycle events are generated, see [Device and module lifecycle notifications](iot-hub-devguide-identity-registry.md#device-and-module-lifecycle-notifications).
+
+**Application properties**: The following table shows how application properties are set for device lifecycle events:
+
+| Property | Value |
+| - | -- |
+| iothub-message-schema | deviceLifecycleNotification |
+| opType | One of the following values: createDeviceIdentity, deleteDeviceIdentity, createModuleIdentity, or deleteModuleIdentity. |
+
+**System properties**: The following table shows how system properties are set for device lifecycle events:
+
+| Property | Value |
+| - | -- |
+| iothub-message-source | deviceLifecycleEvents |
+
+**Body**: The body contains a representation of the device twin or module twin. It includes the device ID and module ID, the twin etag, the version property, and the tags, properties and associated metadata of the twin.
+
+### Example
+
+The following JSON shows a device lifecycle event emitted when a module is created. The event is captured using the `az iot hub monitor-events` Azure CLI command.
+
+```json
+{
+ "event": {
+ "origin": "contoso-device-2",
+ "module": "module-1",
+ "interface": "",
+ "component": "",
+ "properties": {
+ "system": {
+ "content_encoding": "utf-8",
+ "content_type": "application/json",
+ "correlation_id": "c5a4e6986c",
+ "user_id": "contoso-routing-hub"
+ },
+ "application": {
+ "hubName": "contoso-routing-hub",
+ "deviceId": "contoso-device-2",
+ "operationTimestamp": "2022-05-27T18:49:38.4904785Z",
+ "moduleId": "module-1",
+ "opType": "createModuleIdentity",
+ "iothub-message-schema": "moduleLifecycleNotification"
+ }
+ },
+ "annotations": {
+ "iothub-connection-device-id": "contoso-device-2",
+ "iothub-connection-module-id": "module-1",
+ "iothub-enqueuedtime": 1653677378534,
+ "iothub-message-source": "deviceLifecycleEvents",
+ "x-opt-sequence-number": 62,
+ "x-opt-offset": "31768",
+ "x-opt-enqueued-time": 1653677378643
+ },
+ "payload": {
+ "deviceId": "contoso-device-2",
+ "moduleId": "module-1",
+ "etag": "AAAAAAAAAAE=",
+ "version": 2,
+ "properties": {
+ "desired": {
+ "$metadata": {
+ "$lastUpdated": "0001-01-01T00:00:00Z"
+ },
+ "$version": 1
+ },
+ "reported": {
+ "$metadata": {
+ "$lastUpdated": "0001-01-01T00:00:00Z"
+ },
+ "$version": 1
+ }
+ }
+ }
+ }
+}
+```
+
+## Device twin change events
+
+Device twin change events are emitted whenever a device twin or a module twin is updated or replaced. In some cases, several changes may be packaged in a single event. To learn more, see [Device twin backend operations](iot-hub-devguide-device-twins.md#back-end-operations) or [Module twin backend operations](iot-hub-devguide-module-twins.md#back-end-operations).
+
+**Application properties**: The following table shows how application properties are set for device twin change events:
+
+| Property | Value |
+| - | -- |
+| iothub-message-schema | twinChangeNotification |
+| opType | One of the following values: replaceTwin or updateTwin. |
+
+**System properties**: The following table shows how system properties are set for device twin change events:
+
+| Property | Value |
+| - | -- |
+| iothub-message-source | twinChangeEvents |
+
+**Body**: On an update, the body contains the version property of the twin and the updated tags and properties and their associated metadata. On a replace, the body contains the device ID and module ID, the twin etag, the version property, and all the tags, properties and associated metadata of the device or module twin.
+
+### Example
+
+The following JSON shows a twin change event emitted for an update of a desired property and a tag on a module twin. The event is captured using the `az iot hub monitor-events` Azure CLI command.
+
+```json
+{
+ "event": {
+ "origin": "contoso-device-3",
+ "module": "module-1",
+ "interface": "",
+ "component": "",
+ "properties": {
+ "system": {
+ "content_encoding": "utf-8",
+ "content_type": "application/json",
+ "correlation_id": "4d1f1e2e74f",
+ "user_id": "contoso-routing-hub"
+ },
+ "application": {
+ "hubName": "contoso-routing-hub",
+ "deviceId": "contoso-device-3",
+ "operationTimestamp": "2022-06-01T22:27:50.2612586Z",
+ "moduleId": "module-1",
+ "iothub-message-schema": "twinChangeNotification",
+ "opType": "updateTwin"
+ }
+ },
+ "annotations": {
+ "iothub-connection-device-id": "contoso-device-3",
+ "iothub-connection-module-id": "module-1",
+ "iothub-enqueuedtime": 1654122470282,
+ "iothub-message-source": "twinChangeEvents",
+ "x-opt-sequence-number": 17,
+ "x-opt-offset": "12352",
+ "x-opt-enqueued-time": 1654122470329
+ },
+ "payload": {
+ "version": 7,
+ "tags": {
+ "tag1": "new value"
+ },
+ "properties": {
+ "desired": {
+ "property1": "new value",
+ "$metadata": {
+ "$lastUpdated": "2022-06-01T22:27:50.2612586Z",
+ "$lastUpdatedVersion": 6,
+ "property1": {
+ "$lastUpdated": "2022-06-01T22:27:50.2612586Z",
+ "$lastUpdatedVersion": 6
+ }
+ },
+ "$version": 6
+ }
+ }
+ }
+ }
+}
+```
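+
+The payload above carries only the changed tag and desired property along with their metadata. As an editorial illustration (not part of the upstream article), the following Python sketch shows one way a back end that keeps a local copy of each twin might fold such an `updateTwin` payload into its cached copy; the behavior of treating `None` as "clear this key" is an assumption, noted in the comments.
+
+```python
+import copy
+
+def merge_twin_patch(cached_twin, patch):
+    """Recursively merge an updateTwin payload into a locally cached twin.
+
+    Keys in the patch overwrite or extend the cached copy; nested dictionaries
+    are merged rather than replaced. (Assumption: a value of None clears a key,
+    mirroring how desired-property patches remove properties.)
+    """
+    merged = copy.deepcopy(cached_twin)
+    for key, value in patch.items():
+        if value is None:
+            merged.pop(key, None)
+        elif isinstance(value, dict) and isinstance(merged.get(key), dict):
+            merged[key] = merge_twin_patch(merged[key], value)
+        else:
+            merged[key] = value
+    return merged
+```
+
+For a `replaceTwin` event, a consumer would instead discard its cached copy and store the full twin from the payload.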
+
+## Next steps
+
+- To learn about message routing, see [IoT Hub message routing](iot-hub-devguide-messages-d2c.md).
+
+- To learn how to add queries to your message routes, see [IoT Hub message routing query syntax](iot-hub-devguide-routing-query-syntax.md).
+
+- To learn about the structure of device-to-cloud and cloud-to-device messages, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
# Tutorial: Send device data to Azure Storage using IoT Hub message routing
-Use [message routing](iot-hub-devguide-messages-d2c.md) in Azure IoT Hub to send telemetry data from your IoT devices Azure services such as blob storage, Service Bus Queues, Service Bus Topics, and Event Hubs.
+Use [message routing](iot-hub-devguide-messages-d2c.md) in Azure IoT Hub to send telemetry data from your IoT devices to Azure services such as blob storage, Service Bus Queues, Service Bus Topics, and Event Hubs.
Every IoT hub has a default built-in endpoint that is compatible with Event Hubs. You can also create custom endpoints and route messages to other Azure services by defining [routing queries](iot-hub-devguide-routing-query-syntax.md). Each message that arrives at the IoT hub is routed to all endpoints whose routing queries it matches. If a message doesn't match any of the defined routing queries, it is routed to the default endpoint.
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
Cross-region load balancer routes the traffic to the appropriate regional load b
* A health probe can't be configured currently. A default health probe automatically collects availability information about the regional load balancer every 20 seconds.
-* Integration with Azure Kubernetes Service (AKS) is currently unavailable. Loss of connectivity will occur when deploying a cross-region load balancer with the Standard load balancer with AKS cluster deployed in the backend.
## Pricing and SLA Cross-region load balancer shares the [SLA](https://azure.microsoft.com/support/legal/sla/load-balancer/v1_0/) of standard load balancer.
load-balancer Load Balancer Monitor Metrics Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-monitor-metrics-cli.md
For metric definitions and further details, refer to [Monitoring load balancer d
## CLI examples for Load Balancer metrics <!-- Introduction paragraph -->
-The [az monitor metrics](/cli/azure/monitor/metrics/) command is used to view Azure resource metrics. To see the metric definitions available for a Standard Load Balancer, you run the `az monitor metrics list-definitions` command.
+The [az monitor metrics](/cli/azure/monitor/metrics/) command is used to view Azure resource metrics. To see the metric definitions available for a Standard Load Balancer, you run the [az monitor metrics list-definitions](/cli/azure/monitor/metrics#az-monitor-metrics-list-definitions) command.
```azurecli # Display available metric definitions for a Standard Load Balancer resource
az monitor metrics list-definitions --resource <resource_id>
>[!NOTE] >In all the following examples, replace **<resource_id>** with the unique resource id of your Standard Load Balancer.
-To retrieve Standard Load Balancer metrics for a resource, you can use the `az monitor metrics list` command. For example, use the `--metric DipAvailability` option to collect the Health Probe Status metric from a Standard Load Balancer.
+To retrieve Standard Load Balancer metrics for a resource, you can use the [az monitor metrics list](/cli/azure/monitor/metrics#az-monitor-metrics-list) command. For example, use the `--metric DipAvailability` option to collect the Health Probe Status metric from a Standard Load Balancer.
```azurecli
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
Connect-AzAccount
#### Log analytics workspace
-To enable Diagnostic Logs for a Log Analytics workspace, enter these commands. Replace the bracketed values with your values:
+To send resource logs to a Log Analytics workspace, enter these commands. Replace the bracketed values with your values:
```azurepowershell ## Place the load balancer in a variable. ##
Set-AzDiagnosticSetting `
#### Storage account
-To enable Diagnostic Logs in a storage account, enter these commands. Replace the bracketed values with your values:
+To send resource logs to a storage account, enter these commands. Replace the bracketed values with your values:
```azurepowershell ## Place the load balancer in a variable. ##
Set-AzDiagnosticSetting `
#### Event hub
-To enable Diagnostic Logs for an event hub namespace, enter these commands. Replace the bracketed values with your values:
+To send resource logs to an event hub namespace, enter these commands. Replace the bracketed values with your values:
```azurepowershell ## Place the load balancer in a variable. ##
az login
#### Log analytics workspace
-To enable Diagnostic Logs for a Log Analytics workspace, enter these commands. Replace the bracketed values with your values:
+To send resource logs to a Log Analytics workspace, enter these commands. Replace the bracketed values with your values:
```azurecli lbid=$(az network lb show \
az monitor diagnostic-settings create \
#### Storage account
-To enable Diagnostic Logs in a storage account, enter these commands. Replace the bracketed values with your values:
+To send resource logs to a storage account, enter these commands. Replace the bracketed values with your values:
```azurecli lbid=$(az network lb show \
az monitor diagnostic-settings create \
#### Event hub
-To enable Diagnostic Logs for an event hub namespace, enter these commands. Replace the bracketed values with your values:
+To send resource logs to an event hub namespace, enter these commands. Replace the bracketed values with your values:
```azurecli lbid=$(az network lb show \
machine-learning Concept Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-workspace.md
You can also perform the following workspace management tasks:
There are multiple ways to create a workspace:
-* Use the [Azure portal](how-to-manage-workspace.md?tabs=azure-portal#create-a-workspace) for a point-and-click interface to walk you through each step.
+* Use the [Azure portal](quickstart-create-resources.md) for a point-and-click interface to walk you through each step.
* Use the [Azure Machine Learning SDK for Python](how-to-manage-workspace.md?tabs=python#create-a-workspace) to create a workspace on the fly from Python scripts or Jupyter notebooks * Use an [Azure Resource Manager template](how-to-create-workspace-template.md) or the [Azure Machine Learning CLI](how-to-configure-cli.md) when you need to automate or customize the creation with corporate security standards. * If you work in Visual Studio Code, use the [VS Code extension](how-to-manage-resources-vscode.md#create-a-workspace).
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
Access the terminal of a compute instance in your workspace to:
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* A Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* A Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
## Access a terminal
machine-learning How To Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-add-users.md
This article shows how to add users to your data labeling project so that they c
## Prerequisites * An Azure subscription. If you don't have an Azure subscription [create a free account](https://azure.microsoft.com/free) before you begin.
-* An Azure Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
You'll need certain permission levels to follow the steps in this article. If you can't follow one of the steps, contact your administrator to get the appropriate permissions.
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
Unlike classical time series methods, in automated ML, past time-series values a
For this article you need,
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
Automated ML supports model training for computer vision tasks like image classi
# [CLI v2](#tab/CLI-v2)
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
* Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the `ml` extension. # [Python SDK v2 (preview)](#tab/SDK-v2)
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
* The Azure Machine Learning Python SDK v2 (preview) installed.
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
Last updated 03/15/2022
#Customer intent: I'm a data scientist with ML knowledge in the natural language processing space, looking to build ML models using language specific data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
-# # Set up AutoML to train a natural language processing model (preview)
+# Set up AutoML to train a natural language processing model (preview)
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)] > [!div class="op_single_selector" title1="Select the version of the developer platform of Azure Machine Learning you are using:"]
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
> [!WARNING] > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-english datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
You can seamlessly integrate with the [Azure Machine Learning data labeling](how
* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](quickstart-create-resources.md). See [GPU optimized virtual machine sizes](../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
> [!WARNING] > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-english datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
text,labels
### Named entity recognition (NER)
-Unlike multi-class or multi-label, which takes `.csv` format datasets, named entity recognition requires [CoNLL](https://www.clips.uantwerpen.be/conll2003/ner/) format. The file must contain exactly two columns and in each row, the token and the label is separated by a single space.
+Unlike multi-class or multi-label, which takes `.csv` format datasets, named entity recognition requires CoNLL format. The file must contain exactly two columns and in each row, the token and the label is separated by a single space.
For example,
machine-learning How To Change Storage Access Key https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-change-storage-access-key.md
For security purposes, you may need to change the access keys for an Azure Stora
## Prerequisites
-* An Azure Machine Learning workspace. For more information, see the [Create a workspace](how-to-manage-workspace.md) article.
+* An Azure Machine Learning workspace. For more information, see the [Create workspace resources](quickstart-create-resources.md) article.
* The [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
If you prefer to submit training jobs with the Azure Machine learning CLI v2 ext
## Prerequisites For this article you need:
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
* The Azure Machine Learning Python SDK v2 (preview) installed. To install the SDK you can either,
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
For a low-code or no-code experience, see [Create your automated machine learnin
For this article you need,
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
* Familiarity with setting up an automated machine learning experiment with the Azure Machine Learning SDK. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the fundamental automated machine learning experiment design patterns.
machine-learning How To Configure Databricks Automl Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-databricks-automl-environment.md
For information on other machine learning development environments, see [Set up
## Prerequisite
-Azure Machine Learning workspace. If you don't have one, you can create an Azure Machine Learning workspace through the [Azure portal](how-to-manage-workspace.md), [Azure CLI](how-to-manage-workspace-cli.md#create-a-workspace), and [Azure Resource Manager templates](how-to-create-workspace-template.md).
+Azure Machine Learning workspace. To create one, use the steps in the [Create workspace resources](quickstart-create-resources.md) article.
## Azure Databricks with Azure Machine Learning and AutoML
machine-learning How To Connect Data Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-connect-data-ui.md
For a code first experience, see the following articles to use the [Azure Machin
- Access to [Azure Machine Learning studio](https://ml.azure.com/). -- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
- When you create a workspace, an Azure blob container and an Azure file share are automatically registered as datastores to the workspace. They're named `workspaceblobstore` and `workspacefilestore`, respectively. If blob storage is sufficient for your needs, the `workspaceblobstore` is set as the default datastore, and already configured for use. Otherwise, you need a storage account on Azure with a [supported storage type](how-to-access-data.md#supported-data-storage-service-types).
machine-learning How To Create Attach Compute Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-studio.md
In this article, learn how to create and manage compute targets in Azure Machine
## Prerequisites * If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md)
+* An [Azure Machine Learning workspace](quickstart-create-resources.md)
## What's a compute target?
machine-learning How To Create Component Pipeline Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipeline-python.md
Create a `MLClient` object to manage Azure Machine Learning services.
[!notebook-python[] (~/azureml-examples-main/sdk/jobs/pipelines/2e_image_classification_keras_minist_convnet/image_classification_keras_minist_convnet.ipynb?name=workspace)] > [!IMPORTANT]
-> This code snippet expects the workspace configuration json file to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
+> This code snippet expects the workspace configuration json file to be saved in the current directory or its parent. For more information on creating a workspace, see [Create workspace resources](quickstart-create-resources.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
#### Submit pipeline job to workspace
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
In this article, you learn how to create and run [machine learning pipelines](co
- If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/). -- You'll need an [Azure Machine Learning workspace](how-to-manage-workspace.md) for your pipelines and associated resources
+- An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
-- [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md)
+- [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md).
- Clone the examples repository:
machine-learning How To Create Component Pipelines Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-ui.md
In this article, you'll learn how to create and run [machine learning pipelines]
* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* You'll need an [Azure Machine Learning workspace](how-to-manage-workspace.md) for your pipelines and associated resources
+* An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
-* [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md)
+* [Install and set up the Azure CLI extension for Machine Learning](how-to-configure-cli.md).
* Clone the examples repository:
machine-learning How To Create Machine Learning Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-machine-learning-pipelines.md
If you don't have an Azure subscription, create a free account before you begin.
## Prerequisites
-* Create an [Azure Machine Learning workspace](how-to-manage-workspace.md) to hold all your pipeline resources.
+* An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](concept-compute-instance.md) with the SDK already installed.
machine-learning How To Create Register Data Assets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-register-data-assets.md
To create and work with Data assets, you need:
* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
* The [Azure Machine Learning CLI/SDK installed](how-to-configure-cli.md) and MLTable package installed (`pip install mltable`).
machine-learning How To Data Prep Synapse Spark Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-data-prep-synapse-spark-pool.md
The Azure Synapse Analytics integration with Azure Machine Learning (preview) al
* The [Azure Machine Learning Python SDK installed](/python/api/overview/azure/ml/install).
-* [Create an Azure Machine Learning workspace](how-to-manage-workspace.md?tabs=python).
+* [Create an Azure Machine Learning workspace](quickstart-create-resources.md).
* [Create an Azure Synapse Analytics workspace in Azure portal](../synapse-analytics/quickstart-create-workspace.md).
machine-learning How To Deploy And Where https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-and-where.md
For more information on the concepts involved in the machine learning deployment
[!INCLUDE [cli10-only](../../includes/machine-learning-cli-version-1-only.md)] -- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).
- A model. The examples in this article use a pre-trained model. - A machine that can run Docker, such as a [compute instance](how-to-create-manage-compute-instance.md). # [Python](#tab/python) -- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).
- A model. The examples in this article use a pre-trained model. - The [Azure Machine Learning software development kit (SDK) for Python](/python/api/overview/azure/ml/intro). - A machine that can run Docker, such as a [compute instance](how-to-create-manage-compute-instance.md).
machine-learning How To Deploy Batch With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-batch-with-rest.md
In this article, you learn how to use the new REST APIs to:
## Prerequisites - An **Azure subscription** for which you have administrative rights. If you don't have such a subscription, try the [free or paid personal subscription](https://azure.microsoft.com/free/).-- An [Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An [Azure Machine Learning workspace](quickstart-create-resources.md).
- A service principal in your workspace. Administrative REST requests use [service principal authentication](how-to-setup-authentication.md#use-service-principal-authentication). - A service principal authentication token. Follow the steps in [Retrieve a service principal authentication token](./how-to-manage-rest.md#retrieve-a-service-principal-authentication-token) to retrieve this token. - The **curl** utility. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`.
machine-learning How To Deploy Inferencing Gpus https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-inferencing-gpus.md
Inference, or model scoring, is the phase where the deployed model is used to ma
## Prerequisites
-* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](quickstart-create-resources.md).
* A Python development environment with the Azure Machine Learning SDK installed. For more information, see [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
Inference, or model scoring, is the phase where the deployed model is used to ma
To connect to an existing workspace, use the following code: > [!IMPORTANT]
-> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
+> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create workspace resources](quickstart-create-resources.md). For more information on saving the configuration to file, see [Create a workspace configuration file](how-to-configure-environment.md#workspace).
```python from azureml.core import Workspace
machine-learning How To Deploy Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-local.md
Scenarios for local deployment include:
## Prerequisites -- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).
- A model and an environment. If you don't have a trained model, you can use the model and dependency files provided in [this tutorial](tutorial-train-deploy-notebook.md). - The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro). - A conda manager, like Anaconda or Miniconda, if you want to mirror Azure Machine Learning package dependencies.
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
When deploying a model for use with Azure Cognitive Search, the deployment must
## Prerequisites
-* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).
* A Python development environment with the Azure Machine Learning SDK installed. For more information, see [Azure Machine Learning SDK](/python/api/overview/azure/ml/install).
machine-learning How To Deploy Model Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-designer.md
Models trained in the designer can also be deployed through the SDK or command-l
## Prerequisites
-* [An Azure Machine Learning workspace](how-to-manage-workspace.md)
+* [An Azure Machine Learning workspace](quickstart-create-resources.md)
* A completed training pipeline containing one of following components: - [Train Model Component](./algorithm-module-reference/train-model.md)
machine-learning How To Deploy Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-pipelines.md
Machine learning pipelines are reusable workflows for machine learning tasks. On
## Prerequisites
-* Create an [Azure Machine Learning workspace](how-to-manage-workspace.md) to hold all your pipeline resources
+* Create an [Azure Machine Learning workspace](quickstart-create-resources.md) to hold all your pipeline resources
* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](concept-compute-instance.md) with the SDK already installed
machine-learning How To Deploy With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md
In this article, you learn how to use the new REST APIs to:
## Prerequisites - An **Azure subscription** for which you have administrative rights. If you don't have such a subscription, try the [free or paid personal subscription](https://azure.microsoft.com/free/).-- An [Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An [Azure Machine Learning workspace](quickstart-create-resources.md).
- A service principal in your workspace. Administrative REST requests use [service principal authentication](how-to-setup-authentication.md#use-service-principal-authentication). - A service principal authentication token. Follow the steps in [Retrieve a service principal authentication token](./how-to-manage-rest.md#retrieve-a-service-principal-authentication-token) to retrieve this token. - The **curl** utility. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`.
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
The following diagram illustrates that you can enable code generation for any Au
## Prerequisites
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
* This article assumes some familiarity with setting up an automated machine learning experiment. Follow the [tutorial](tutorial-auto-train-models.md) or [how-to](how-to-configure-auto-train.md) to see the main automated machine learning experiment design patterns.
machine-learning How To Link Synapse Ml Workspaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-link-synapse-ml-workspaces.md
You can also link workspaces and attach a Synapse Spark pool with a single [Azur
## Prerequisites
-* [Create an Azure Machine Learning workspace](how-to-manage-workspace.md?tabs=python).
+* [Create an Azure Machine Learning workspace](quickstart-create-resources.md).
* [Create a Synapse workspace in Azure portal](../synapse-analytics/quickstart-create-workspace.md).
machine-learning How To Log Pipelines Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-pipelines-application-insights.md
Having your logs in once place will provide a history of exceptions and error me
## Prerequisites
-* Follow the steps to create an [Azure Machine Learning](./how-to-manage-workspace.md) workspace and [create your first pipeline](./how-to-create-machine-learning-pipelines.md)
+* Follow the steps to create an [Azure Machine Learning workspace](quickstart-create-resources.md) and [create your first pipeline](./how-to-create-machine-learning-pipelines.md)
* [Configure your development environment](./how-to-configure-environment.md) to install the Azure Machine Learning SDK. * Install the [OpenCensus Azure Monitor Exporter](https://pypi.org/project/opencensus-ext-azure/) package locally: ```python
machine-learning How To Manage Environments In Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-environments-in-studio.md
For a high-level overview of how environments work in Azure Machine Learning, se
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md)
+* An [Azure Machine Learning workspace](quickstart-create-resources.md).
## Browse curated environments
machine-learning How To Manage Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-files.md
Learn how to create and manage the files in your Azure Machine Learning workspac
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* A Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* A Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
## <a name="create"></a> Create files
machine-learning How To Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md
In this article, you learn how to:
## Prerequisites - An **Azure subscription** for which you have administrative rights. If you don't have such a subscription, try the [free or paid personal subscription](https://azure.microsoft.com/free/)-- An [Azure Machine Learning Workspace](./how-to-manage-workspace.md)
+- An [Azure Machine Learning Workspace](quickstart-create-resources.md).
- Administrative REST requests use service principal authentication. Follow the steps in [Set up authentication for Azure Machine Learning resources and workflows](./how-to-setup-authentication.md#service-principal-authentication) to create a service principal in your workspace - The **curl** utility. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`.
machine-learning How To Monitor Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-datasets.md
You can view data drift metrics with the Python SDK or in Azure Machine Learning
To create and work with dataset monitors, you need: * An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An [Azure Machine Learning workspace](quickstart-create-resources.md).
* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package. * Structured (tabular) data with a timestamp specified in the file path, file name, or column in the data.
machine-learning How To Monitor Tensorboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-tensorboard.md
How you launch TensorBoard with Azure Machine Learning experiments depends on th
* **how-to-use-azureml > track-and-monitor-experiments > tensorboard > tensorboard > tensorboard.ipynb** * Your own Jupyter notebook server * [Install the Azure Machine Learning SDK](/python/api/overview/azure/ml/install) with the `tensorboard` extra
- * [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+ * [Create an Azure Machine Learning workspace](quickstart-create-resources.md).
* [Create a workspace configuration file](how-to-configure-environment.md#workspace). ## Option 1: Directly view run history in TensorBoard
machine-learning How To Move Data In Out Of Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-data-in-out-of-pipelines.md
You'll need:
- An Azure Machine Learning workspace.
- Either [create an Azure Machine Learning workspace](how-to-manage-workspace.md) or use an existing one via the Python SDK. Import the `Workspace` and `Datastore` class, and load your subscription information from the file `config.json` using the function `from_config()`. This function looks for the JSON file in the current directory by default, but you can also specify a path parameter to point to the file using `from_config(path="your/file/path")`.
+ Either [create an Azure Machine Learning workspace](quickstart-create-resources.md) or use an existing one via the Python SDK. Import the `Workspace` and `Datastore` class, and load your subscription information from the file `config.json` using the function `from_config()`. This function looks for the JSON file in the current directory by default, but you can also specify a path parameter to point to the file using `from_config(path="your/file/path")`.
```python import azureml.core
machine-learning How To Move Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-move-workspace.md
Moving the workspace enables you to migrate the workspace and its contents as a
## Prerequisites -- An Azure Machine Learning workspace in the source subscription. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace in the source subscription. For more information, see [Create workspace resources](quickstart-create-resources.md).
- You must have permissions to manage resources in both source and target subscriptions. For example, Contributor or Owner role at the __subscription__ level. For more information on roles, see [Azure roles](../role-based-access-control/rbac-and-directory-admin-roles.md#azure-roles) - The destination subscription must be registered for required resource providers. The following table contains a list of the resource providers required by Azure Machine Learning:
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
For information on how to create and manage files, including notebooks, see [Cre
## Prerequisites * An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
-* A Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* A Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
* Your user identity must have access to your workspace's default storage account. Whether you can read, edit, or create notebooks depends on your [access level](how-to-assign-roles.md) to your workspace. For example, a Contributor can edit the notebook, while a Reader could only view it. ## Access notebooks from your workspace
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
In this article, learn how to enable MLflow's tracking URI and logging API, coll
* Install the `azureml-mlflow` package. * This package automatically brings in `azureml-core` of the [The Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
-* [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md).
+* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md).
* See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations). ## Train MLflow Projects on local compute
machine-learning How To Train With Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-datasets.md
To create and train with datasets, you need:
* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An [Azure Machine Learning workspace](quickstart-create-resources.md).
* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install) (>= 1.13.0), which includes the `azureml-datasets` package.
machine-learning How To Train With Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-rest.md
In this article, you learn how to use the new REST APIs to:
## Prerequisites - An **Azure subscription** for which you have administrative rights. If you don't have such a subscription, try the [free or paid personal subscription](https://azure.microsoft.com/free/).-- An [Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An [Azure Machine Learning workspace](quickstart-create-resources.md).
- A service principal in your workspace. Administrative REST requests use [service principal authentication](how-to-setup-authentication.md#use-service-principal-authentication). - A service principal authentication token. Follow the steps in [Retrieve a service principal authentication token](./how-to-manage-rest.md#retrieve-a-service-principal-authentication-token) to retrieve this token. - The **curl** utility. The **curl** program is available in the [Windows Subsystem for Linux](/windows/wsl/install-win10) or any UNIX distribution. In PowerShell, **curl** is an alias for **Invoke-WebRequest** and `curl -d "key=val" -X POST uri` becomes `Invoke-WebRequest -Body "key=val" -Method POST -Uri uri`.
machine-learning How To Train With Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-with-ui.md
There are many ways to create a training job with Azure Machine Learning. You ca
* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://aka.ms/AMLFree) today.
-* An Azure Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
* Understanding of what a job is in Azure Machine Learning. See [how to train models with the CLI (v2)](how-to-train-cli.md).
machine-learning How To Use Automated Ml For Ml Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automated-ml-for-ml-models.md
For a Python code-based experience, [configure your automated machine learning e
* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
## Get started
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
When tiling, each image is divided into a grid of tiles. Adjacent tiles overlap
## Prerequisites
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](quickstart-create-resources.md).
* This article assumes some familiarity with how to configure an [automated machine learning experiment for computer vision tasks](how-to-auto-train-image-models.md).
machine-learning How To Use Automlstep In Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automlstep-in-pipelines.md
Azure Machine Learning's automated ML capability helps you discover high-perform
* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
* Familiarity with Azure's [automated machine learning](concept-automated-ml.md) and [machine learning pipelines](concept-ml-pipelines.md) facilities and SDK.
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-environments.md
For a high-level overview of how environments work in Azure Machine Learning, se
## Prerequisites * The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install) (>= 1.13.0)
-* An [Azure Machine Learning workspace](how-to-manage-workspace.md)
+* An [Azure Machine Learning workspace](quickstart-create-resources.md)
## Create an environment
machine-learning How To Use Labeled Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-labeled-dataset.md
Azure Machine Learning datasets with labels are referred to as labeled datasets.
* An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. * The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/intro), or access to [Azure Machine Learning studio](https://ml.azure.com/).
-* A Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* A Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
* Access to an Azure Machine Learning data labeling project. If you don't have a labeling project, first create one for [image labeling](how-to-create-image-labeling-projects.md) or [text labeling](how-to-create-text-labeling-projects.md). ## Export data labels
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-identities.md
In this article, you'll learn how to use managed identities to:
## Prerequisites -- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).
- The [Azure CLI extension for Machine Learning service](v1/reference-azure-machine-learning-cli.md) - The [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). - To assign roles, the login for your Azure subscription must have the [Managed Identity Operator](../role-based-access-control/built-in-roles.md#managed-identity-operator) role, or other role that grants the required actions (such as __Owner__).
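If the role still needs to be granted, a minimal Azure CLI sketch follows; the user principal name, subscription ID, resource group, and scope are placeholders and your environment may require a different scope:

```powershell
# Hedged sketch: grant the Managed Identity Operator role at resource group scope (placeholder names).
az role assignment create `
    --assignee "user@contoso.com" `
    --role "Managed Identity Operator" `
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group-name>"
```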
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
In this article, you learn how to:
> * Delete managed online endpoints and deployments ## Prerequisites-- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+- An Azure Machine Learning workspace. For more information, see [Create workspace resources](quickstart-create-resources.md).
- The examples repository - Clone the [AzureML Example repository](https://github.com/Azure/azureml-examples). This article uses the assets in `/cli/endpoints/online`. ## Create a managed online endpoint
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
Previously updated : 10/21/2021 Last updated : 07/01/2022
If you have an MLflow Project to train with Azure Machine Learning, see [Train M
* Install the `azureml-mlflow` package, which handles the connectivity with Azure Machine Learning, including authentication. * An [Azure Databricks workspace and cluster](/azure/databricks/scenarios/quickstart-create-databricks-workspace-portal).
-* [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md).
+* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md).
* See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations). ## Install libraries
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
See [MLflow and Azure Machine Learning](concept-mlflow.md) for all supported MLf
## Prerequisites * Install the `azureml-mlflow` package.
-* [Create an Azure Machine Learning Workspace](how-to-manage-workspace.md).
+* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md).
* See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations). * Install and [set up CLI (v2)](how-to-configure-cli.md#prerequisites) and make sure you install the ml extension.
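For reference, installing the `ml` extension is typically a single Azure CLI call; a minimal sketch, assuming the Azure CLI itself is already installed:

```powershell
# Hedged sketch: add the Azure Machine Learning (v2) CLI extension and confirm it's listed.
az extension add --name ml
az extension list --output table   # the 'ml' extension should appear in the output
```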
machine-learning How To Use Pipeline Parameter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-parameter.md
In this article, you learn how to do the following:
## Prerequisites
-* An Azure Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
* For a guided introduction to the designer, complete the [designer tutorial](tutorial-designer-automobile-price-train-score.md).
machine-learning How To Use Private Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-private-python-packages.md
The private packages are used through [Environment](/python/api/azureml-core/azu
## Prerequisites * The [Azure Machine Learning SDK for Python](/python/api/overview/azure/ml/install)
- * An [Azure Machine Learning workspace](how-to-manage-workspace.md)
+ * An [Azure Machine Learning workspace](quickstart-create-resources.md)
## Use small number of packages for development and testing
machine-learning How To Use Synapsesparkstep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-synapsesparkstep.md
In this article, you'll learn how to use Apache Spark pools powered by Azure Syn
## Prerequisites
-* Create an [Azure Machine Learning workspace](how-to-manage-workspace.md) to hold all your pipeline resources.
+* Create an [Azure Machine Learning workspace](quickstart-create-resources.md) to hold all your pipeline resources.
* [Configure your development environment](how-to-configure-environment.md) to install the Azure Machine Learning SDK, or use an [Azure Machine Learning compute instance](concept-compute-instance.md) with the SDK already installed.
machine-learning How To Version Track Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-version-track-datasets.md
For this tutorial, you need:
- [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install). This SDK includes the [azureml-datasets](/python/api/azureml-core/azureml.core.dataset) package. -- An [Azure Machine Learning workspace](concept-workspace.md). Retrieve an existing one by running the following code, or [create a new workspace](how-to-manage-workspace.md).
+- An [Azure Machine Learning workspace](concept-workspace.md). Retrieve an existing one by running the following code, or [create a new workspace](quickstart-create-resources.md).
```Python import azureml.core
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-overview.md
To migrate to Azure Machine Learning, we recommend the following approach:
3. Verify that your critical Studio (classic) modules are supported in Azure Machine Learning designer. For more information, see the [Studio (classic) and designer component-mapping](#studio-classic-and-designer-component-mapping) table below.
-4. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md?tabs=azure-portal).
+4. [Create an Azure Machine Learning workspace](quickstart-create-resources.md).
## Step 2: Define a strategy and plan
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-experiment.md
For more information on building pipelines with the SDK, see [What are Azure Mac
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
- A Studio (classic) experiment to migrate. - [Upload your dataset](migrate-register-dataset.md) to Azure Machine Learning.
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md
This article is part of the ML Studio (classic) to Azure Machine Learning migrat
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
- An [Azure Machine Learning real-time endpoint or pipeline endpoint](migrate-rebuild-web-service.md). ## Consume a real-time endpoint
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-web-service.md
This article is part of the Studio (classic) to Azure Machine Learning migration
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
- An Azure Machine Learning training pipeline. For more information, see [Rebuild a Studio (classic) experiment in Azure Machine Learning](migrate-rebuild-experiment.md). ## Real-time endpoint vs pipeline endpoint
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-register-dataset.md
You have three options to migrate a dataset to Azure Machine Learning. Read each
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- An Azure Machine Learning workspace. [Create workspace resources](quickstart-create-resources.md).
- A Studio (classic) dataset to migrate.
machine-learning Overview What Happened To Workbench https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/overview-what-happened-to-workbench.md
Previously updated : 03/05/2020 Last updated : 07/01/2022 # What happened to Azure Machine Learning Workbench?
The latest release of Azure Machine Learning includes the following features:
+ A new, more comprehensive Python <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK</a>. + The new expanded [Azure CLI extension](v1/reference-azure-machine-learning-cli.md) for machine learning.
-The [architecture](v1/concept-azure-machine-learning-architecture.md) was redesigned for ease of use. Instead of multiple Azure resources and accounts, you only need an [Azure Machine Learning Workspace](concept-workspace.md). You can create workspaces quickly in the [Azure portal](how-to-manage-workspace.md). By using a workspace, multiple users can store training and deployment compute targets, model experiments, Docker images, deployed models, and so on.
+The [architecture](v1/concept-azure-machine-learning-architecture.md) was redesigned for ease of use. Instead of multiple Azure resources and accounts, you only need an [Azure Machine Learning Workspace](concept-workspace.md). You can create workspaces quickly in the [Azure portal](quickstart-create-resources.md). By using a workspace, multiple users can store training and deployment compute targets, model experiments, Docker images, deployed models, and so on.
Although there are new improved CLI and SDK clients in the current release, the desktop workbench application itself has been retired. Experiments can be managed in the [workspace dashboard in Azure Machine Learning studio](how-to-log-view-metrics.md#view-the-experiment-in-the-web-portal). Use the dashboard to get your experiment history, manage the compute targets attached to your workspace, manage your models and Docker images, and even deploy web services.
Although there are new improved CLI and SDK clients in the current release, the
On January 9th, 2019 support for Machine Learning Workbench, Azure Machine Learning Experimentation and Model Management accounts, and their associated SDK and CLI ended.
-All the latest capabilities are available by using this <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK</a>, the [CLI](v1/reference-azure-machine-learning-cli.md), and the [portal](how-to-manage-workspace.md).
+All the latest capabilities are available by using this <a href="/python/api/overview/azure/ml/intro" target="_blank">SDK</a>, the [CLI](v1/reference-azure-machine-learning-cli.md), and the [Azure portal](quickstart-create-resources.md).
## What about run histories?
Older run histories are no longer accessible, how you can still see your runs in
Run histories are now called **experiments**. You can collect your model's experiments and explore them by using the SDK, the CLI, or the Azure Machine Learning studio.
-The portal's workspace dashboard is supported on Microsoft Edge, Chrome, and Firefox browsers only:
+The Azure Machine Learning studio is supported on Microsoft Edge, Chrome, and Firefox browsers only:
-[![Online portal](./media/overview-what-happened-to-workbench/image001.png)](./media/overview-what-happened-to-workbench/image001.png#lightbox)
+[![Screenshot of Azure Machine Learning studio](./media/overview-what-happened-to-workbench/jobs-experiments.png)](./media/overview-what-happened-to-workbench/jobs-experiments.png#lightbox)
Start training your models and tracking the run histories using the new CLI and SDK. You can learn how with the [Tutorial: train models with Azure Machine Learning](tutorial-train-deploy-notebook.md).
machine-learning Tutorial Automated Ml Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-automated-ml-forecast.md
Also try automated machine learning for these other model types:
## Prerequisites
-* An Azure Machine Learning workspace. See [Create an Azure Machine Learning workspace](how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. See [Create workspace resources](quickstart-create-resources.md).
* Download the [bike-no.csv](https://github.com/Azure/azureml-examples/blob/main/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/bike-no.csv) data file
machine-learning Tutorial Designer Automobile Price Train Score https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-designer-automobile-price-train-score.md
To create an Azure Machine Learning pipeline, you need an Azure Machine Learning
### Create a new workspace
-You need an Azure Machine Learning workspace to use the designer. The workspace is the top-level resource for Azure Machine Learning, it provides a centralized place to work with all the artifacts you create in Azure Machine Learning. For instruction on creating a workspace, see [Create and manage Azure Machine Learning workspaces](how-to-manage-workspace.md).
+You need an Azure Machine Learning workspace to use the designer. The workspace is the top-level resource for Azure Machine Learning; it provides a centralized place to work with all the artifacts you create in Azure Machine Learning. For instructions on creating a workspace, see [Create workspace resources](quickstart-create-resources.md).
> [!NOTE] > If your workspace uses a virtual network, there are additional configuration steps you must complete to use the designer. For more information, see [Use Azure Machine Learning studio in an Azure virtual network](how-to-enable-studio-virtual-network.md)
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-pipeline-python-sdk.md
If you're not going to use the endpoint, delete it to stop using the resource.
## Next steps > [!div class="nextstepaction"]
-> Learn more about [Azure ML logging](https://github.com/Azure/azureml-examples/blob/sdk-preview/notebooks/mlflow/mlflow-v1-comparison.ipynb).
+> Learn more about [Azure ML logging](/azure/machine-learning/how-to-use-mlflow-cli-runs).
machine-learning Tutorial Power Bi Custom Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-power-bi-custom-model.md
In this tutorial, you:
## Prerequisites - An Azure subscription. If you don't already have a subscription, you can use a [free trial](https://azure.microsoft.com/free/). -- An Azure Machine Learning workspace. If you don't already have a workspace, see [Create and manage Azure Machine Learning workspaces](./how-to-manage-workspace.md#create-a-workspace).
+- An Azure Machine Learning workspace. If you don't already have a workspace, see [Create workspace resources](quickstart-create-resources.md).
- Introductory knowledge of the Python language and machine learning workflows. ## Create a notebook and compute
machine-learning Concept Azure Machine Learning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/concept-azure-machine-learning-architecture.md
The studio is also where you access the interactive tools that are part of Azure
To get started with Azure Machine Learning, see: * [What is Azure Machine Learning?](../overview-what-is-azure-machine-learning.md)
-* [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md)
+* [Create an Azure Machine Learning workspace](../quickstart-create-resources.md)
* [Tutorial: Train and deploy a model](../tutorial-train-deploy-notebook.md)
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
For a low code experience, see how to use the [Azure Machine Learning studio to
- An Azure Machine Learning workspace.
- Either [create an Azure Machine Learning workspace](../how-to-manage-workspace.md) or use an existing one via the Python SDK.
+ Either [create an Azure Machine Learning workspace](../quickstart-create-resources.md) or use an existing one via the Python SDK.
Import the `Workspace` and `Datastore` class, and load your subscription information from the file `config.json` using the function `from_config()`. This looks for the JSON file in the current directory by default, but you can also specify a path parameter to point to the file using `from_config(path="your/file/path")`.
machine-learning How To Attach Compute Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-attach-compute-targets.md
To use compute targets managed by Azure Machine Learning, see:
## Prerequisites
-* An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. For more information, see [Create workspace resources](../quickstart-create-resources.md).
* The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md), [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro), or the [Azure Machine Learning Visual Studio Code extension](../how-to-setup-vs-code.md).
machine-learning How To Auto Train Image Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-image-models-v1.md
Automated ML supports model training for computer vision tasks like image classi
## Prerequisites
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](../quickstart-create-resources.md).
* The Azure Machine Learning Python SDK installed. To install the SDK you can either,
machine-learning How To Auto Train Nlp Models V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-auto-train-nlp-models-v1.md
You can seamlessly integrate with the [Azure Machine Learning data labeling](../
* Azure subscription. If you don't have an Azure subscription, sign up to try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md). See [GPU optimized virtual machine sizes](../../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
+* An Azure Machine Learning workspace with a GPU training compute. To create the workspace, see [Create workspace resources](../quickstart-create-resources.md). See [GPU optimized virtual machine sizes](../../virtual-machines/sizes-gpu.md) for more details of GPU instances provided by Azure.
> [!Warning] > Support for multilingual models and the use of models with longer max sequence length is necessary for several NLP use cases, such as non-English datasets and longer range documents. As a result, these scenarios may require higher GPU memory for model training to succeed, such as the NC_v3 series or the ND series.
text,labels
### Named entity recognition (NER)
-Unlike multi-class or multi-label, which takes `.csv` format datasets, named entity recognition requires [CoNLL](https://www.clips.uantwerpen.be/conll2003/ner/) format. The file must contain exactly two columns and in each row, the token and the label is separated by a single space.
+Unlike multi-class or multi-label, which take `.csv` format datasets, named entity recognition requires CoNLL format. The file must contain exactly two columns, and in each row the token and the label are separated by a single space.
For example,
machine-learning How To Configure Auto Train V1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-configure-auto-train-v1.md
If you prefer a no-code experience, you can also [Set up no-code AutoML training
## Prerequisites For this article you need,
-* An Azure Machine Learning workspace. To create the workspace, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+* An Azure Machine Learning workspace. To create the workspace, see [Create workspace resources](../quickstart-create-resources.md).
* The Azure Machine Learning Python SDK installed. To install the SDK you can either,
machine-learning How To Create Register Datasets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-register-datasets.md
To create and work with datasets, you need:
* An Azure subscription. If you don't have one, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
-* An [Azure Machine Learning workspace](../how-to-manage-workspace.md).
+* An [Azure Machine Learning workspace](../quickstart-create-resources.md).
* The [Azure Machine Learning SDK for Python installed](/python/api/overview/azure/ml/install), which includes the azureml-datasets package.
machine-learning How To Deploy Profile Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-profile-model.md
description: Use CLI (v1) or SDK (v1) to profile your model before deployment. P
Previously updated : 07/31/2020 Last updated : 07/01/2022 zone_pivot_groups: aml-control-methods
machine-learning How To Track Monitor Analyze Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-track-monitor-analyze-runs.md
You'll need the following items:
* An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
-* An [Azure Machine Learning workspace](../how-to-manage-workspace.md).
+* An [Azure Machine Learning workspace](../quickstart-create-resources.md).
* The Azure Machine Learning SDK for Python (version 1.0.21 or later). To install or update to the latest version of the SDK, see [Install or update the SDK](/python/api/overview/azure/ml/install).
machine-learning How To Use Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-environments.md
For a high-level overview of how environments work in Azure Machine Learning, se
## Prerequisites
-* An [Azure Machine Learning workspace](../how-to-manage-workspace.md)
+* An [Azure Machine Learning workspace](../quickstart-create-resources.md)
[!INCLUDE [cli-version-info](../../../includes/machine-learning-cli-version-1-only.md)]
machine-learning How To Use Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-managed-identities.md
In this article, you'll learn how to use managed identities to:
## Prerequisites -- An Azure Machine Learning workspace. For more information, see [Create an Azure Machine Learning workspace](../how-to-manage-workspace.md).
+- An Azure Machine Learning workspace. For more information, see [Create workspace resources](../quickstart-create-resources.md).
- The [Azure CLI extension for Machine Learning service](reference-azure-machine-learning-cli.md) - The [Azure Machine Learning Python SDK](/python/api/overview/azure/ml/intro). - To assign roles, the login for your Azure subscription must have the [Managed Identity Operator](../../role-based-access-control/built-in-roles.md#managed-identity-operator) role, or other role that grants the required actions (such as __Owner__).
machine-learning How To Use Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-use-mlflow.md
The following diagram illustrates that with MLflow Tracking, you track an experi
* Install the `azureml-mlflow` package. * This package automatically brings in `azureml-core` of the [The Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
-* [Create an Azure Machine Learning Workspace](../how-to-manage-workspace.md).
+* [Create an Azure Machine Learning Workspace](../quickstart-create-resources.md).
* See which [access permissions you need to perform your MLflow operations with your workspace](../how-to-assign-roles.md#mlflow-operations). ## Track local runs
machine-learning Tutorial Pipeline Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/tutorial-pipeline-python-sdk.md
workspace = Workspace.from_config()
``` > [!IMPORTANT]
-> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create and manage Azure Machine Learning workspaces](../how-to-manage-workspace.md). For more information on saving the configuration to file, see [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
+> This code snippet expects the workspace configuration to be saved in the current directory or its parent. For more information on creating a workspace, see [Create workspace resources](../quickstart-create-resources.md). For more information on saving the configuration to file, see [Create a workspace configuration file](../how-to-configure-environment.md#workspace).
## Create the infrastructure for your pipeline
marketplace Analytics Programmatic Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/analytics-programmatic-faq.md
This table describes the API responses and what to do if you receive them.
| | - | - | | Unauthorized | 401 | This is an authentication exception. Check the correctness of the Azure Active Directory (Azure AD) token. The Azure AD token is valid for 60 minutes, after which time you would need to regenerate the Azure AD token. | | Invalid table name | 400 | The name of the dataset is wrong. Recheck the dataset name by calling the "Get All Datasets" API. |
-| Incorrect column name | 400| The name of the column in the query is incorrect. Recheck the column name by calling the "Get All Datasets" API or refer to the column names in the following tables:<br><ul><li>[Orders details table](orders-dashboard.md#orders-details-table)</li><li>[Usage details table](usage-dashboard.md#usage-details-table)</li><li>[Customer details table](customer-dashboard.md#customer-details-table)</li><li>[Marketplace insights details table](insights-dashboard.md#marketplace-insights-details-table)</li></UL> |
+| Incorrect column name | 400| The name of the column in the query is incorrect. Recheck the column name by calling the "Get All Datasets" API or refer to the column names in the following tables:<br><ul><li>[Orders details table](orders-dashboard.md#orders-details-table)</li><li>[Usage details table](usage-dashboard.md#usage-details-table)</li><li>[Customer details table](customer-dashboard.md#customer-details-table)</li><li>[Marketplace insights details table](insights-dashboard.md#marketplace-insights-details-table)</li><li>[Revenue dashboard](revenue-dashboard.md)</li><li>[Quality of Service dashboard](quality-of-service-dashboard.md)</li><li>[Customer retention dashboard](customer-retention-dashboard.md#dictionary-of-data-terms)</li></UL> |
| Null or missing value | 400 | You may be missing mandatory parameters as part of the request payload of the API. | | Invalid report parameters | 400 | Make sure the report parameters are correct. For example, you may be giving a value of less than 4 for the `RecurrenceInterval` parameter. | | Recurrence Interval must be between 4 and 90 | 400 | Make sure the value of the `RecurrenceInterval` request parameter is between 4 and 90. |
This table describes the API responses and what to do if you receive them.
## No records
-I receive API response 200 when I download the report from the secure location. Why am I getting no records?
+**I receive API response 200 when I download the report from the secure location. Why am I getting no records?**
Check whether the string in the query has one of the allowable values for the column header. For example, this query will return zero results:
In this example, the allowable values for SKUBillingType are Paid or Free. Refer
- [Usage details table](usage-dashboard.md#usage-details-table) - [Customer details table](customer-dashboard.md#customer-details-table) - [Marketplace insights details table](insights-dashboard.md#marketplace-insights-details-table)
+- [Revenue dashboard](revenue-dashboard.md)
+- [Quality of Service dashboard](quality-of-service-dashboard.md)
+- [Customer retention dashboard](customer-retention-dashboard.md#dictionary-of-data-terms)
## Next steps
marketplace Marketplace Metering Service Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-metering-service-apis.md
For SaaS offers, the `resourceId` is the SaaS subscription ID. For more details
} ```
-For Azure Application Managed Apps plans, the `resourceUri` is the Managed App `resource group Id`. An example script for fetching it can be found in [using the Azure-managed identities token](marketplace-metering-service-authentication.md#using-the-azure-managed-identities-token).
+For Azure Application Managed Apps plans, the `resourceUri` is the Managed Application `resourceId`. An example script for fetching it can be found in [using the Azure-managed identities token](marketplace-metering-service-authentication.md#using-the-azure-managed-identities-token).
*Request body example for Azure Application managed apps:*
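The article's own request body sample isn't reproduced in this digest. As a hedged sketch, posting a usage event for a managed application plan might look like the following; the endpoint version, dimension, plan ID, and resource path values are illustrative placeholders:

```powershell
# Hedged sketch: emit a metered billing usage event for an Azure Application managed app (placeholder values).
$token = "<access token obtained via the managed identity>"   # see the linked authentication article
$body  = @{
    resourceUri        = "/subscriptions/<sub-id>/resourceGroups/<rg-name>/providers/Microsoft.Solutions/applications/<app-name>"
    quantity           = 5.0                  # units consumed for the custom dimension
    dimension          = "dim1"               # custom meter dimension defined in the offer
    effectiveStartTime = "2022-07-01T00:00:00Z"
    planId             = "plan1"
} | ConvertTo-Json

Invoke-RestMethod -Method Post `
    -Uri "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31" `
    -Headers @{ Authorization = "Bearer $token" } `
    -ContentType "application/json" `
    -Body $body
```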
migrate Migrate Support Matrix Vmware Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/migrate-support-matrix-vmware-migration.md
The table summarizes agentless migration requirements for VMware VMs.
**NFS** | NFS volumes mounted as volumes on the VMs won't be replicated. **iSCSI targets** | VMs with iSCSI targets aren't supported for agentless migration. **Multipath IO** | Not supported.
-**Storage vMotion** | Not supported. Replication won't work if a VM uses storage vMotion.
+**Storage vMotion** | Supported.
**Teamed NICs** | Not supported. **IPv6** | Not supported. **Target disk** | VMs can only be migrated to managed disks (standard HDD, standard SSD, premium SSD) in Azure.
This table summarizes assessment support and limitations for VMware virtualizati
**VMware requirements** | **Details** | **VMware vCenter Server** | Version 5.5, 6.0, 6.5, or 6.7.
-**VMware vSphere ESXI host** | Version 5.5, 6.0, 6.5, or 6.7.
+**VMware vSphere ESXI host** | Version 5.5, 6.0, 6.5, 6.7, or 7.0.
**vCenter Server permissions** | A read-only account for vCenter Server. ### VM requirements (agent-based)
migrate Troubleshoot Changed Block Tracking Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-changed-block-tracking-replication.md
ms. Previously updated : 08/17/2020 Last updated : 06/30/2022 # Troubleshooting replication issues in agentless VMware VM migration
When you replicate a VMware virtual machine using the agentless replication meth
You may occasionally see replication cycles failing for a VM. These failures can happen due to reasons ranging from issues in on-premises network configuration to issues at the Azure Migrate Cloud Service backend. In this article, we will: - Show you how you can monitor replication status and resolve errors.
+ - List some of the commonly occurring replication errors and suggest steps to remediate them.
## Monitor replication status using the Azure portal
Use the following steps to monitor the replication status for your virtual machi
1. Go to the Servers page in Azure Migrate on the Azure portal. ![Image 1](./media/troubleshoot-changed-block-tracking-replication/image0.png)
- 1. Navigate to the "Replicating machines" page by clicking on "Replicating servers" in the Server Migration tile.
+ 1. Navigate to the "Replicating machines" page by selecting **Replicating servers** in the Server Migration tile.
![Image 2](./media/troubleshoot-changed-block-tracking-replication/image1.png)
- 1. You'll see a list of replicating servers along with additional information such as status, health, last sync time, etc. The health column indicates the current replication health of the VM. A 'Critical' or 'Warning' value in the health column typically indicates that the previous replication cycle for the VM failed. To get more details, right-click on the VM, and select "Error Details." The Error Details page contains information on the error and additional details on how to troubleshoot. You'll also see a "Recent Events" link that can be used to navigate to the events page for the VM.
+ 1. You'll see a list of replicating servers along with additional information such as status, health, last sync time, etc. The health column indicates the current replication health of the VM. A 'Critical' or 'Warning' value in the health column typically indicates that the previous replication cycle for the VM failed. To get more details, right-click on the VM, and select **Error Details**. The Error Details page contains information on the error and additional details on how to troubleshoot. You'll also see a "Recent Events" link that can be used to navigate to the events page for the VM.
![Image 3](./media/troubleshoot-changed-block-tracking-replication/image2.png)
- 1. Click "Recent Events" to see the previous replication cycle failures for the VM. In the events page, look for the most recent event of type "Replication cycle failed" or "Replication cycle failed for disk" for the VM.
+ 1. Select **Recent Events** to see the previous replication cycle failures for the VM. In the events page, look for the most recent event of type *Replication cycle failed* or *Replication cycle failed for disk* for the VM.
![Image 4](./media/troubleshoot-changed-block-tracking-replication/image3.png)
- 1. Click on the event to understand the possible causes of the error and recommended remediation steps. Use the information provided to troubleshoot and remediate the error.
+ 1. Select the event to understand the possible causes of the error and recommended remediation steps. Use the information provided to troubleshoot and remediate the error.
![Image 5](./media/troubleshoot-changed-block-tracking-replication/image4.png) ## Common Replication Errors
This section describes some of the common errors, and how you can troubleshoot t
## Key Vault operation failed error when trying to replicate VMs
-**Error:** "Key Vault operation failed. Operation : Configure managed storage account, Key Vault: Key-vault-name, Storage Account: storage account name failed with the error:"
+**Error:** "Key Vault operation failed. Operation: Configure managed storage account, Key Vault: Key-vault-name, Storage Account: storage account name failed with the error:"
-**Error:** "Key Vault operation failed. Operation : Generate shared access signature definition, Key Vault: Key-vault-name, Storage Account: storage account name failed with the error:"
+**Error:** "Key Vault operation failed. Operation: Generate shared access signature definition, Key Vault: Key-vault-name, Storage Account: storage account name failed with the error:"
![Key Vault](./media/troubleshoot-changed-block-tracking-replication/key-vault.png) This error typically occurs because the User Access Policy for the Key Vault doesn't give the currently logged in user the necessary permissions to configure storage accounts to be Key Vault managed. To check the user access policy on the key vault, go to the key vault page in the portal and select Access policies
-When the portal creates the key vault it also adds a user access policy granting the currently logged in user permissions to configure storage accounts to be Key Vault managed. This can fail for two reasons
+When the portal creates the key vault, it also adds a user access policy granting the currently logged in user permissions to configure storage accounts to be Key Vault managed. This can fail for two reasons
-- The logged in user is a remote principal on the customers Azure tenant (CSP subscription - and the logged in user is the partner admin). The workaround in this case is to delete the key vault, log out from the portal, and then log in with a user account from the customers tenant (not a remote principal) and retry the operation. The CSP partner will typically have a user account in the customers Azure Active Directory tenant that they can use. If not they can create a new user account for themselves in the customers Azure Active Directory tenant, log in to the portal as the new user and then retry the replicate operation. The account used must have either Owner or Contributor+User Access Administrator permissions granted to the account on the resource group (Migrate project resource group)
+- The logged in user is a remote principal on the customer's Azure tenant (CSP subscription, where the logged in user is the partner admin). The workaround in this case is to delete the key vault, sign out from the portal, and then sign in with a user account from the customer's tenant (not a remote principal) and retry the operation. The CSP partner will typically have a user account in the customer's Azure Active Directory tenant that they can use. If not, they can create a new user account for themselves in the customer's Azure Active Directory tenant, sign in to the portal as the new user, and then retry the replicate operation. The account used must have either Owner or Contributor+User Access Administrator permissions granted on the resource group (Migrate project resource group).
-- The other case where this may happen is when one user (user1) attempted to setup replication initially and encountered a failure, but the key vault has already been created (and user access policy appropriately assigned to this user). Now at a later point a different user (user2) tries to setup replication, but the Configure Managed Storage Account or Generate SAS definition operation fails as there is no user access policy corresponding to user2 in the key vault.
+- The other case where this may happen is when one user (user1) attempted to set up replication initially and encountered a failure, but the key vault has already been created (and user access policy appropriately assigned to this user). Now at a later point a different user (user2) tries to set up replication, but the Configure Managed Storage Account or Generate SAS definition operation fails as there's no user access policy corresponding to user2 in the key vault.
-**Resolution**: To workaround this issue create a user access policy for user2 in the keyvault granting user2 permission to configure managed storage account and generate SAS definitions. User2 can do this from Azure PowerShell using the below cmdlets:
+**Resolution**: To work around this issue, create a user access policy for user2 in the key vault granting user2 permission to configure managed storage accounts and generate SAS definitions. User2 can do this from Azure PowerShell using the following cmdlets:
$userPrincipalId = $(Get-AzureRmADUser -UserPrincipalName "user2_email_address").Id
-Set-AzureRmKeyVaultAccessPolicy -VaultName "keyvaultname" -ObjectId $userPrincipalId -PermissionsToStorage get, list, delete, set, update, regeneratekey, getsas, listsas, deletesas, setsas, recover, backup, restore, purge
+Set-AzureRmKeyVaultAccessPolicy -VaultName "keyvaultname" -ObjectId $userPrincipalId -PermissionsToStorage get, list, delete, set, update, regeneratekey, getsas, listsas, deletesas, setsas, recover, backup, restore, purge
## DisposeArtefactsTimedOut
Set-AzureRmKeyVaultAccessPolicy -VaultName "keyvaultname" -ObjectId $userPrincip
The component trying to replicate data to Azure is either down or not responding. The possible causes include: - The gateway service running in the Azure Migrate appliance is down.-- The gateway service is experiencing connectivity issues to Service Bus/Event hub/Appliance Storage account.
+- The gateway service is experiencing connectivity issues to Service Bus/Event hubs/Appliance Storage account.
**Identifying the exact cause for DisposeArtefactsTimedOut and the corresponding resolution:** 1. Ensure that the Azure Migrate appliance is up and running. 2. Check if the gateway service is running on the appliance:
- 1. Log in to the Azure Migrate appliance using remote desktop and do the following.
+ 1. Sign in to the Azure Migrate appliance using remote desktop and do the following.
- 2. Open the Microsoft services MMC snap-in (run > services.msc), and check if the "Microsoft Azure Gateway Service" is running. If the service is stopped or not running, start the service. Alternatively, you can open command prompt or PowerShell and do: "Net Start asrgwy"
+ 2. Open the Microsoft services MMC snap-in (run > services.msc), and check if the "Microsoft Azure Gateway Service" is running. If the service is stopped or not running, start the service. Alternatively, you can open command prompt or PowerShell and enter 'Net Start asrgwy'.
3. Check for connectivity issues between Azure Migrate appliance and Appliance Storage Account:
The component trying to replicate data to Azure is either down or not responding
2. Look for the appliance Storage Account in the Resource Group. The Storage Account has a name that resembles migrategwsa\*\*\*\*\*\*\*\*\*\*. This is the value of parameter [account] in the above command.
- 3. Search for your storage account in the Azure portal. Ensure that the subscription you use to search is the same subscription (target subscription) in which the storage account is created. Go to Containers in the Blob Service section. Click on +Container and create a Container. Leave Public Access Level to default selected value.
+ 3. Search for your storage account in the Azure portal. Ensure that the subscription you use to search is the same subscription (target subscription) in which the storage account is created. Go to Containers in the Blob Service section. Select **+Container** and create a container. Leave the public access level at its default value.
- 4. Go to Shared Access Signature under Settings. Select Container in "Allowed Resource Type." Click on Generate SAS and connection string. Copy the SAS value.
+ 4. Go to Shared Access Signature under Settings. Select Container in **Allowed Resource Type**. Select Generate SAS and connection string. Copy the SAS value.
5. Execute the above command in Command Prompt by replacing account, container, SAS with the values obtained in steps 2, 3, and 4 respectively.
- Alternatively, [download](https://go.microsoft.com/fwlink/?linkid=2138967) the Azure Storage Explore on to the appliance and try to upload 10 blobs of ~64 MB into the storage accounts. If there is no issue, the upload should be successful.
+ Alternatively, [download](https://go.microsoft.com/fwlink/?linkid=2138967) Azure Storage Explorer onto the appliance and try to upload 10 blobs of ~64 MB into the storage account. If there's no issue, the upload should be successful.
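The command referenced in step 5 isn't reproduced here; as a minimal sketch, an AzCopy v10 upload test against the cache storage account could look like the following, using the account, container, and SAS values gathered in steps 2-4 (all placeholders, and it assumes `azcopy` is on the PATH):

```powershell
# Hedged sketch: upload a small test blob to the appliance cache storage account with AzCopy v10.
$account   = "migrategwsa0000000000"                              # storage account found in the resource group (step 2)
$container = "connectivitytest"                                   # container created in the portal (step 3)
$sas       = "<SAS token copied from Shared Access Signature>"    # step 4

Set-Content -Path .\upload-test.txt -Value ("x" * 1024)           # ~1 KB test file
azcopy copy ".\upload-test.txt" "https://$account.blob.core.windows.net/$container/upload-test.txt?$sas"
```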
**Resolution:** If this test fails, there's a networking issue. Engage your local networking team to check connectivity issues. Typically, there can be some firewall settings that are causing the failures. 4. Check for connectivity issues between Azure Migrate appliance and Service Bus:
- This test checks if the Azure Migrate appliance can communicate to the Azure Migrate Cloud Service backend. The appliance communicates to the service backend through Service Bus and Event Hub message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus and perform send message/receive message. If there is no issue, this should be successful.
+ This test checks if the Azure Migrate appliance can communicate to the Azure Migrate Cloud Service backend. The appliance communicates to the service backend through Service Bus and Event Hubs message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus and perform send message/receive message. If there's no issue, this should be successful.
**Steps to run the test:**
- 1. Copy the connection string from the Service Bus that got created in the Migrate Project
- 2. Open the Service Bus Explorer
- 3. Go to File then Connect
- 4. Paste the connection string and click Connect
- 5. This will open Service Bus Name Space
- 6. Select Snapshot Manager in the topic. Right click on Snapshot Manager, select "Receive Messages" > select "peek", and click OK
- 7. If the connection is successful, you will see "[x] messages received" on the console output. If the connection is not successful, you'll see a message stating that the connection failed
+ 1. Copy the connection string from the Service Bus that got created in the Migrate Project.
+ 2. Open the Service Bus Explorer.
+ 3. Go to File then Connect.
+ 4. Paste the connection string and select **Connect**.
+ 5. This will open Service Bus Name Space.
+ 6. Select Snapshot Manager. Right-click on Snapshot Manager, select **Receive Messages** > **peek**, and select **OK**.
+ 7. If the connection is successful, you'll see "[x] messages received" on the console output. If the connection isn't successful, you'll see a message stating that the connection failed.
**Resolution:** If this test fails, there's a networking issue. Engage your local networking team to check connectivity issues. Typically, there can be some firewall settings that are causing the failures.
The component trying to replicate data to Azure is either down or not responding
This command will attempt a TCP connection and will return an output.
- - In the output, check the field "_TcpTestSucceeded_". If the value is "_True_", there is no connectivity issue between the Azure Migrate Appliance and the Azure Key Vault. If the value is "False", there is a connectivity issue.
+ - In the output, check the field "_TcpTestSucceeded_". If the value is "_True_", there's no connectivity issue between the Azure Migrate Appliance and the Azure Key Vault. If the value is "False", there's a connectivity issue.
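The connectivity check described above can be run with `Test-NetConnection`; a minimal sketch, with a placeholder vault name:

```powershell
# Hedged sketch: verify TCP connectivity from the appliance to the Key Vault endpoint on port 443 (placeholder vault name).
Test-NetConnection -ComputerName "my-migrate-vault.vault.azure.net" -Port 443 |
    Select-Object ComputerName, RemotePort, TcpTestSucceeded
```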
**Resolution:** If this test fails, there's a connectivity issue between the Azure Migrate appliance and the Azure Key Vault. Engage your local networking team to check connectivity issues. Typically, there can be some firewall settings that are causing the failures.
The component trying to replicate data to Azure is either down or not responding
**Error ID:** 1011
-**Error Message:** The upload of data for disk DiskPath, DiskId of virtual machine VMName; VMId did not complete within the expected time.
+**Error Message:** The upload of data for disk DiskPath, DiskId of virtual machine VMName; VMId didn't complete within the expected time.
This error typically indicates either that the Azure Migrate appliance performing the replication is unable to connect to the Azure Cloud Services, or that replication is progressing slowly causing the replication cycle to time out. The possible causes include: - The Azure Migrate appliance is down.-- The replication gateway service on the appliance is not running.-- The replication gateway service is experiencing connectivity issues to one of the following Azure service components that are used for replication: Service Bus/Event Hub/Azure cache Storage Account/Azure Key Vault.
+- The replication gateway service on the appliance isn't running.
+- The replication gateway service is experiencing connectivity issues to one of the following Azure service components that are used for replication: Service Bus/Event Hubs/Azure cache Storage Account/Azure Key Vault.
- The gateway service is being throttled at the vCenter level while trying to read the disk. **Identifying the root cause and resolving the issue:** 1. Ensure that the Azure Migrate appliance is up and running. 2. Check if the gateway service is running on the appliance:
- 1. Log in to the Azure Migrate appliance using remote desktop and do the following.
+ 1. Sign in to the Azure Migrate appliance using remote desktop and do the following.
- 2. Open the Microsoft services MMC snap-in (run > services.msc), and check if the "Microsoft Azure Gateway Service" is running. If the service is stopped or not running, start the service. Alternatively, you can open command prompt or PowerShell and do: "Net Start asrgwy".
+ 2. Open the Microsoft services MMC snap-in (run > services.msc), and check if the "Microsoft Azure Gateway Service" is running. If the service is stopped or not running, start the service. Alternatively, you can open command prompt or PowerShell and enter 'Net Start asrgwy'.
3. **Check for connectivity issues between Azure Migrate appliance and cache Storage Account:**
The possible causes include:
2. Look for the Appliance Storage Account in the Resource Group. The Storage Account has a name that resembles migratelsa\*\*\*\*\*\*\*\*\*\*. This is the value of parameter [account] in the above command.
- 3. Search for your storage account in the Azure portal. Ensure that the subscription you use to search is the same subscription (target subscription) in which the storage account is created. Go to Containers in the Blob Service section. Click on +Container and create a Container. Leave Public Access Level to default selected value.
+ 3. Search for your storage account in the Azure portal. Ensure that the subscription you use to search is the same subscription (target subscription) in which the storage account is created. Go to Containers in the Blob Service section. Select **+Container** and create a container. Leave the public access level at its default value.
- 4. Go to Shared Access Signature under Settings. Select Container in "Allowed Resource Type." Click on Generate SAS and connection string. Copy the SAS value.
+ 4. Go to **Settings** > **Shared Access Signature**. Select Container in **Allowed Resource Type**. Select Generate SAS and connection string. Copy the SAS value.
5. Execute the above command in Command Prompt by replacing account, container, SAS with the values obtained in steps 2, 3, and 4 respectively.
- Alternatively, [download](https://go.microsoft.com/fwlink/?linkid=2138967) the Azure Storage Explore on to the appliance and try to upload 10 blobs of ~64 MB into the storage accounts. If there is no issue, the upload should be successful.
+ Alternatively, [download](https://go.microsoft.com/fwlink/?linkid=2138967) Azure Storage Explorer onto the appliance and try to upload 10 blobs of ~64 MB into the storage account. If there's no issue, the upload should be successful.
**Resolution:** If this test fails, there's a networking issue. Engage your local networking team to check connectivity issues. Typically, there can be some firewall settings that are causing the failures. 4. **Connectivity issues between Azure Migrate appliance and Azure Service Bus:**
- This test will check whether the Azure Migrate appliance can communicate to the Azure Migrate Cloud Service backend. The appliance communicates to the service backend through Service Bus and Event Hub message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus and perform send message/receive message. If there is no issue, this should be successful.
+ This test will check whether the Azure Migrate appliance can communicate to the Azure Migrate Cloud Service backend. The appliance communicates to the service backend through Service Bus and Event Hubs message queues. To validate connectivity from the appliance to the Service Bus, [download](https://go.microsoft.com/fwlink/?linkid=2139104) the Service Bus Explorer, try to connect to the appliance Service Bus and perform send message/receive message. If there's no issue, this should be successful.
**Steps to run the test:**
- 1. Copy the connection string from the Service Bus that got created in the Resource Group corresponding to Azure Migrate Project
+ 1. Copy the connection string from the Service Bus that got created in the Resource Group corresponding to Azure Migrate Project.
- 1. Open Service Bus Explorer
+ 1. Open Service Bus Explorer.
- 1. Go to File > Connect
+ 1. Go to **File** > **Connect**.
- 1. Paste the connection string you copied in step 1, and click Connect
+ 1. Paste the connection string you copied in step 1, and select **Connect**.
1. This will open Service Bus namespace.
- 1. Select Snapshot Manager in the topic in namespace. Right click on Snapshot Manager, select "Receive Messages" > select "peek", and click OK.
+ 1. Select Snapshot Manager in namespace. Right-click on Snapshot Manager, select **Receive Messages** > **peek**, and select OK.
If the connection is successful, you will see "[x] messages received" on the console output. If the connection is not successful, you'll see a message stating that the connection failed.
The possible causes include:
This command will attempt a TCP connection and will return an output.
- 1. In the output, check the field "_TcpTestSucceeded_". If the value is "_True_", there is no connectivity issue between the Azure Migrate Appliance and the Azure Key Vault. If the value is "False", there is a connectivity issue.
+ 1. In the output, check the field "_TcpTestSucceeded_". If the value is "_True_", there's no connectivity issue between the Azure Migrate Appliance and the Azure Key Vault. If the value is "False", there's a connectivity issue.
**Resolution:** If this test fails, there's a connectivity issue between the Azure Migrate appliance and the Azure Key Vault. Engage your local networking team to check connectivity issues. Typically, there can be some firewall settings that are causing the failures.
The agentless replication method uses VMware's changed block tracking technology
This error can be resolved in the following two ways: -- If you had opted for "Automatically repair replication" by selecting "Yes" when you triggered replication of VM, the tool will try to repair it for you. Right click on the VM, and select "Repair Replication."-- If you did not opt for "Automatically repair replication" or the above step did not work for you, then stop replication for the virtual machine, [reset changed block tracking](https://go.microsoft.com/fwlink/?linkid=2139203) on the virtual machine, and then reconfigure replication.
+- If you had opted for "Automatically repair replication" by selecting "Yes" when you triggered replication of the VM, the tool will try to repair it for you. Right-click the VM, and select **Repair Replication**.
+- If you didn't opt for "Automatically repair replication" or the above step didn't work for you, then stop replication for the virtual machine, [reset changed block tracking](https://go.microsoft.com/fwlink/?linkid=2139203) on the virtual machine, and then reconfigure replication.
-One such known issue that may cause a CBT reset of virtual machine on VMware vSphere 5.5 is described in [VMware KB 1020128: Changed Block Tracking](https://kb.vmware.com/s/article/1020128) is reset after a storage vMotion operation in vSphere 5.x . If you are on VMware vSphere 5.5 ensure that you apply the updates described in this KB.
+One such known issue that may cause a CBT reset of a virtual machine on VMware vSphere 5.5 is described in [VMware KB 1020128: Changed Block Tracking is reset after a storage vMotion operation in vSphere 5.x](https://kb.vmware.com/s/article/1020128). If you're on VMware vSphere 5.5, ensure that you apply the updates described in that KB.
Alternatively, you can reset VMware changed block tracking on a virtual machine using VMware PowerCLI.
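A minimal PowerCLI sketch of that reset, offered as an illustration only; the vCenter address and VM name are placeholders, and you should follow the linked VMware guidance for the authoritative procedure:

```powershell
# Assumes VMware PowerCLI is installed; connect to your vCenter first (placeholder address).
Connect-VIServer -Server "<vcenter-address>"
$vm = Get-VM -Name "<vm-name>"

# Disable changed block tracking (CBT) on the VM.
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.ChangeTrackingEnabled = $false
$vm.ExtensionData.ReconfigVM($spec)

# Create and delete a snapshot so the change takes effect, then re-enable CBT.
New-Snapshot -VM $vm -Name "cbt-reset" | Remove-Snapshot -Confirm:$false
$spec.ChangeTrackingEnabled = $true
$vm.ExtensionData.ReconfigVM($spec)
```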
The issue is a known VMware issue and occurs in VDDK 6.7. You need to stop the g
Steps to stop the gateway service:
-1. Press Windows + R, open services.msc. Click on "Microsoft Azure Gateway Service", and stop it.
-2. Alternatively, you can open command prompt or PowerShell and do: Net Stop asrgwy. Ensure you wait until you get the message that service is no longer running.
+1. Press Windows + R and open services.msc. Select **Microsoft Azure Gateway Service**, and stop it.
+2. Alternatively, you can open command prompt or PowerShell and enter 'Net Stop asrgwy'. Ensure you wait until you get the message that service is no longer running.
Steps to start the gateway service:
-1. Press Windows + R, open services.msc. Right click on "Microsoft Azure Gateway Service", and start it.
-2. Alternatively, you can open command prompt or PowerShell and do: Net Start asrgwy.
+1. Press Windows + R, open services.msc. Right-click on **Microsoft Azure Gateway Service**, and start it.
+2. Alternatively, you can open a command prompt or PowerShell and run 'Net Start asrgwy'.
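If you prefer PowerShell's built-in service cmdlets to 'Net Stop'/'Net Start', the equivalent commands are shown below (assuming the service short name asrgwy, as above):

```powershell
Stop-Service asrgwy     # stop the Microsoft Azure Gateway Service
Get-Service asrgwy      # confirm Status shows Stopped before proceeding
Start-Service asrgwy    # start it again when you're ready
```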
### Error Message: An internal error occurred. ['An Invalid snapshot configuration was detected.']
This happens when the NFC host buffer is out of memory. To resolve this issue, y
This happens when the file size is larger than the maximum supported file size while creating the snapshot. Follow the resolution given in the [VMware KB](https://kb.vmware.com/s/article/1012384) ### Error Message: An internal error occurred. [Cannot connect to the host (1004109)]
-This happens when ESXi hosts cannot connect to the network. Follow the resolution given in the [VMware KB](https://kb.vmware.com/s/article/1004109).
+This happens when ESXi hosts can't connect to the network. Follow the resolution given in the [VMware KB](https://kb.vmware.com/s/article/1004109).
### Error message: An error occurred while saving the snapshot: Invalid change tracker error code This error occurs when there's a problem with the underlying datastore on which the snapshot is being stored. Follow the resolution given in the [VMware KB](https://kb.vmware.com/s/article/2042742).
This error occurs when the size of the snapshot file created is larger than the
**Error ID:** 181008
-**Error Message:** VM: 'VMName'. Error: No disksnapshots were found for the snapshot replication with snapshot Id : 'SnapshotID'.
+**Error Message:** VM: 'VMName'. Error: No disksnapshots were found for the snapshot replication with snapshot ID: 'SnapshotID'.
**Possible Causes:**
-Possible reasons are:
-1. Path of one or more included disks changed due to Storage VMotion.
-2. One or more included disks is no longer attached to the VM.
+- One or more included disks is no longer attached to the VM.
**Recommendation:**
-Following recommendations are provided
-1. Restore the included disks to original path using storage vMotion and then disable storage vmotion.
-2. Disable Storage VMotion, if enabled, stop replication on the virtual machine, and replicate the virtual machine again. If the issue persists, contact support.
+- Restore the included disks to the original path using storage vMotion and try replication again.
## Next Steps
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
If you don't have an Azure subscription, create a [free account](https://azure.m
Before you begin this tutorial, you should:
-[Review](./agent-based-migration-architecture.md) the migration architecture.
+- [Review](./agent-based-migration-architecture.md) the migration architecture.
+- [Review](/site-recovery/migrate-tutorial-windows-server-2008.md#limitations-and-known-issues) the limitations related to migrating Windows Server 2008 servers to Azure.
## Prepare Azure
migrate Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/whats-new.md
- We recommended migrating instances to *SQL Server on Azure VM* as per the Azure best practices. - *Right sized Lift and Shift* - Server to *SQL Server on Azure VM*. We recommend this when SQL Server credentials are not available. - Enhanced user-experience that covers readiness and cost estimates for multiple migration targets for SQL deployments in one assessment.
+- Support for Storage vMotion during replication for agentless VMware VM migrations.
## Update (March 2022) - Perform agentless VMware VM discovery, assessments, and migrations over a private network using Azure Private Link. [Learn more.](how-to-use-azure-migrate-with-private-endpoints.md)
openshift Howto Restrict Egress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-restrict-egress.md
Title: Restrict egress traffic in an Azure Red Hat OpenShift (ARO) cluster description: Learn what ports and addresses are required to control egress traffic in Azure Red Hat OpenShift (ARO)--++ Previously updated : 04/09/2021 Last updated : 06/02/2022 # Control egress traffic for your Azure Red Hat OpenShift (ARO) cluster (preview)
-This article provides the necessary details that allow you to secure outbound traffic from your Azure Red Hat OpenShift cluster (ARO). It contains the cluster requirements for a basic ARO deployment, and more requirements for optional Red Hat and third-party components. An [example](#private-aro-cluster-setup) will be provided at the end on how to configure these requirements with Azure Firewall. Keep in mind, you can apply this information to Azure Firewall or to any outbound restriction method or appliance.
+This article provides the necessary details that allow you to secure outbound traffic from your Azure Red Hat OpenShift cluster (ARO). With the release of the [Egress Lockdown Feature](./concepts-egress-lockdown.md), all of the required connections for a private cluster are proxied through the service. There are additional destinations that you may want to allow in order to use features such as Operator Hub or Red Hat telemetry. An [example](#private-aro-cluster-setup) is provided at the end on how to configure these requirements with Azure Firewall. Keep in mind, you can apply this information to Azure Firewall or to any outbound restriction method or appliance.
## Before you begin
This article assumes that you're creating a new cluster. If you need a basic ARO
> [!IMPORTANT] > ARO preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. ARO previews are partially covered by customer support on a best-effort basis.
-## Minimum Required FQDN / application rules
+## Minimum Required FQDN - Proxied through ARO service
This list is based on the list of FQDNs found in the OpenShift docs here: https://docs.openshift.com/container-platform/4.6/installing/install_config/configuring-firewall.html
-The following FQDN / application rules are required:
+The following FQDNs are proxied through the service and don't need additional firewall rules. They're listed here for informational purposes.
| Destination FQDN | Port | Use | | -- | -- | - |
-| **`*.quay.io`** | **HTTPS:443** | Mandatory for the installation, used by the cluster. This is used by the cluster to download the platform container images. |
-| **`registry.redhat.io`** | **HTTPS:443** | Mandatory for core add-ons. This is used by the cluster to download core components such as dev tools, operator-based add-ons, and Red Hat provided container images.
-| **`mirror.openshift.com`** | **HTTPS:443** | This is required in the VDI environment or your laptop to access mirrored installation content and images. It's required in the cluster to download platform release signatures to know what images to pull from quay.io. |
-| **`api.openshift.com`** | **HTTPS:443** | Required by the cluster to check if there are available updates before downloading the image signatures. |
| **`arosvc.azurecr.io`** | **HTTPS:443** | Global Internal Private registry for ARO Operators. Required if you do not allow the service-endpoints Microsoft.ContainerRegistry on your subnets. | | **`arosvc.$REGION.data.azurecr.io`** | **HTTPS:443** | Regional Internal Private registry for ARO Operators. Required if you do not allow the service-endpoints Microsoft.ContainerRegistry on your subnets. | | **`management.azure.com`** | **HTTPS:443** | This is used by the cluster to access Azure APIs. | | **`login.microsoftonline.com`** | **HTTPS:443** | This is used by the cluster for authentication to Azure. |
-| **`gcs.prod.monitoring.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
+| **`*.monitor.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
+| **`*.monitoring.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
| **`*.blob.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). | | **`*.servicebus.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). | | **`*.table.core.windows.net`** | **HTTPS:443** | This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s). |
The following FQDN / application rules are required:
-## Complete list of required and optional FQDNs
+## List of optional FQDNs
-### FIRST GROUP: INSTALLING AND DOWNLOADING PACKAGES AND TOOLS
+### INSTALLING AND DOWNLOADING PACKAGES AND TOOLS
-- **`quay.io`**: Mandatory for the installation, used by the cluster. This is used by the cluster to download the platform container images.-- **`registry.redhat.io`**: Mandatory for core add-ons. This is used by the cluster to download core components such as dev tools, operator-based add-ons, or Red Hat provided container images such as our middleware, the Universal Base Image...-- **`sso.redhat.com`**: This one is required in the VDI environment or your laptop to connect to cloud.redhat.com. This is the site where we can download the pull secret, and use some of the SaaS solutions we offer in Red Hat to facilitate monitoring of your subscriptions, cluster inventory, chargeback reporting, among other things.-- **`openshift.org`**: This one is required in the VDI environment or your laptop to connect to download RH CoreOS images, but in Azure they are picked from the marketplace, there is no need to download OS images.
+- **`registry.redhat.io`**: Used to provide images for features such as Operator Hub.
-### SECOND GROUP: TELEMETRY
+### TELEMETRY
You can opt out of this entire section, but before you do, check what it covers: https://docs.openshift.com/container-platform/4.6/support/remote_health_monitoring/about-remote-health-monitoring.html-- **`cert-api.access.redhat.com`**: Use in your VDI or laptop environment.-- **`api.access.redhat.com`**: Use in your VDI or laptop environment.-- **`infogw.api.openshift.com`**: Use in your VDI or laptop environment.-- **`https://cloud.redhat.com/api/ingress`**: Use in the cluster for the insights operator who integrates with the aaS Red Hat Insights.
+- **`cert-api.access.redhat.com`**: Used for Red Hat telemetry.
+- **`api.access.redhat.com`**: Used for Red Hat telemetry.
+- **`infogw.api.openshift.com`**: Used for Red Hat telemetry.
+- **`https://cloud.redhat.com/api/ingress`**: Used in the cluster by the Insights Operator, which integrates with Red Hat Insights.
In OpenShift Container Platform, customers can opt out of reporting health and usage information. However, connected clusters allow Red Hat to react more quickly to problems, better support our customers, and better understand how product upgrades affect clusters. Check details here: https://docs.openshift.com/container-platform/4.6/support/remote_health_monitoring/opting-out-of-remote-health-reporting.html.
-### THIRD GROUP: CLOUD APIs
--- **`management.azure.com`**: This is used by the cluster to access Azure APIs.---
-### FOURTH GROUP: OTHER OPENSHIFT REQUIREMENTS
+### OTHER POSSIBLE OPENSHIFT REQUIREMENTS
-- **`mirror.openshift.com`**: This one is required in the VDI environment or your laptop to access mirrored installation content and images and required in the cluster to download platform release signatures, used by the cluster to know what images to pull from quay.io.-- **`storage.googleapis.com/openshift-release`**: Alternative site to download platform release signatures, used by the cluster to know what images to pull from quay.io.-- **`*.apps.<cluster_name>.<base_domain>`** (OR EQUIVALENT ARO URL): When allowlisting domains, this is use in your corporate network to reach applications deployed in OpenShift, or to access the OpenShift console.-- **`api.openshift.com`**: Required by the cluster to check if there are available updates before downloading the image signatures.
+- **`quay.io`**: May be used to download images from the Red Hat managed Quay registry. Also a possible fallback target for ARO-required system images.
+- **`mirror.openshift.com`**: Required to access mirrored installation content and images. This site is also a source of release image signatures.
+- **`*.apps.<cluster_name>.<base_domain>`** (OR EQUIVALENT ARO URL): When allowlisting domains, this is used in your corporate network to reach applications deployed in OpenShift, or to access the OpenShift console.
+- **`api.openshift.com`**: Used by the cluster for release graph parsing. https://access.redhat.com/labs/ocpupgradegraph/ can be used as an alternative.
- **`registry.access.redhat.com`**: Registry access is required in your VDI or laptop environment to download dev images when using the ODO CLI tool. (This CLI tool is an alternative CLI tool for developers who aren't familiar with kubernetes). https://docs.openshift.com/container-platform/4.6/cli_reference/developer_cli_odo/understanding-odo.html --
-### FIFTH GROUP: MICROSOFT & RED HAT ARO MONITORING SERVICE
--- **`login.microsoftonline.com`**: This is used by the cluster for authentication to Azure.-- **`gcs.prod.monitoring.core.windows.net`**: This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s).-- **`*.blob.core.windows.net`**: This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s).-- **`*.servicebus.windows.net`**: This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s).-- **`*.table.core.windows.net`**: This is used for Microsoft Geneva Monitoring so that the ARO team can monitor the customer's cluster(s).- ## ARO integrations ### Azure Monitor for containers
az network route-table route create -g $RESOURCEGROUP --name aro-udr --route-tab
``` ### Add Application Rules for Azure Firewall
-Rule for OpenShift to work based on this [list](https://docs.openshift.com/container-platform/4.3/installing/install_config/configuring-firewall.html#configuring-firewall_configuring-firewall):
+Example rule to allow telemetry to work. Additional possibilities can be found in this [list](https://docs.openshift.com/container-platform/4.3/installing/install_config/configuring-firewall.html#configuring-firewall_configuring-firewall):
```azurecli az network firewall application-rule create -g $RESOURCEGROUP -f aro-private \ --collection-name 'ARO' \
az network firewall application-rule create -g $RESOURCEGROUP -f aro-private \
-n 'required' \ --source-addresses '*' \ --protocols 'http=80' 'https=443' \
- --target-fqdns 'registry.redhat.io' '*.quay.io' 'sso.redhat.com' 'management.azure.com' 'mirror.openshift.com' 'api.openshift.com' 'quay.io' '*.blob.core.windows.net' 'gcs.prod.monitoring.core.windows.net' 'registry.access.redhat.com' 'login.microsoftonline.com' '*.servicebus.windows.net' '*.table.core.windows.net' 'grafana.com'
+ --target-fqdns 'cert-api.access.redhat.com' 'api.openshift.com' 'api.access.redhat.com' 'infogw.api.openshift.com'
``` Optional rules for Docker images: ```azurecli
az network vnet subnet update -g $RESOURCEGROUP --vnet-name $AROVNET --name "$CL
## Test the configuration from the Jumpbox These steps work only if you added rules for Docker images. ### Configure the jumpbox
-Log into a jumpbox VM and install `azure-cli`, `oc-cli`, and `jq` utils. For the installation of openshift-cli, check the Red Hat customer portal.
+Log in to a jumpbox VM and install `azure-cli`, `oc-cli`, and `jq` utils. For the installation of openshift-cli, check the Red Hat customer portal.
```bash #Install Azure-cli curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash #Install jq sudo apt install jq -y ```
-### Log into the ARO cluster
+### Log in to the ARO cluster
List cluster credentials: ```bash
postgresql Concepts Row Level Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/hyperscale/concepts-row-level-security.md
+
+ Title: Row level security – Hyperscale (Citus) - Azure Database for PostgreSQL
+description: Multi-tenant security through database roles
+++++ Last updated : 06/30/2022++
+# Row-level security in Hyperscale (Citus)
++
+PostgreSQL [row-level security
+policies](https://www.postgresql.org/docs/current/ddl-rowsecurity.html)
+restrict which users can modify or access which table rows. Row-level security
+can be especially useful in a multi-tenant Hyperscale (Citus) server group. It
+allows individual tenants to have full SQL access to the database while hiding
+each tenant's information from other tenants.
+
+## Implementing for multi-tenant apps
+
+We can implement the separation of tenant data by using a naming convention for
+database roles that ties into table row-level security policies. We'll assign
+each tenant a database role in a numbered sequence: `tenant1`, `tenant2`,
+etc. Tenants will connect to Citus using these separate roles. Row-level
+security policies can compare the role name to values in the `tenant_id`
+distribution column to decide whether to allow access.
+
+Here's how to apply the approach on a simplified events table distributed by
+`tenant_id`. First [create the roles](howto-create-users.md) `tenant1` and
+`tenant2`. Then run the following SQL commands as the `citus` administrator
+user:
+
+```postgresql
+CREATE TABLE events(
+ tenant_id int,
+ id int,
+ type text
+);
+
+SELECT create_distributed_table('events','tenant_id');
+
+INSERT INTO events VALUES (1,1,'foo'), (2,2,'bar');
+
+-- assumes that roles tenant1 and tenant2 exist
+GRANT select, update, insert, delete
+ ON events TO tenant1, tenant2;
+```
+
+As it stands, anyone with select permissions for this table can see both rows.
+Users from either tenant can see and update the row of the other tenant. We can
+solve the data leak with row-level table security policies.
+
+Each policy consists of two clauses: USING and WITH CHECK. When a user tries to
+read or write rows, the database evaluates each row against these clauses.
+PostgreSQL checks existing table rows against the expression specified in the
+USING clause, and rows that would be created via INSERT or UPDATE against the
+WITH CHECK clause.
+
+```postgresql
+-- first a policy for the system admin "citus" user
+CREATE POLICY admin_all ON events
+ TO citus -- apply to this role
+ USING (true) -- read any existing row
+ WITH CHECK (true); -- insert or update any row
+
+-- next a policy which allows role "tenant<n>" to
+-- access rows where tenant_id = <n>
+CREATE POLICY user_mod ON events
+ USING (current_user = 'tenant' || tenant_id::text);
+ -- lack of CHECK means same condition as USING
+
+-- enforce the policies
+ALTER TABLE events ENABLE ROW LEVEL SECURITY;
+```
+
+Now roles `tenant1` and `tenant2` get different results for their queries:
+
+**Connected as tenant1:**
+
+```sql
+SELECT * FROM events;
+```
+```
+┌───────────┬────┬──────┐
+│ tenant_id │ id │ type │
+├───────────┼────┼──────┤
+│         1 │  1 │ foo  │
+└───────────┴────┴──────┘
+```
+
+**Connected as tenant2:**
+
+```sql
+SELECT * FROM events;
+```
+```
+┌───────────┬────┬──────┐
+│ tenant_id │ id │ type │
+├───────────┼────┼──────┤
+│         2 │  2 │ bar  │
+└───────────┴────┴──────┘
+```
+```sql
+INSERT INTO events VALUES (3,3,'surprise');
+/*
+ERROR: new row violates row-level security policy for table "events_102055"
+*/
+```
+
+## Next steps
+
+Learn how to [create roles](howto-create-users.md) in a Hyperscale (Citus)
+server group.
purview Asset Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/asset-insights.md
Last updated 05/16/2022
This guide describes how to access, view, and filter Microsoft Purview asset insight reports for your data. - In this guide, you'll learn how to: > [!div class="checklist"]
purview Classification Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/classification-insights.md
This guide describes how to access, view, and filter Microsoft Purview Classification insight reports for your data. - In this guide, you'll learn how to: > [!div class="checklist"]
purview Concept Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-insights.md
Last updated 05/16/2022
This article provides an overview of the Data Estate Insights application in Microsoft Purview. - The Data Estate Insights application is purpose-built for governance stakeholders, primarily for roles focused on data management, compliance, and data use: like a Chief Data Officer. The application provides actionable insights into the organization's data estate, catalog usage, adoption, and processes. As organizations scan and populate their Microsoft Purview Data Map, the Data Estate Insights application automatically extracts valuable governance gaps and highlights them in its top metrics. Then it also provides a drill-down experience that enables all stakeholders, such as data owners and data stewards, to take appropriate action to close the gaps. All the reports within the Data Estate Insights application are automatically generated and populated, so governance stakeholders can focus on the information itself, rather than building the reports.
purview Glossary Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/glossary-insights.md
Last updated 05/16/2022
This guide describes how to access, view, and filter Microsoft Purview glossary insight reports for your data. - In this how-to guide, you'll learn how to: > [!div class="checklist"]
purview Insights Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/insights-permissions.md
Last updated 05/16/2022
Like all other permissions in Microsoft Purview, Data Estate Insights access is given through collections. This article describes what permissions are needed to access Data Estate Insights in Microsoft Purview. - ## Insights reader role The insights reader role gives users read permission to the Data Estate Insights application in Microsoft Purview. However, a user with this role will only have access to information for collections that they also have at least data reader access to.
purview Sensitivity Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/sensitivity-insights.md
This how-to guide describes how to access, view, and filter security insights provided by sensitivity labels applied to your data. - Supported data sources include: Azure Blob Storage, Azure Data Lake Storage (ADLS) GEN 1, Azure Data Lake Storage (ADLS) GEN 2, SQL Server, Azure SQL Database, Azure SQL Managed Instance, Amazon S3 buckets, Amazon RDS databases (public preview), Power BI In this how-to guide, you'll learn how to:
role-based-access-control Quickstart Role Assignments Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/quickstart-role-assignments-bicep.md
+
+ Title: "Quickstart: Assign an Azure role using Bicep - Azure RBAC"
+description: Learn how to grant access to Azure resources for a user at resource group scope using Bicep and Azure role-based access control (Azure RBAC).
++++++ Last updated : 06/30/2022+
+#Customer intent: As a new user, I want to see how to grant access to resources using Bicep so that I can start automating role assignment processes.
++
+# Quickstart: Assign an Azure role using Bicep
+
+[Azure role-based access control (Azure RBAC)](overview.md) is the way that you manage access to Azure resources. In this quickstart, you create a resource group and grant a user access to create and manage virtual machines in the resource group. This quickstart uses Bicep to grant the access.
++
+## Prerequisites
+
+To assign Azure roles and remove role assignments, you must have:
+
+- An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- `Microsoft.Authorization/roleAssignments/write` and `Microsoft.Authorization/roleAssignments/delete` permissions, such as [User Access Administrator](built-in-roles.md#user-access-administrator) or [Owner](built-in-roles.md#owner).
+- To assign a role, you must specify three elements: security principal, role definition, and scope. For this quickstart, the security principal is you or another user in your directory, the role definition is [Virtual Machine Contributor](built-in-roles.md#virtual-machine-contributor), and the scope is a resource group that you specify.
+
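For comparison only, the same three elements can be assigned directly with Az PowerShell. This sketch assumes the exampleRG resource group used later in this quickstart already exists, and the principal ID is a placeholder:

```azurepowershell
# Illustrative only: principal, role definition, and scope assigned without Bicep.
New-AzRoleAssignment -ObjectId "<principal-id>" `
  -RoleDefinitionName "Virtual Machine Contributor" `
  -ResourceGroupName "exampleRG"
```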
+## Review the Bicep file
+
+The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/rbac-builtinrole-resourcegroup/). The Bicep file has two parameters and a resources section. In the resources section, notice that it has the three elements of a role assignment: security principal, role definition, and scope.
++
+The resource defined in the Bicep file is:
+
+- [Microsoft.Authorization/roleAssignments](/azure/templates/Microsoft.Authorization/roleAssignments)
+
+## Deploy the Bicep file
+
+1. Save the Bicep file as **main.bicep** to your local computer.
+1. Deploy the Bicep file using either Azure CLI or Azure PowerShell.
+
+ # [CLI](#tab/CLI)
+
+ ```azurecli
+ az group create --name exampleRG --location eastus
+ az deployment group create --resource-group exampleRG --template-file main.bicep --parameters roleDefinitionID=9980e02c-c2be-4d73-94e8-173b1dc7cf3c principalId=<principal-id>
+ ```
+
+ # [PowerShell](#tab/PowerShell)
+
+ ```azurepowershell
+ New-AzResourceGroup -Name exampleRG -Location eastus
+ New-AzResourceGroupDeployment -ResourceGroupName exampleRG -TemplateFile ./main.bicep -roleDefinitionID "9980e02c-c2be-4d73-94e8-173b1dc7cf3c" -principalId "<principal-id>"
+ ```
+
+
+
+> [!NOTE]
+> Replace **\<principal-id\>** with the principal ID assigned to the role.
+
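If you need to look up a principal ID, one option is an Az PowerShell query like the following sketch; the user principal name is a placeholder:

```azurepowershell
# Returns the object (principal) ID for the specified user.
(Get-AzADUser -UserPrincipalName "user@contoso.com").Id
```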
+ When the deployment finishes, you should see a message indicating the deployment succeeded.
+
+## Review deployed resources
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to list the deployed resources in the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az role assignment list --resource-group exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Get-AzRoleAssignment -ResourceGroupName exampleRG
+```
+++
+## Clean up resources
+
+When no longer needed, use the Azure portal, Azure CLI, or Azure PowerShell to remove the role assignment. For more information, see [Remove Azure role assignments](role-assignments-remove.md).
+
+Use the Azure portal, Azure CLI, or Azure PowerShell to delete the resource group.
+
+# [CLI](#tab/CLI)
+
+```azurecli-interactive
+az group delete --name exampleRG
+```
+
+# [PowerShell](#tab/PowerShell)
+
+```azurepowershell-interactive
+Remove-AzResourceGroup -Name exampleRG
+```
+++
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Tutorial: Grant a user access to Azure resources using Azure PowerShell](tutorial-role-assignments-user-powershell.md)
search Cognitive Search Concept Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-concept-intro.md
Previously updated : 06/27/2022 Last updated : 07/01/2022 # AI enrichment in Azure Cognitive Search
-*AI enrichment* is the application of machine learning models over raw content, where analysis and inference are used to create searchable content and structure where none previously existed. Because Azure Cognitive Search is a full text search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios:
+*AI enrichment* is the application of machine learning models over content that isn't full text searchable in its raw form. Through enrichment, analysis and inference are used to create searchable content and structure where none previously existed.
-+ Machine translation and language detection support multi-lingual search
-+ Entity recognition finds people, places, and other entities in large chunks of text
+Because Azure Cognitive Search is a full text search solution, the purpose of AI enrichment is to improve the utility of your content in search-related scenarios:
+++ Machine translation and language detection, in support of multi-lingual search++ Entity recognition extracts people, places, and other entities from large chunks of text + Key phrase extraction identifies and then outputs important terms
-+ Optical Character Recognition (OCR) recognizes text in binary files
++ Optical Character Recognition (OCR) recognizes printed and handwritten text in binary files + Image analysis describes image content and outputs the descriptions as searchable text fields
-AI enrichment is an extension of an [**indexer pipeline**](search-indexer-overview.md). It has all of the base components (indexer, data source, index), plus a [**skillset**](cognitive-search-working-with-skillsets.md) that specifies atomic enrichment steps.
+AI enrichment is an extension of an [**indexer pipeline**](search-indexer-overview.md). An enrichment pipeline has all of the components of an indexer pipeline (indexer, data source, index), plus a [**skillset**](cognitive-search-working-with-skillsets.md) that specifies atomic enrichment steps.
The following diagram shows the progression of AI enrichment:
The following diagram shows the progression of AI enrichment:
**Enrich & Index** covers most of the AI enrichment pipeline:
-+ Enrichment starts when the indexer ["cracks documents"](search-indexer-overview.md#document-cracking) and extracts images and text. The kind of processing that occurs next will depend on your data and which skills you've added to a skillset. If you have images, they can be forwarded to skills that perform image processing. Text content is queued for text and natural language processing. Internally, skills create an "enriched document" that collects the transformations as they occur.
++ Enrichment starts when the indexer ["cracks documents"](search-indexer-overview.md#document-cracking) and extracts images and text. The kind of processing that occurs next will depend on your data and which skills you've added to a skillset. If you have images, they can be forwarded to skills that perform image processing. Text content is queued for text and natural language processing. Internally, skills create an ["enriched document"](cognitive-search-working-with-skillsets.md#enrichment-tree) that collects the transformations as they occur.+++ Enriched content is generated during skillset execution, and is temporary unless you save it. You can enable an [enrichment cache](cognitive-search-incremental-indexing-conceptual.md) to persist cracked documents and skill outputs for subsequent reuse during future skillset executions.
- Enriched content is generated during skillset execution, and is temporary unless you save it. In order for enriched content to appear in a search index, the indexer must have mapping information so that it can send enriched content to a field in a search index. Output field mappings set up these associations.
++ To get content into a search index, the indexer must have mapping information for sending enriched content to a target field. [Field mappings](search-indexer-field-mappings.md) (explicit or implicit) set the data path from source data to a search index. [Output field mappings](cognitive-search-output-field-mapping.md) set the data path from enriched documents to an index.
-+ Indexing is the process wherein raw and enriched content is ingested into a [search index](search-what-is-an-index.md) (its files and folders).
++ Indexing is the process wherein raw and enriched content is ingested into the physical data structures of a [search index](search-what-is-an-index.md) (its files and folders). Lexical analysis and tokenization occur in this step.
-**Exploration** is the last step. Output is always a [search index](search-what-is-an-index.md) that you can query from a client app. Output can optionally be a [knowledge store](knowledge-store-concept-intro.md) consisting of blobs and tables in Azure Storage that are accessed through data exploration tools or downstream processes. [Field mappings](search-indexer-field-mappings.md), [output field mappings](cognitive-search-output-field-mapping.md), and [projections](knowledge-store-projection-overview.md) determine the data paths that direct content out of the pipeline and into a search index or knowledge store. The same enriched content can appear in both, using implicit or explicit field mappings to send the content to the correct fields.
+**Exploration** is the last step. Output is always a [search index](search-what-is-an-index.md) that you can query from a client app. Output can optionally be a [knowledge store](knowledge-store-concept-intro.md) consisting of blobs and tables in Azure Storage that are accessed through data exploration tools or downstream processes. If you're creating a knowledge store, [projections](knowledge-store-projection-overview.md) determine the data path for enriched content. The same enriched content can appear in both indexes and knowledge stores.
<!-- ![Enrichment pipeline diagram](./media/cognitive-search-intro/cogsearch-architecture.png "enrichment pipeline") --> ## When to use AI enrichment
-Enrichment is useful if raw content is unstructured text, image content, or content that needs language detection and translation. Applying AI through the [*built-in skills*](cognitive-search-predefined-skills.md) can unlock this content for full text search and data science applications.
+Enrichment is useful if raw content is unstructured text, image content, or content that needs language detection and translation. Applying AI through the [**built-in skills**](cognitive-search-predefined-skills.md) can unlock this content for full text search and data science applications.
-Enrichment also unlocks external processing that you provide. Open-source, third-party, or first-party code can be integrated into the pipeline as a custom skill. Classification models that identify salient characteristics of various document types fall into this category, but any external package that adds value to your content could be used.
+You can also create [**custom skills**](cognitive-search-create-custom-skill-example.md) to provide external processing.
+Open-source, third-party, or first-party code can be integrated into the pipeline as a custom skill. Classification models that identify salient characteristics of various document types fall into this category, but any external package that adds value to your content could be used.
### Use-cases for built-in skills
Billing follows a pay-as-you-go pricing model. The costs of using built-in skill
## Checklist: A typical workflow
-An enrichment pipeline consists of [*indexers*](search-indexer-overview.md) that have [*skillsets*](cognitive-search-working-with-skillsets.md). A skillset defines the enrichment steps, and the indexer drives the skillset. When configuring an indexer, you can include properties like output field mappings that send enriched content to a [search index](search-what-is-an-index.md) or projections that define data structures in a [knowledge store](knowledge-store-concept-intro.md).
+An enrichment pipeline consists of [*indexers*](search-indexer-overview.md) that have [*skillsets*](cognitive-search-working-with-skillsets.md). Post-indexing, you can query an index to validate your results.
-Post-indexing, you can access content via search requests through all [query types supported by Azure Cognitive Search](search-query-overview.md).
-
-1. Start with a subset of data. Indexer and skillset design is an iterative process, and the work goes faster with a small representative data set.
+Start with a subset of data in a [supported data source](search-indexer-overview.md#supported-data-sources). Indexer and skillset design is an iterative process. The work goes faster with a small representative data set.
1. Create a [data source](/rest/api/searchservice/create-data-source) that specifies a connection to your data.
-1. Create a [skillset](cognitive-search-defining-skillset.md) to add enrichment steps. If you're using a knowledge store, you'll specify it in this step. Unless you're doing a small proof-of-concept exercise, you'll want to [attach a multi-region Cognitive Services resource](cognitive-search-attach-cognitive-services.md) to the skillset.
+1. [Create a skillset](cognitive-search-defining-skillset.md). Unless your project is small, you'll want to [attach a Cognitive Services resource](cognitive-search-attach-cognitive-services.md). If you're [creating a knowledge store](knowledge-store-create-rest.md), define it within the skillset.
+
+1. [Create an index schema](search-how-to-create-search-index.md) that defines a search index.
-1. Create an [index schema](search-how-to-create-search-index.md) that defines a search index.
+1. [Create and run the indexer](search-howto-create-indexers.md) to bring all of the above components together. This step retrieves the data, runs the skillset, and loads the index.
-1. Create and run the [indexer](search-howto-create-indexers.md) to bring all of the above components together. This step retrieves the data, runs the skillset, and loads the index. An indexer is also where you specify field mappings and output field mappings that set up the data path to a search index.
+ An indexer is also where you specify field mappings and output field mappings that set up the data path to a search index.
- If possible, [enable enrichment caching](cognitive-search-incremental-indexing-conceptual.md) in the indexer configuration. This step allows you to reuse existing enrichments later on.
+ Optionally, [enable enrichment caching](cognitive-search-incremental-indexing-conceptual.md) in the indexer configuration. This step allows you to reuse existing enrichments later on.
-1. Run [queries](search-query-create.md) to evaluate results and modify code to update skillsets, schema, or indexer configuration.
+1. [Run queries](search-query-create.md) to evaluate results or [start a debug session](cognitive-search-how-to-debug-skillset.md) to work through any skillset issues.
-1. To repeat any of the above steps, [reset the indexer](search-howto-reindex.md) before you run it. Or, delete and recreate the objects on each run (recommended if youΓÇÖre using the free tier). If you enabled caching the indexer will pull from the cache if data is unchanged at the source, and if your edits to the pipeline don't invalidate the cache.
+To repeat any of the above steps, [reset the indexer](search-howto-reindex.md) before you run it. Or, delete and recreate the objects on each run (recommended if you're using the free tier). If you enabled caching, the indexer will pull from the cache if data is unchanged at the source and your edits to the pipeline don't invalidate the cache.
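For orientation, here's a hedged sketch of one of the REST calls in the workflow above, issued from PowerShell: running an existing indexer on demand. The service name, indexer name, and admin key are placeholders for your own values.

```powershell
# Run an existing indexer through the Cognitive Search REST API (placeholder values).
$headers = @{ 'api-key' = '<admin-api-key>' }
$uri = 'https://<service-name>.search.windows.net/indexers/<indexer-name>/run?api-version=2020-06-30'
Invoke-RestMethod -Method Post -Uri $uri -Headers $headers
```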
## Next steps
search Search What Is Azure Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md
Previously updated : 05/31/2022 Last updated : 07/01/2022 # What is Azure Cognitive Search? Azure Cognitive Search ([formerly known as "Azure Search"](whats-new.md#new-service-name)) is a cloud search service that gives developers infrastructure, APIs, and tools for building a rich search experience over private, heterogeneous content in web, mobile, and enterprise applications.
-Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, online retail, or data exploration.
+Search is foundational to any app that surfaces text content to users, with common scenarios including catalog or document search, online retail, or data exploration over proprietary content.
When you create a search service, you'll work with the following capabilities:
-+ A search engine for full text search with storage for user-owned content in a search index
++ A search engine for full text search over a search index containing your user-owned content + Rich indexing, with [text analysis](search-analyzers.md) and [optional AI enrichment](cognitive-search-concept-intro.md) for advanced content extraction and transformation + Rich query syntax that supplements free text search with filters, autocomplete, regex, geo-search and more + Programmability through REST APIs and client libraries in Azure SDKs for .NET, Python, Java, and JavaScript
On the search service itself, the two primary workloads are *indexing* and *quer
+ [**Querying**](search-query-overview.md) can happen once an index is populated with searchable text, when your client app sends query requests to a search service and handles responses. All query execution is over a search index that you create, own, and store in your service. In your client app, the search experience is defined using APIs from Azure Cognitive Search, and can include relevance tuning, autocomplete, synonym matching, fuzzy matching, pattern matching, filter, and sort.
-Functionality is exposed through a simple [REST API](/rest/api/searchservice/), or Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md), that masks the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
+Functionality is exposed through simple [REST APIs](/rest/api/searchservice/), or Azure SDKs like the [Azure SDK for .NET](search-howto-dotnet-sdk.md), that mask the inherent complexity of information retrieval. You can also use the Azure portal for service administration and content management, with tools for prototyping and querying your indexes and skillsets. Because the service runs in the cloud, infrastructure and availability are managed by Microsoft.
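As a small illustration of how simple a query request can be, here's a hedged sketch of calling the REST API from PowerShell; the service name, index name, and query key are placeholders:

```powershell
# Simple full text query against an existing index (placeholder values).
$headers = @{ 'api-key' = '<query-api-key>' }
$uri = 'https://<service-name>.search.windows.net/indexes/<index-name>/docs?api-version=2020-06-30&search=wifi'
Invoke-RestMethod -Method Get -Uri $uri -Headers $headers
```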
## Why use Cognitive Search?
search Tutorial Csharp Create First App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/tutorial-csharp-create-first-app.md
ms.devlang: csharp Previously updated : 02/26/2021 Last updated : 07/01/2022
This tutorial shows you how to create a web app that queries and returns results from a search index using Azure Cognitive Search and Visual Studio.
-In this tutorial, you will learn how to:
+In this tutorial, you'll learn how to:
> [!div class="checklist"] > * Set up a development environment
In this tutorial, you will learn how to:
> * Define a search method > * Test the app
-You will also learn how straightforward a search call is. The key statements in the code you will develop are encapsulated in the following few lines.
+You'll also learn how straightforward a search call is. The key statements in the code are encapsulated in the following few lines:
```csharp var options = new SearchOptions()
var searchResult = await _searchClient.SearchAsync<Hotel>(model.searchText, opti
model.resultList = searchResult.Value.GetResults().ToList(); ```
-Just one call queries the index and returns results.
+Just one call queries the search index and returns results.
:::image type="content" source="media/tutorial-csharp-create-first-app/azure-search-pool.png" alt-text="Searching for *pool*" border="true":::
A finished version of the code can be found in the following project:
* [1-basic-search-page (GitHub)](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-first-app/v11/1-basic-search-page)
-This tutorial has been updated to use the Azure.Search.Documents (version 11) package. For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-first-app/v10).
+This tutorial uses the Azure.Search.Documents (version 11) package. For an earlier version of the .NET SDK, see [Microsoft.Azure.Search (version 10) code sample](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/create-first-app/v10).
## Prerequisites * [Create](search-create-service-portal.md) or [find an existing search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Search%2FsearchServices).
-* Create the hotels-sample-index using the instructions in [Quickstart: Create a search index](search-get-started-portal.md).
+* [Sample index (hotels-sample-index)](search-get-started-portal.md), hosted on your search service.
* [Visual Studio](https://visualstudio.microsoft.com/)
-* [Azure Cognitive Search client library (version 11)](https://www.nuget.org/packages/Azure.Search.Documents/)
+* [Azure.Search.Documents client library (version 11)](https://www.nuget.org/packages/Azure.Search.Documents/)
### Install and run the project from GitHub
If you want to jump ahead to a working app, follow the steps below to download a
1. Using Visual Studio, navigate to, and open the solution for the basic search page ("1-basic-search-page"), and select **Start without debugging** (or press F5) to build and run the program.
-1. This is a hotels index, so type in some words that you might use to search for hotels (for example, "wifi", "view", "bar", "parking"), and examine the results.
+1. This is a hotels index, so type in some words that you might use to search for hotels (for example, "wifi", "view", "bar", "parking"). Examine the results.
:::image type="content" source="media/tutorial-csharp-create-first-app/azure-search-wifi.png" alt-text="Searching for *wifi*" border="true":::
-Hopefully this project will run smoothly, and you have Web app running. Many of the essential components for more sophisticated searches are included in this one app, so it is a good idea to go through it, and recreate it step by step. The following sections cover these steps.
+The essential components for more sophisticated searches are included in this one app. If you're new to search development, you can recreate this app step by step to learn the workflow. The following sections show you how.
## Set up a development environment
-To create this project from scratch, and thus reinforce the concepts of Azure Cognitive Search in your mind, start with a Visual Studio project.
+To create this project from scratch, and thus reinforce the concepts of Azure Cognitive Search, start with a Visual Studio project.
-1. In Visual Studio, select **New** > **Project**, then **ASP.NET Core Web Application**.
+1. In Visual Studio, select **New** > **Project**, then **ASP.NET Core Web App (Model-View-Controller)**.
:::image type="content" source="media/tutorial-csharp-create-first-app/azure-search-project1.png" alt-text="Creating a cloud project" border="true":::
-1. Give the project a name such as "FirstSearchApp" and set the location. Select **Create**.
+1. Give the project a name such as "FirstSearchApp" and set the location. Select **Next**.
-1. Choose the **Web Application (Model-View-Controller)** project template.
-
- :::image type="content" source="media/tutorial-csharp-create-first-app/azure-search-project2.png" alt-text="Creating an MVC project" border="true":::
+1. Accept the defaults for target framework, authentication type, and HTTPS. Select **Create**.
1. Install the client library. In **Tools** > **NuGet Package Manager** > **Manage NuGet Packages for Solution...**, select **Browse** and then search for "azure.search.documents". Install **Azure.Search.Documents** (version 11 or later), accepting the license agreements and dependencies.
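   If you prefer the command line to the NuGet Package Manager UI, the same package can be added from a terminal in the project folder:

   ```powershell
   # Equivalent to installing Azure.Search.Documents through the NuGet Package Manager UI.
   dotnet add package Azure.Search.Documents
   ```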
To create this project from scratch, and thus reinforce the concepts of Azure Co
### Initialize Azure Cognitive Search
-For this sample, you are using publicly available hotel data. This data is an arbitrary collection of 50 fictional hotel names and descriptions, created solely for the purpose of providing demo data. To access this data, specify a name and API key.
+In this step, set the endpoint and access key for connecting to the search service that provides the [hotels sample index](search-get-started-portal.md).
1. Open **appsettings.json** and replace the default lines with the search service URL (in the format `https://<service-name>.search.windows.net`) and an [admin or query API key](search-security-api-keys.md) of your search service. Since you don't need to create or update an index, you can use the query key for this tutorial.
- ```csharp
+ ```json
{ "SearchServiceUri": "<YOUR-SEARCH-SERVICE-URI>", "SearchServiceQueryApiKey": "<YOUR-SEARCH-SERVICE-API-KEY>"
For this sample, you are using publicly available hotel data. This data is an ar
## Model data structures
-Models (C# classes) are used to communicate data between the client (the view), the server (the controller), and also the Azure cloud using the MVC (model, view, controller) architecture. Typically, these models will reflect the structure of the data that is being accessed.
+Models (C# classes) are used to communicate data between the client (the view), the server (the controller), and also the Azure cloud using the MVC (model, view, controller) architecture. Typically, these models reflect the structure of the data that is being accessed.
-In this step, you'll model the data structures of the search index, as well as the search string used in view/controller communications. In the hotels index, each hotel has many rooms, and each hotel has a multi-part address. Altogether, the full representation of a hotel is a hierarchical and nested data structure. You will need three classes to create each component.
+In this step, you'll model the data structures of the search index, as well as the search string used in view/controller communications. In the hotels index, each hotel has many rooms, and each hotel has a multi-part address. Altogether, the full representation of a hotel is a hierarchical and nested data structure. You'll need three classes to create each component.
The set of **Hotel**, **Address**, and **Room** classes are known as [*complex types*](search-howto-complex-data-types.md), an important feature of Azure Cognitive Search. Complex types can be many levels deep of classes and subclasses, and enable far more complex data structures to be represented than using *simple types* (a class containing only primitive members). 1. In Solution Explorer, right-click **Models** > **Add** > **New Item**.
-1. Select**Class** and name the item Hotel.cs. Replace all the contents of Hotel.cs with the following code. Notice the **Address** and **Room** members of the class, these fields are classes themselves so you will need models for them too.
+1. Select **Class** and name the item Hotel.cs. Replace all the contents of Hotel.cs with the following code. Notice the **Address** and **Room** members of the class; these fields are classes themselves, so you'll need models for them too.
```csharp using Azure.Search.Documents.Indexes;
The set of **Hotel**, **Address**, and **Room** classes are known as [*complex t
## Create a web page
-Project templates come with a number of client views located in the **Views** folder. The exact views depend on the version of Core .NET you are using (3.1 is used in this sample). For this tutorial, you will modify **Index.cshtml** to include the elements of a search page.
+Project templates come with a number of client views located in the **Views** folder. The exact views depend on the version of .NET Core you're using (3.1 is used in this sample). For this tutorial, you'll modify **Index.cshtml** to include the elements of a search page.
Delete the content of Index.cshtml in its entirety, and rebuild the file in the following steps.
Delete the content of Index.cshtml in its entirety, and rebuild the file in the
@model FirstAzureSearchApp.Models.SearchData ```
-1. It is standard practice to enter a title for the view, so the next lines should be:
+1. It's standard practice to enter a title for the view, so the next lines should be:
```csharp @{
Delete the content of Index.cshtml in its entirety, and rebuild the file in the
} ```
-1. Following the title, enter a reference to an HTML stylesheet, which you will create shortly.
+1. Following the title, enter a reference to an HTML stylesheet, which you'll create shortly.
```csharp <head>
Delete the content of Index.cshtml in its entirety, and rebuild the file in the
1. Add the stylesheet. In Visual Studio, in **File**> **New** > **File**, select **Style Sheet** (with **General** highlighted).
- Replace the default code with the following. We will not be going into this file in any more detail, the styles are standard HTML.
+ Replace the default code with the following. We won't go into this file in any more detail; the styles are standard HTML.
```html textarea.box1 {
In this section, we extend the method to support a second use case: rendering th
Notice the **async** declaration of the method, and the **await** call to **RunQueryAsync**. These keywords take care of making asynchronous calls, and thus avoid blocking threads on the server.
- The **catch** block uses the error model that was created by default.
+ The **catch** block uses the default error model that was created.
### Note the error handling and other default views and methods
-Depending on which version of .NET Core you are using, a slightly different set of default views are created by default. For .NET Core 3.1 the default views are Index, Privacy, and Error. You can view these default pages when running the app, and examine how they are handled in the controller.
+Depending on which version of .NET Core you're using, a slightly different set of default views is created. For .NET Core 3.1 the default views are Index, Privacy, and Error. You can view these default pages when running the app, and examine how they're handled in the controller.
-You will be testing the Error view later on in this tutorial.
+You'll be testing the Error view later on in this tutorial.
In the GitHub sample, unused views and their associated actions are deleted.
The Azure Cognitive Search call is encapsulated in our **RunQueryAsync** method.
} ```
- In this method, first ensure our Azure configuration is initiated, then set some search options. The **Select** option specifies which fields to return in results, and thus match the property names in the **hotel** class. If you omit **Select**, all unhidden fields are returned, which can be inefficient if you are only interested in a subset of all possible fields.
+ In this method, first ensure our Azure configuration is initiated, then set some search options. The **Select** option specifies which fields to return in results; these must match the property names in the **Hotel** class. If you omit **Select**, all unhidden fields are returned, which can be inefficient if you're only interested in a subset of all possible fields.
- The asynchronous call to search formulates the request (modeled as **searchText**) and response (modeled as **searchResult**). If you are debugging this code, the **SearchResult** class is a good candidate for setting a break point if you need to examine the contents of **model.resultList**. You should find that it is intuitive, providing you with the data you asked for, and not much else.
+ The asynchronous call to search formulates the request (modeled as **searchText**) and response (modeled as **searchResult**). If you're debugging this code, the **SearchResult** class is a good candidate for setting a break point if you need to examine the contents of **model.resultList**. You should find that it's intuitive, providing just the data you asked for, and not much else.
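A minimal sketch of such a method is shown below. The `InitSearch` helper, the `_searchClient` field, and the selected field names are assumptions for illustration, not the exact tutorial code:

```csharp
private async Task<ActionResult> RunQueryAsync(SearchData model)
{
    // Assumed helper that reads the service name and key from configuration
    // and creates the SearchClient stored in _searchClient.
    InitSearch();

    var options = new SearchOptions()
    {
        // Return only the fields the view needs; these names must match
        // properties on the Hotel class.
        Select = { "HotelName", "Description" }
    };

    // Send the query and capture the typed results on the model.
    model.resultList = await _searchClient.SearchAsync<Hotel>(model.searchText, options).ConfigureAwait(false);

    return View("Index", model);
}
```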
### Test the app
Now, let's check whether the app runs correctly.
1. Try entering "five star". Notice that this query returns no results. A more sophisticated search would treat "five star" as a synonym for "luxury" and return those results. Support for [synonyms](search-synonyms.md) is available in Azure Cognitive Search, but isn't be covered in this tutorial series.
-1. Try entering "hot" as search text. It does _not_ return entries with the word "hotel" in them. Our search is only locating whole words, though a few results are returned.
+1. Try entering "hot" as search text. It doesn't return entries with the word "hotel" in them. Our search is only locating whole words, though a few results are returned.
-1. Try other words: "pool", "sunshine", "view", and whatever. You will see Azure Cognitive Search working at its simplest, but still convincing level.
+1. Try other words: "pool", "sunshine", "view", and whatever. You'll see Azure Cognitive Search working at its simplest, but still convincing level.
## Test edge conditions and errors
-It is important to verify that our error handling features work as they should, even when things are working perfectly.
+It's important to verify that our error handling features work as they should, even when things are working perfectly.
1. In the **Index** method, after the opening **try {** line, enter the line **throw new Exception();**. This exception will force an error when you search on text.
It is important to verify that our error handling features work as they should,
:::image type="content" source="media/tutorial-csharp-create-first-app/azure-search-error.png" alt-text="Force an error" border="true"::: > [!Important]
- > It is considered a security risk to return internal error numbers in error pages. If your app is intended for general use, do some investigation into secure and best practices of what to return when an error occurs.
+ > It's considered a security risk to return internal error numbers in error pages. If your app is intended for general use, follow security best practices for what to return when an error occurs.
-3. Remove **Throw new Exception()** when you are satisfied the error handling works as it should.
+3. Remove **throw new Exception();** when you're satisfied the error handling works as it should.
## Takeaways Consider the following takeaways from this project:
-* An Azure Cognitive Search call is concise, and it is easy to interpret the results.
-* Asynchronous calls add a small amount of complexity to the controller, but are the best practice if you intend to develop quality apps.
-* This app performed a straightforward text search, defined by what is set up in **searchOptions**. However, this one class can be populated with many members that add sophistication to a search. Not much additional work is needed to make this app considerably more powerful.
+* An Azure Cognitive Search call is concise, and it's easy to interpret the results.
+* Asynchronous calls add a small amount of complexity to the controller, but are a best practice that improves performance.
+* This app performed a straightforward text search, defined by what's set up in **searchOptions**. However, this one class can be populated with many members that add sophistication to a search. With a bit more work, you can make this app considerably more powerful.
## Next steps
-To improve upon the user experience, add more features, notably paging (either using page numbers, or infinite scrolling), and autocomplete/suggestions. You can also consider more sophisticated search options (for example, geographical searches on hotels within a specified radius of a given point, and search results ordering).
+To improve upon the user experience, add more features, notably paging (either using page numbers, or infinite scrolling), and autocomplete/suggestions. You can also consider other search options (for example, geographical searches on hotels within a specified radius of a given point) and search results ordering.
These next steps are addressed in the remaining tutorials. Let's start with paging.
static-web-apps Add Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/add-api.md
There is no need to build the app.
# [Angular](#tab/angular)
-Build the app into the _dist/angular-basic_ folder.
+Install npm dependencies and build the app into the _dist/angular-basic_ folder.
```bash
+npm install
npm run build --prod ``` # [React](#tab/react)
-Build the app into the _build_ folder.
+Install npm dependencies and build the app into the _build_ folder.
```bash
+npm install
npm run build ``` # [Vue](#tab/vue)
-Build the app into the _dist_ folder.
+Install npm dependencies and build the app into the _dist_ folder.
```bash
+npm install
npm run build ```
static-web-apps Apis App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-app-service.md
All Azure App Service hosting plans are available for use with Azure Static Web
[!INCLUDE [APIs overview](../../includes/static-web-apps-apis-overview.md)] > [!NOTE]
-> The integration with Azure API Management is currently in preview and requires the Static Web Apps Standard plan.
+> The integration with Azure App Service is currently in preview and requires the Static Web Apps Standard plan.
> > You cannot link a web app to a Static Web Apps [pull request environment](review-publish-pull-requests.md).
static-web-apps Apis Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-container-apps.md
By default, when a container app is linked to a static web app, the container ap
## Link a container app
-To link a web app as the API backend for a static web app, follow these steps:
+To link a container app as the API backend for a static web app, follow these steps:
1. In the Azure portal, navigate to the static web app.
static-web-apps Apis Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-functions.md
Logs are only available if you add [Application Insights](monitor.md).
## Constraints -- The API route prefix must be `/api`.-- Route rules for API functions only support [redirects](configuration.md#defining-routes) and [securing routes with roles](configuration.md#securing-routes-with-roles).
+In addition to the Static Web Apps API [constraints](apis-overview.md#constraints), the following restrictions are also applicable to Azure Functions APIs:
| Managed functions | Bring your own functions | |||
static-web-apps Apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-overview.md
The following Azure services can be integrated with Azure Static Web Apps:
- **Managed APIs**: By default, Azure Static Web Apps automatically integrates with Azure Functions as an API backend. You deploy an API with your static web app without managing a separate Azure Functions resource. - **Bring your own APIs**: You can integrate your static web app with existing APIs hosted in Azure Functions, API Management, App Service, or Container Apps. You manage and deploy the API resources yourself.
-Each static web app environment can only be configured with one type of backend API at a time.
- > [!NOTE] > Bring your own APIs is only available in the Azure Static Web Apps Standard plan. Built-in, managed Azure Functions APIs are available in all Azure Static Web Apps plans.
+## <a name="constraints"></a>API constraints
+
+The following constraints apply to all API backends:
+
+- Each static web app environment can only be configured with one type of backend API at a time.
+- The API route prefix must be `/api`.
+- Route rules for APIs only support [redirects](configuration.md#defining-routes) and [securing routes with roles](configuration.md#securing-routes-with-roles), as shown in the example after this list.
+- Only HTTP requests are supported for APIs. WebSocket, for example, is not supported.
+- The maximum duration of each API request is 45 seconds.
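For illustration only (the route paths and role name below are hypothetical), a `staticwebapp.config.json` route rule for an API might secure or redirect a route like this:

```json
{
  "routes": [
    {
      "route": "/api/admin/*",
      "allowedRoles": ["administrator"]
    },
    {
      "route": "/api/old-messages",
      "redirect": "/api/messages",
      "statusCode": 301
    }
  ]
}
```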
+ ## Next steps > [!div class="nextstepaction"]
static-web-apps User Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/user-information.md
async function getUserInfo() {
return clientPrincipal; }
-console.log(getUserInfo());
+console.log(await getUserInfo());
``` ## API functions
synapse-analytics Synapse Workspace Access Control Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security/synapse-workspace-access-control-overview.md
To simplify managing access control, you can use security groups to assign roles
Synapse Studio will behave differently based on your permissions and the current mode: - **Synapse live mode:** Synapse Studio will prevent you from seeing published content, publishing content, or taking other actions if you don't have the required permission. In some cases, you'll be prevented from creating code artifacts that you can't use or save. -- **Git-mode:** If you have Git permissions that let you commit changes to the current branch, then the commit action will be permitted if you have permission to publish changes to the live service (Synapse Artifact Publisher role), and the Azure Contributor role on the workspace.
+- **Git-mode:** If you have Git permissions that let you commit changes to the current branch, then the commit action will be permitted if you have permission to publish changes to the live service (Synapse Artifact Publisher role).
In some cases, you're allowed to create code artifacts even without permission to publish or commit. This allows you to execute code (with the required execution permissions). For more information on the roles required for common tasks, see [Understand the roles required to perform common tasks in Azure Synapse](./synapse-workspace-understand-what-role-you-need.md).
synapse-analytics Microsoft Spark Utilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/microsoft-spark-utilities.md
Title: Introduction to Microsoft Spark utilities description: "Tutorial: MSSparkutils in Azure Synapse Analytics notebooks"---+++ Last updated 09/10/2020 -+ zone_pivot_groups: programming-languages-spark-all-minus-sql
Microsoft Spark Utilities (MSSparkUtils) is a builtin package to help you easily
## Pre-requisites
-### Configure access to Azure Data Lake Storage Gen2
+### Configure access to Azure Data Lake Storage Gen2
-Synapse notebooks use Azure Active Directory (Azure AD) pass-through to access the ADLS Gen2 accounts. You need to be a **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
+Synapse notebooks use Azure Active Directory (Azure AD) pass-through to access the ADLS Gen2 accounts. You need to be a **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
Synapse pipelines use the workspace's Managed Service Identity (MSI) to access the storage accounts. To use MSSparkUtils in your pipeline activities, your workspace identity needs to be **Storage Blob Data Contributor** to access the ADLS Gen2 account (or folder).
Follow these steps to make sure your Azure AD and workspace MSI have access to t
1. Select the **Access control (IAM)** from the left panel. 1. Select **Add** > **Add role assignment** to open the Add role assignment page. 1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
-
+ | Setting | Value | | | | | Role | Storage Blob Data Contributor |
Follow these steps to make sure your Azure AD and workspace MSI have access to t
> The managed identity name is also the workspace name. ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
-
+ 1. Select **Save**. You can access data on ADLS Gen2 with Synapse Spark via the following URL: `abfss://<container_name>@<storage_account_name>.dfs.core.windows.net/<path>`
-### Configure access to Azure Blob Storage
+### Configure access to Azure Blob Storage
Synapse uses [**Shared access signature (SAS)**](../../storage/common/storage-sas-overview.md) to access Azure Blob Storage. To avoid exposing SAS keys in the code, we recommend creating a new linked service in the Synapse workspace to the Azure Blob Storage account you want to access.
Follow these steps to add a new linked service for an Azure Blob Storage account
4. Select **Continue**. 5. Select the Azure Blob Storage Account to access and configure the linked service name. We suggest using **Account key** as the **Authentication method**. 6. Select **Test connection** to validate that the settings are correct.
-7. Select **Create** first and click **Publish all** to save your changes.
+7. Select **Create** first and click **Publish all** to save your changes.
You can access data on Azure Blob Storage with Synapse Spark via following URL:
Console.WriteLine(wasbs_path);
```
-
+ ### Configure access to Azure Key Vault
-You can add an Azure Key Vault as a linked service to manage your credentials in Synapse.
+You can add an Azure Key Vault as a linked service to manage your credentials in Synapse.
Follow these steps to add an Azure Key Vault as a Synapse linked service: 1. Open the [Azure Synapse Studio](https://web.azuresynapse.net/). 2. Select **Manage** from the left panel and select **Linked services** under the **External connections**. 3. Search **Azure Key Vault** in the **New linked Service** panel on the right. 4. Select the Azure Key Vault Account to access and configure the linked service name. 5. Select **Test connection** to validate the settings are correct.
-6. Select **Create** first and click **Publish all** to save your change.
+6. Select **Create** first and click **Publish all** to save your change.
Synapse notebooks use Azure Active Directory (Azure AD) pass-through to access Azure Key Vault. Synapse pipelines use the workspace identity (MSI) to access Azure Key Vault. To make sure your code works both in notebooks and in Synapse pipelines, we recommend granting secret access permission to both your Azure AD account and the workspace identity. Follow these steps to grant secret access to your workspace identity:
-1. Open the [Azure portal](https://portal.azure.com/) and the Azure Key Vault you want to access.
+1. Open the [Azure portal](https://portal.azure.com/) and the Azure Key Vault you want to access.
2. Select the **Access policies** from the left panel.
-3. Select **Add Access Policy**:
+3. Select **Add Access Policy**:
- Choose **Key, Secret, & Certificate Management** as config template.
- - Select **your Azure AD account** and **your workspace identity** (same as your workspace name) in the select principal or make sure it is already assigned.
+ - Select **your Azure AD account** and **your workspace identity** (same as your workspace name) in the select principal or make sure it is already assigned.
4. Select **Select** and **Add**.
-5. Select the **Save** button to commit changes.
+5. Select the **Save** button to commit changes.
## File system utilities
Removes a file or a directory.
:::zone pivot = "programming-language-python" ```python
-mssparkutils.fs.rm('file path', True) # Set the last parameter as True to remove all files and directories recursively
+mssparkutils.fs.rm('file path', True) # Set the last parameter as True to remove all files and directories recursively
``` ::: zone-end :::zone pivot = "programming-language-scala" ```scala
-mssparkutils.fs.rm("file path", true) // Set the last parameter as True to remove all files and directories recursively
+mssparkutils.fs.rm("file path", true) // Set the last parameter as True to remove all files and directories recursively
``` ::: zone-end
mssparkutils.fs.rm("file path", true) // Set the last parameter as True to remov
:::zone pivot = "programming-language-csharp" ```csharp
-FS.Rm("file path", true) // Set the last parameter as True to remove all files and directories recursively
+FS.Rm("file path", true) // Set the last parameter as True to remove all files and directories recursively
``` ::: zone-end
-## Notebook utilities
+## Notebook utilities
:::zone pivot = "programming-language-csharp"
Not supported.
:::zone pivot = "programming-language-python"
-You can use the MSSparkUtils Notebook Utilities to run a notebook or exit a notebook with a value.
+You can use the MSSparkUtils Notebook Utilities to run a notebook or exit a notebook with a value.
Run the following command to get an overview of the available methods: ```python
run(path: String, timeoutSeconds: int, arguments: Map): String -> This method ru
``` ### Reference a notebook
-Reference a notebook and returns its exit value. You can run nesting function calls in a notebook interactively or in a pipeline. The notebook being referenced will run on the Spark pool of which notebook calls this function.
+References a notebook and returns its exit value. You can run nested function calls in a notebook interactively or in a pipeline. The referenced notebook runs on the Spark pool of the notebook that calls this function.
```python
After the run finished, you will see a snapshot link named '**View notebook run:
![Screenshot of a snap link python](./media/microsoft-spark-utilities/spark-utilities-run-notebook-snap-link-sample-python.png) ### Exit a notebook
-Exits a notebook with a value. You can run nesting function calls in a notebook interactively or in a pipeline.
+Exits a notebook with a value. You can run nesting function calls in a notebook interactively or in a pipeline.
- When you call an `exit()` function in a notebook interactively, Azure Synapse will throw an exception, skip running subsequent cells, and keep the Spark session alive. -- When you orchestrate a notebook that calls an `exit()` function in a Synapse pipeline, Azure Synapse will return an exit value, complete the pipeline run, and stop the Spark session.
+- When you orchestrate a notebook that calls an `exit()` function in a Synapse pipeline, Azure Synapse will return an exit value, complete the pipeline run, and stop the Spark session.
-- When you call an `exit()` function in a notebook being referenced, Azure Synapse will stop the further execution in the notebook being referenced, and continue to run next cells in the notebook that call the `run()` function. For example: Notebook1 has three cells and calls an `exit()` function in the second cell. Notebook2 has five cells and calls `run(notebook1)` in the third cell. When you run Notebook2, Notebook1 will be stopped at the second cell when hitting the `exit()` function. Notebook2 will continue to run its fourth cell and fifth cell.
+- When you call an `exit()` function in a referenced notebook, Azure Synapse will stop further execution of the referenced notebook and continue to run the next cells in the notebook that calls the `run()` function. For example: Notebook1 has three cells and calls an `exit()` function in the second cell. Notebook2 has five cells and calls `run(notebook1)` in the third cell. When you run Notebook2, Notebook1 will stop at the second cell when it hits the `exit()` function. Notebook2 will continue to run its fourth cell and fifth cell.
```python
mssparkutils.notebook.exit("value string")
For example:
-**Sample1** notebook locates under **folder/** with following two cells:
+The **Sample1** notebook is located under **folder/** and contains the following two cells:
- cell 1 defines an **input** parameter with default value set to 10.-- cell 2 exits the notebook with **input** as exit value.
+- cell 2 exits the notebook with **input** as exit value.
![Screenshot of a sample notebook](./media/microsoft-spark-utilities/spark-utilities-run-notebook-sample.png)
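For example, calling **Sample1** from another notebook and setting **input** to 20 might look like the following sketch (the folder path and argument name mirror the sample above):

```python
exit_val = mssparkutils.notebook.run("folder/Sample1", 90, {"input": 20})
print(exit_val)
```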
Sample1 run success with input is 20
:::zone pivot = "programming-language-scala"
-You can use the MSSparkUtils Notebook Utilities to run a notebook or exit a notebook with a value.
+You can use the MSSparkUtils Notebook Utilities to run a notebook or exit a notebook with a value.
Run the following command to get an overview of the available methods: ```scala
run(path: String, timeoutSeconds: int, arguments: Map): String -> This method ru
``` ### Reference a notebook
-Reference a notebook and returns its exit value. You can run nesting function calls in a notebook interactively or in a pipeline. The notebook being referenced will run on the Spark pool of which notebook calls this function.
+References a notebook and returns its exit value. You can run nested function calls in a notebook interactively or in a pipeline. The referenced notebook runs on the Spark pool of the notebook that calls this function.
```scala
After the run finished, you will see a snapshot link named '**View notebook run:
### Exit a notebook
-Exits a notebook with a value. You can run nesting function calls in a notebook interactively or in a pipeline.
+Exits a notebook with a value. You can run nesting function calls in a notebook interactively or in a pipeline.
- When you call an `exit()` function in a notebook interactively, Azure Synapse will throw an exception, skip running subsequent cells, and keep the Spark session alive. -- When you orchestrate a notebook that calls an `exit()` function in a Synapse pipeline, Azure Synapse will return an exit value, complete the pipeline run, and stop the Spark session.
+- When you orchestrate a notebook that calls an `exit()` function in a Synapse pipeline, Azure Synapse will return an exit value, complete the pipeline run, and stop the Spark session.
-- When you call an `exit()` function in a notebook being referenced, Azure Synapse will stop the further execution in the notebook being referenced, and continue to run next cells in the notebook that call the `run()` function. For example: Notebook1 has three cells and calls an `exit()` function in the second cell. Notebook2 has five cells and calls `run(notebook1)` in the third cell. When you run Notebook2, Notebook1 will be stopped at the second cell when hitting the `exit()` function. Notebook2 will continue to run its fourth cell and fifth cell.
+- When you call an `exit()` function in a referenced notebook, Azure Synapse will stop further execution of the referenced notebook and continue to run the next cells in the notebook that calls the `run()` function. For example: Notebook1 has three cells and calls an `exit()` function in the second cell. Notebook2 has five cells and calls `run(notebook1)` in the third cell. When you run Notebook2, Notebook1 will stop at the second cell when it hits the `exit()` function. Notebook2 will continue to run its fourth cell and fifth cell.
```python
mssparkutils.notebook.exit("value string")
For example:
-**Sample1** notebook locates under **mssparkutils/folder/** with following two cells:
+The **Sample1** notebook is located under **mssparkutils/folder/** and contains the following two cells:
- cell 1 defines an **input** parameter with default value set to 10.-- cell 2 exits the notebook with **input** as exit value.
+- cell 2 exits the notebook with **input** as exit value.
![Screenshot of a sample notebook](./media/microsoft-spark-utilities/spark-utilities-run-notebook-sample.png)
Sample1 run success with input is 20
## Credentials utilities
-You can use the MSSparkUtils Credentials Utilities to get the access tokens of linked services and manage secrets in Azure Key Vault.
+You can use the MSSparkUtils Credentials Utilities to get the access tokens of linked services and manage secrets in Azure Key Vault.
Run the following command to get an overview of the available methods:
putSecret(akvName, secretName, secretValue): puts AKV secret for a given akvName
### Get token
-Returns Azure AD token for a given audience, name (optional). The table below list all the available audience types:
+Returns the Azure AD token for a given audience and name (optional). The table below lists all the available audience types:
|Audience Type|Audience key| |--|--|
Credentials.IsValidToken("your token")
### Get connection string or credentials for linked service
-Returns connection string or credentials for linked service.
+Returns the connection string or credentials for a linked service.
:::zone pivot = "programming-language-python"
Credentials.GetSecret("azure key vault name","secret name","linked service name"
### Get secret using user credentials
-Returns Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using user credentials.
+Returns Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using user credentials.
:::zone pivot = "programming-language-python"
Puts Azure Key Vault secret for a given Azure Key Vault name, secret name, and l
### Put secret using user credentials
-Puts Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using user credentials.
+Puts Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using user credentials.
```python mssparkutils.credentials.putSecret('azure key vault name','secret name','secret value')
mssparkutils.credentials.putSecret('azure key vault name','secret name','secret
### Put secret using user credentials
-Puts Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using user credentials.
+Puts Azure Key Vault secret for a given Azure Key Vault name, secret name, and linked service name using user credentials.
```scala mssparkutils.credentials.putSecret("azure key vault name","secret name","secret value")
mssparkutils.credentials.putSecret("azure key vault name","secret name","secret
::: zone-end -->
-## Environment utilities
+## Environment utilities
Run following commands to get an overview of the available methods:
mssparkutils.runtime.context
``` ::: zone-end
+## Session management
+
+### Stop an interactive session
+
+Instead of manually clicking the stop button, sometimes it's more convenient to stop an interactive session by calling an API in your code. For such cases, we provide the `mssparkutils.session.stop()` API to stop the interactive session via code. It's available for Scala and Python.
++
+```python
+mssparkutils.session.stop()
+```
++
+```scala
+mssparkutils.session.stop()
+```
+
+The `mssparkutils.session.stop()` API stops the current interactive session asynchronously in the background. It stops the Spark session and releases the resources occupied by the session so they're available to other sessions in the same pool.
+
+> [!NOTE]
+> We don't recommend calling language built-in APIs like `sys.exit` in Scala or `sys.exit()` in Python in your code, because such APIs just
+> kill the interpreter process, leaving the Spark session alive and its resources not released.
+ ## Next steps - [Check out Synapse sample notebooks](https://github.com/Azure-Samples/Synapse/tree/master/Notebooks)
mssparkutils.runtime.context
- [What is Apache Spark in Azure Synapse Analytics](apache-spark-overview.md) - [Azure Synapse Analytics](../index.yml) - [How to use file mount/unmount API in Synapse](./synapse-file-mount-api.md)-
synapse-analytics Create Use External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/create-use-external-tables.md
CREATE EXTERNAL TABLE Covid (
External tables cannot be created on a partitioned folder. Review the other known issues on [Synapse serverless SQL pool self-help page](resources-self-help-sql-on-demand.md#delta-lake).
+### Delta tables on partitioned folders
+
+External tables in serverless SQL pools do not support partitioning on Delta Lake format. Use [Delta partitioned views](create-use-views.md#delta-lake-partitioned-views) instead of tables if you have partitioned Delta Lake data sets.
+
+> [!IMPORTANT]
+> Do not create external tables on partitioned Delta Lake folders even if you see that they might work in some cases. Using unsupported features like external tables on partitioned Delta folders might cause issues or instability of the serverless pool. Azure support will not be able to resolve any issue that involves tables on partitioned folders. You would be asked to transition to [Delta partitioned views](create-use-views.md#delta-lake-partitioned-views) and rewrite your code to use only the supported feature before proceeding with issue resolution.
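As a sketch of the recommended alternative (the view name, data source, and folder path below are illustrative assumptions), a partitioned Delta Lake folder can be exposed through a serverless SQL pool view instead of an external table:

```sql
-- 'DeltaLakeStorage' is assumed to be an existing external data source and
-- 'covid' the root folder of the partitioned Delta Lake table.
CREATE VIEW dbo.CovidDeltaView
AS SELECT *
FROM OPENROWSET(
        BULK 'covid',
        DATA_SOURCE = 'DeltaLakeStorage',
        FORMAT = 'DELTA'
    ) AS rows;
```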
+ ## Use an external table You can use [external tables](develop-tables-external-tables.md) in your queries the same way you use them in SQL Server queries.
synapse-analytics Develop Tables External Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/develop-tables-external-tables.md
The key differences between Hadoop and native external tables are presented in t
| Dedicated SQL pool | Available | Only Parquet tables are available in **public preview**. | | Serverless SQL pool | Not available | Available | | Supported formats | Delimited/CSV, Parquet, ORC, Hive RC, and RC | Serverless SQL pool: Delimited/CSV, Parquet, and [Delta Lake](query-delta-lake-format.md)<br/>Dedicated SQL pool: Parquet (preview) |
-| [Folder partition elimination](#folder-partition-elimination) | No | Only for partitioned tables synchronized from Apache Spark pools in Synapse workspace to serverless SQL pools |
+| [Folder partition elimination](#folder-partition-elimination) | No | Partition elimination is available only in the partitioned tables created on Parquet or CSV formats that are synchronized from Apache Spark pools. You might create external tables on Parquet partitioned folders, but the partitioning columns will be inaccessible and ignored, while the partition elimination will not be applied. Do not create [external tables on Delta Lake folders](create-use-external-tables.md#delta-tables-on-partitioned-folders) because they are not supported. Use [Delta partitioned views](create-use-views.md#delta-lake-partitioned-views) if you need to query partitioned Delta Lake data. |
| [File elimination](#file-elimination) (predicate pushdown) | No | Yes in serverless SQL pool. For the string pushdown, you need to use `Latin1_General_100_BIN2_UTF8` collation on the `VARCHAR` columns to enable pushdown. |
-| Custom format for location | No | Yes, using wildcards like `/year=*/month=*/day=*`. In the serverless SQL pol you can also use recursive wildcards `/logs/**`. |
+| Custom format for location | No | Yes, using wildcards like `/year=*/month=*/day=*` for Parquet or CSV formats. Custom folder paths are not available in Delta Lake. In the serverless SQL pool you can also use recursive wildcards `/logs/**` to reference Parquet or CSV files in any sub-folder beneath the referenced folder. |
| Recursive folder scan | Yes | Yes. In serverless SQL pools, `/**` must be specified at the end of the location path. In the Dedicated pool, folders are always scanned recursively. | Storage authentication | Storage Access Key(SAK), AAD passthrough, Managed identity, Custom application Azure AD identity | [Shared Access Signature(SAS)](develop-storage-files-storage-access-control.md?tabs=shared-access-signature), [AAD passthrough](develop-storage-files-storage-access-control.md?tabs=user-identity), [Managed identity](develop-storage-files-storage-access-control.md?tabs=managed-identity), [Custom application Azure AD identity](develop-storage-files-storage-access-control.md?tabs=service-principal). | Column mapping | Ordinal - the columns in the external table definition are mapped to the columns in the underlying Parquet files by position. | Serverless pool: by name. The columns in the external table definition are mapped to the columns in the underlying Parquet files by column name matching. <br/> Dedicated pool: ordinal matching. The columns in the external table definition are mapped to the columns in the underlying Parquet files by position.|
synapse-analytics Synapse Link For Sql Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/synapse-link/synapse-link-for-sql-known-issues.md
This is the list of known limitations for Azure Synapse Link for SQL.
* When enabling Azure Synapse Link for SQL on your Azure SQL Database, you should ensure that aggressive log truncation is disabled. ### SQL Server 2022 only
-* When creating SQL Server linked service, choose SQL Authentication.
* Azure Synapse Link for SQL works with SQL Server on Linux, but HA scenarios with Linux Pacemaker aren't supported. A self-hosted IR can't be installed in a Linux environment. * Azure Synapse Link for SQL can't be enabled on databases that are transactional replication publishers or distributors. * If the SAS key of the landing zone expires and gets rotated during the snapshot process, the new key won't get picked up. The snapshot will fail and restart automatically with the new key.
virtual-desktop Create Fslogix Profile Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-fslogix-profile-container.md
Title: FSLogix profile containers NetApp Azure Virtual Desktop - Azure
description: How to create an FSLogix profile container using Azure NetApp Files in Azure Virtual Desktop. Previously updated : 06/09/2020 Last updated : 07/01/2020
After you create the volume, configure the volume access parameters.
2. Under Configuration in the **Active Directory** drop-down menu, select the same directory that you originally connected in [Join an Active Directory connection](create-fslogix-profile-container.md#join-an-active-directory-connection). Keep in mind that there's a limit of one Active Directory per subscription. 3. In the **Share name** text box, enter the name of the share used by the session host pool and its users.
- If you want to enable Continuous Availability for the SMB volume, select **Enable Continuous Availability**.
-
- The SMB Continuous Availability feature is currently in public preview. You need to submit a waitlist request for accessing the feature through the **[Azure NetApp Files SMB Continuous Availability Shares Public Preview waitlist submission page](https://aka.ms/anfsmbcasharespreviewsignup)**. Wait for an official confirmation email from the Azure NetApp Files team before using the Continuous Availability feature.
-
- Using SMB Continuous Availability shares is only supported for workloads using:
- * Citrix App Layering
- * FSLogix user profile containers
- * Microsoft SQL Server (not Linux SQL Server)
-
- If you are using a non-administrator (domain) account to install SQL Server, ensure that the account has the required security privilege assigned. If the domain account does not have the required security privilege (`SeSecurityPrivilege`), and the privilege cannot be set at the domain level, you can grant the privilege to the account by using the *Security privilege users* field of Active Directory connections. See [Create an Active Directory connection](../azure-netapp-files/create-active-directory-connections.md).
+ It's recommended that you enable Continuous Availability on the SMB volume for use with FSLogix profile containers, so select **Enable Continuous Availability**. For more information, see [Enable Continuous Availability on existing SMB volumes](../azure-netapp-files/enable-continuous-availability-existing-smb.md).
4. Select **Review + create** at the bottom of the page. This opens the validation page. After your volume is validated successfully, select **Create**.
virtual-desktop Whats New Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-agent.md
Title: What's new in the Azure Virtual Desktop Agent? - Azure
description: New features and product updates for the Azure Virtual Desktop Agent. Previously updated : 03/28/2022 Last updated : 06/29/2022
The Azure Virtual Desktop Agent updates regularly. This article is where you'll
Make sure to check back here often to keep up with new updates.
+## Version 1.0.4574.1600
+
+This update was released in June 2022 and includes the following changes:
+
+- Fixed broker URL cache to address Agent Telemetry calls.
+- Fixed some network-related issues.
+- Created two new mechanisms to trigger health checks.
+- Additional general bug fixes and agent upgrades.
+ ## Version 1.0.4230.1600 This update was released in March 2022 and includes the following changes:
virtual-machine-scale-sets Disk Encryption Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md
Then follow these steps:
2. Fill in the required fields then agree to the terms and conditions. 3. Click **Purchase** to deploy the template.
+> [!NOTE]
+> Virtual machine scale set encryption is supported with API version `2017-03-30` onwards. If you are using templates to enable scale set encryption, update the API version for virtual machine scale sets and the ADE extension inside the template. See this [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/201-encrypt-running-vmss-windows/azuredeploy.json) for more information.
+ ## Next steps - [Azure Disk Encryption for virtual machine scale sets](disk-encryption-overview.md)
virtual-machines Extensions Rmpolicy Howto Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/extensions-rmpolicy-howto-cli.md
Previously updated : 03/23/2018 Last updated : 07/01/2022 # Use Azure Policy to restrict extensions installation on Linux VMs
-If you want to prevent the use or installation of certain extensions on your Linux VMs, you can create an Azure Policy definition using the CLI to restrict extensions for VMs within a resource group.
+If you want to prevent the installation of certain extensions on your Linux VMs, you can create an Azure Policy definition using the Azure CLI to restrict extensions for VMs within a resource group. To learn the basics of Azure VM extensions for Linux, see [Virtual machine extensions and features for Linux](/azure/virtual-machines/extensions/features-linux).
-This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. If you want to run the Azure CLI locally, you need to install version 2.0.26 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
+This tutorial uses the CLI within the Azure Cloud Shell, which is constantly updated to the latest version. If you want to run the Azure CLI locally, you need to install version 2.0.26 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
## Create a rules file
-In order to restrict what extensions can be installed, you need to have a [rule](../../governance/policy/concepts/definition-structure.md#policy-rule) to provide the logic to identify the extension.
+In order to restrict what extensions are available, you need to create a [rule](../../governance/policy/concepts/definition-structure.md#policy-rule) to identify the extension.
-This example shows you how to deny installing extensions published by 'Microsoft.OSTCExtensions' by creating a rules file in Azure Cloud Shell, but if you are working in CLI locally, you can also create a local file and replace the path (~/clouddrive) with the path to the local file on your machine.
+This example demonstrates how to deny the installation of disallowed VM extensions by defining a rules file in Azure Cloud Shell. However, if you're working in Azure CLI locally, you can create a local file and replace the path (~/clouddrive) with the path to the file on your local file system.
In a [bash Cloud Shell](https://shell.azure.com/bash), type:
In a [bash Cloud Shell](https://shell.azure.com/bash), type:
vim ~/clouddrive/azurepolicy.rules.json ```
-Copy and paste the following .json into the file.
+Copy and paste the following `.json` data into the file.
```json {
Copy and paste the following .json into the file.
"allOf": [ { "field": "type",
- "equals": "Microsoft.OSTCExtensions/virtualMachines/extensions"
+ "equals": "Microsoft.Compute/virtualMachines/extensions"
}, {
- "field": "Microsoft.OSTCExtensions/virtualMachines/extensions/publisher",
- "equals": "Microsoft.OSTCExtensions"
+ "field": "Microsoft.Compute/virtualMachines/extensions/publisher",
+ "equals": "Microsoft.Compute"
}, {
- "field": "Microsoft.OSTCExtensions/virtualMachines/extensions/type",
+ "field": "Microsoft.Compute/virtualMachines/extensions/type",
"in": "[parameters('notAllowedExtensions')]" } ]
Copy and paste the following .json into the file.
} ```
-When you are done, hit the **Esc** key and then type **:wq** to save and close the file.
-
+When you're finished, press **Esc**, and then type **:wq** to save and close the file.
## Create a parameters file
-You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the extensions to block.
-
-This example shows you how to create a parameters file for Linux VMs in Cloud Shell, but if you are working in CLI locally, you can also create a local file and replace the path (~/clouddrive) with the path to the local file on your machine.
+You also need a [parameters](../../governance/policy/concepts/definition-structure.md#parameters) file that creates a structure for you to use for passing in a list of the unauthorized extensions.
-In the [bash Cloud Shell](https://shell.azure.com/bash), type:
-
-```bash
-vim ~/clouddrive/azurepolicy.parameters.json
+This example shows you how to create a parameter file for Linux VMs in Cloud Shell.
```
-Copy and paste the following .json into the file.
+Copy and paste the following `.json` data into the file.
```json {
Copy and paste the following .json into the file.
} ```
-When you are done, hit the **Esc** key and then type **:wq** to save and close the file.
+When you're finished, press **Esc**, and then type **:wq** to save and close the file.
## Create the policy
-A policy definition is an object used to store the configuration that you would like to use. The policy definition uses the rules and parameters files to define the policy. Create the policy definition using [az policy definition create](/cli/azure/role/assignment).
+A _policy definition_ is an object used to store the configuration that you would like to use. The policy definition uses the rules and parameters files to define the policy. Create the policy definition using [az policy definition create](/cli/azure/role/assignment).
-In this example, the rules and parameters are the files you created and stored as .json files in your cloud shell.
+In this example, the rules and parameters are the files you created and stored as .json files in Cloud Shell or in your local file system.
```azurecli-interactive az policy definition create \
az policy definition create \
--mode All ``` - ## Assign the policy
-This example assigns the policy to a resource group using [az policy assignment create](/cli/azure/policy/assignment). Any VM created in the **myResourceGroup** resource group will not be able to install the Linux VM Access or the Custom Script extensions for Linux. The resource group must exist before you can assign the policy.
+This example assigns the policy to a resource group using [`az policy assignment create`](/cli/azure/policy/assignment). Any VM created in the **myResourceGroup** resource group will be unable to install the Linux VM Access or the Custom Script Extensions for Linux.
-Use [az account list](/cli/azure/account) to get your subscription ID to use in place of the one in the example.
+> [!NOTE]
+> The resource group must exist before you can assign the policy.
+Use [`az account list`](/cli/azure/account) to find your subscription ID and replace the placeholder in the following example:
```azurecli-interactive az policy assignment create \
az policy assignment create \
## Test the policy
-Test the policy by creating a new VM and trying to add a new user.
-
+Test the policy by creating a new VM and adding a new user.
```azurecli-interactive az vm create \
az vm user update \
--password 'mynewuserpwd123!' ``` -- ## Remove the assignment ```azurecli-interactive
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/build-image-with-packer.md
You use the output from these two commands in the next step.
## Define Packer template To build images, you create a template as a JSON file. In the template, you define builders and provisioners that carry out the actual build process. Packer has a [provisioner for Azure](https://www.packer.io/docs/builders/azure.html) that allows you to define Azure resources, such as the service principal credentials created in the preceding step.
-Create a file named *ubuntu.json* and paste the following content. Enter your own values for the following:
+Create a file named *ubuntu.json* and paste the following content. Enter your own values for the following parameters:
| Parameter | Where to obtain | |-|-|
Create a file named *ubuntu.json* and paste the following content. Enter your ow
}] } ```
+You can also create a file named *ubuntu.pkr.hcl* and paste the following content, substituting your own values as described in the parameters table above.
+
+```HCL
+source "azure-arm" "autogenerated_1" {
+ azure_tags = {
+ dept = "Engineering"
+ task = "Image deployment"
+ }
+ client_id = "f5b6a5cf-fbdf-4a9f-b3b8-3c2cd00225a4"
+ client_secret = "0e760437-bf34-4aad-9f8d-870be799c55d"
+ image_offer = "UbuntuServer"
+ image_publisher = "Canonical"
+ image_sku = "16.04-LTS"
+ location = "East US"
+ managed_image_name = "myPackerImage"
+ managed_image_resource_group_name = "myResourceGroup"
+ os_type = "Linux"
+ subscription_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
+ tenant_id = "72f988bf-86f1-41af-91ab-2d7cd011db47"
+ vm_size = "Standard_DS2_v2"
+}
+
+build {
+ sources = ["source.azure-arm.autogenerated_1"]
+
+ provisioner "shell" {
+ execute_command = "chmod +x {{ .Path }}; {{ .Vars }} sudo -E sh '{{ .Path }}'"
+ inline = ["apt-get update", "apt-get upgrade -y", "apt-get -y install nginx", "/usr/sbin/waagent -force -deprovision+user && export HISTSIZE=0 && sync"]
+ inline_shebang = "/bin/sh -x"
+ }
+
+}
+```
+ This template builds an Ubuntu 16.04 LTS image, installs NGINX, then deprovisions the VM.
Build the image by specifying your Packer template file as follows:
./packer build ubuntu.json ```
+You can also build the image by specifying the *ubuntu.pkr.hcl* file as follows:
+
+```bash
+packer build ubuntu.pkr.hcl
+```
+ An example of the output from the preceding commands is as follows: ```output
It takes a few minutes for Packer to build the VM, run the provisioners, and cle
## Create VM from Azure Image
-You can now create a VM from your Image with [az vm create](/cli/azure/vm). Specify the Image you created with the `--image` parameter. The following example creates a VM named *myVM* from *myPackerImage* and generates SSH keys if they do not already exist:
+You can now create a VM from your Image with [az vm create](/cli/azure/vm). Specify the Image you created with the `--image` parameter. The following example creates a VM named *myVM* from *myPackerImage* and generates SSH keys if they don't already exist:
```azurecli az vm create \
virtual-machines Trusted Launch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/trusted-launch.md
No additional cost to existing VM pricing.
- Azure Site Recovery - Shared disk - Ultra disk
+- Managed image
- Azure Dedicated Host - Nested Virtualization
virtual-machines Build Image With Packer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/build-image-with-packer.md
Packer authenticates with Azure using a service principal. An Azure service prin
Create a service principal with [New-AzADServicePrincipal](/powershell/module/az.resources/new-azadserviceprincipal). The value for `-DisplayName` needs to be unique; replace with your own value as needed. ```azurepowershell
-$sp = New-AzADServicePrincipal -DisplayName "PackerSP$(Get-Random)"
+$sp = New-AzADServicePrincipal -DisplayName "PackerPrincipal" -role Contributor -scope /subscriptions/yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyy
$plainPassword = (New-AzADSpCredential -ObjectId $sp.Id).SecretText ```
Create a file named *windows.json* and paste the following content. Enter your o
}] } ```
+You can also create a file named *windows.pkr.hcl* and paste the following content, substituting your own values as described in the parameters table above.
+
+```HCL
+source "azure-arm" "autogenerated_1" {
+ azure_tags = {
+ dept = "Engineering"
+ task = "Image deployment"
+ }
+ build_resource_group_name = "myPackerGroup"
+ client_id = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
+ client_secret = "ppppppp-pppp-pppp-pppp-ppppppppppp"
+ communicator = "winrm"
+ image_offer = "WindowsServer"
+ image_publisher = "MicrosoftWindowsServer"
+ image_sku = "2016-Datacenter"
+ managed_image_name = "myPackerImage"
+ managed_image_resource_group_name = "myPackerGroup"
+ os_type = "Windows"
+ subscription_id = "yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyy"
+ tenant_id = "zzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz"
+ vm_size = "Standard_D2_v2"
+ winrm_insecure = true
+ winrm_timeout = "5m"
+ winrm_use_ssl = true
+ winrm_username = "packer"
+}
+
+build {
+ sources = ["source.azure-arm.autogenerated_1"]
+
+ provisioner "powershell" {
+ inline = ["Add-WindowsFeature Web-Server", "while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }", "while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }", "& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit", "while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }"]
+ }
+
+}
+```
This template builds a Windows Server 2016 VM, installs IIS, then generalizes the VM with Sysprep. The IIS install shows how you can use the PowerShell provisioner to run additional commands. The final Packer image then includes the required software install and configuration.
Build the image by opening a cmd prompt and specifying your Packer template file
``` ./packer build windows.json ```
+You can also build the image by specifying the *windows.pkr.hcl* file as follows:
+
+```
+packer build windows.pkr.hcl
+```
An example of the output from the preceding commands is as follows:
virtual-machines Automation Configure Workload Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/automation-configure-workload-zone.md
ANF_service_level = "Ultra"
The table below contains the Terraform parameters. These parameters need to be entered manually if not using the deployment scripts.
-| Variable | Type | Description |
+| Variable | Description | Type |
| -- | - | - |
-| `tfstate_resource_id` | Required * | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files |
-| `deployer_tfstate_key` | Required * | The name of the state file for the Deployer |
+| `tfstate_resource_id` | Azure resource identifier for the Storage account in the SAP Library that will contain the Terraform state files | Required |
+| `deployer_tfstate_key` | The name of the state file for the Deployer | Required |
## Next Step
virtual-wan Global Hub Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md
description: Learn how Azure Virtual WAN offers two types of connectivity for re
Previously updated : 06/29/2022 Last updated : 06/30/2022
To download the global profile:
:::image type="content" source="./media/global-hub-profile/global.png" alt-text="Screenshot that shows selections for downloading a global profile." lightbox="./media/global-hub-profile/global.png":::
+1. On the download page, select **EAPTLS**, then **Generate and download profile**. A profile package (zip file) containing the client configuration settings is generated and downloads to your computer. The contents of the package depend on the authentication and tunnel type choices for your configuration.
+ ### Include or exclude a hub from a global profile By default, every hub that uses a specific User VPN configuration is included in the corresponding global VPN profile. You can choose to exclude a hub from the global VPN profile. If you do, a user won't be load balanced to connect to that hub's gateway if they're using the global VPN profile.
To include or exclude a specific hub from the global VPN profile:
The profile points to a single hub. The user can connect to only the particular hub by using this profile. To download the hub-based profile:
-1. Go to the virtual hub.
+1. Go to the **virtual hub**.
1. In the left pane, select **User VPN (Point to site)**. 1. Select **Download virtual Hub User VPN profile**. :::image type="content" source="./media/global-hub-profile/hub-profile.png" alt-text="Screenshot that shows how to download a hub profile." lightbox="./media/global-hub-profile/hub-profile.png":::
-1. On the **Download virtual WAN user VPN**, select **EAPTLS** as the authentication type.
-1. Select **Generate and download profile**.
+1. On the download page, select **EAPTLS**, then **Generate and download profile**. A profile package (zip file) containing the client configuration settings is generated and downloads to your computer. The contents of the package depend on the authentication and tunnel type choices for your configuration.
## Next steps
virtual-wan Virtual Wan Point To Site Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-portal.md
In this tutorial, you learn how to:
## <a name="p2sconfig"></a>Create a User VPN configuration
-The User VPN (P2S) configuration defines the parameters for remote clients to connect. The instructions you follow depend on the authentication method you want to use.
+The User VPN (P2S) configuration defines the parameters for remote clients to connect. You create User VPN configurations before you create the P2S gateway in the hub. You can create multiple User VPN configurations. When you create the P2S gateway, you select the User VPN configuration that you want to use.
-In the following steps, when selecting the authentication method, you have three choices. Each method has specific requirements. Select one of the following methods, and then complete the steps.
+The instructions you follow depend on the authentication method you want to use. For this exercise, we select **OpenVpn and IKEv2** and certificate authentication. However, other configurations are available. Each authentication method has specific requirements.
* **Azure certificates:** For this configuration, certificates are required. You need to either generate or obtain certificates. A client certificate is required for each client. Additionally, the root certificate information (public key) needs to be uploaded. For more information about the required certificates, see [Generate and export certificates](certificates-point-to-site.md).
In the following steps, when selecting the authentication method, you have three
## <a name="download"></a>Generate client configuration files
-When you connect to VNet using User VPN (P2S), you use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you generate and download the files used to configure your VPN clients.
+When you connect to a VNet using User VPN (P2S), you can use the VPN client that is natively installed on the operating system from which you're connecting. All of the necessary configuration settings for the VPN clients are contained in a VPN client configuration zip file. The settings in the zip file help you easily configure the VPN clients. The VPN client configuration files that you generate are specific to the User VPN configuration for your gateway. In this section, you generate and download the files used to configure your VPN clients.
+
+There are two different types of configuration profiles that you can download: global and hub. The global profile is a WAN-level configuration profile. When you download the WAN-level configuration profile, you get a built-in Traffic Manager-based User VPN profile. When you use a global profile, if for some reason a hub is unavailable, the built-in traffic management provided by the service ensures connectivity (via a different hub) to Azure resources for point-to-site users. For more information, or to download a hub-level profile VPN client configuration package, see [Global and hub profiles](global-hub-profile.md).
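Because the global profile resolves through Traffic Manager, you can see which hub gateway a client would currently be directed to by resolving the FQDN found in your downloaded profile. A rough sketch with Python's standard library; the hostname below is a placeholder, so substitute the value from your own profile package.

```python
import socket

# Placeholder FQDN; use the server address found in your downloaded global profile.
global_profile_fqdn = "contoso-vwan.trafficmanager.net"

hostname, aliases, addresses = socket.gethostbyname_ex(global_profile_fqdn)
print("Currently resolves to:", addresses)  # the hub gateway selected for this client right now
```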
[!INCLUDE [Download profile](../../includes/virtual-wan-p2s-download-profile-include.md)] ## <a name="configure-client"></a>Configure VPN clients
-Use the downloaded profile package to configure the remote access VPN clients. The procedure for each operating system is different. Follow the instructions that apply to your system.
+Use the downloaded profile package to configure the native VPN client on your computer. The procedure for each operating system is different. Follow the instructions that apply to your system.
Once you have finished configuring your client, you can connect. [!INCLUDE [Configure clients](../../includes/virtual-wan-p2s-configure-clients-include.md)]
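If you script client setup for the OpenVPN tunnel type, the downloaded package typically includes an OpenVPN configuration file with placeholders for the client certificate and private key. The sketch below fills those in; the file paths and the placeholder token names (`$CLIENTCERTIFICATE`, `$PRIVATEKEY`) are assumptions, so verify them against the file your gateway actually generated.

```python
from pathlib import Path

# Paths are assumptions; point them at the extracted profile package and your client cert/key (PEM).
template = Path("OpenVPN/vpnconfig.ovpn").read_text()
client_cert = Path("client-cert.pem").read_text().strip()
client_key = Path("client-key.pem").read_text().strip()

# Placeholder token names are assumptions; check the generated .ovpn file before relying on them.
profile = template.replace("$CLIENTCERTIFICATE", client_cert).replace("$PRIVATEKEY", client_key)

Path("profile-ready.ovpn").write_text(profile)
print("Wrote profile-ready.ovpn; import it into your OpenVPN client.")
```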
vpn-gateway Nat Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/nat-overview.md
NAT on a gateway device translates the source and/or destination IP addresses, b
Another consideration is the address pool size for translation. If the target address pool size is the same as the original address pool, use a static NAT rule to define a 1:1 mapping in sequential order (see the illustrative sketch after the note below). If the target address pool is smaller than the original address pool, use a dynamic NAT rule to accommodate the differences. > [!IMPORTANT]
-> * NAT is supported on the the following SKUs: VpnGw2~5, VpnGw2AZ~5AZ.
+> * NAT is supported on the following SKUs: VpnGw2~5, VpnGw2AZ~5AZ.
> * NAT is supported on IPsec cross-premises connections only. VNet-to-VNet connections or P2S connections are not supported. ## <a name="mode"></a>NAT mode: ingress & egress
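To make the static versus dynamic distinction concrete, here is a purely illustrative Python sketch (not part of any gateway configuration) of the 1:1 sequential mapping a static NAT rule implies when the original and target address pools are the same size; the address ranges are made up.

```python
from ipaddress import ip_network

# Made-up, equal-sized pools: a static rule can map them 1:1 in sequential order.
internal = list(ip_network("10.1.0.0/27").hosts())
external = list(ip_network("192.168.100.0/27").hosts())
assert len(internal) == len(external), "a 1:1 static mapping needs equal pool sizes"

# First internal address maps to first external address, and so on.
static_mapping = dict(zip(internal, external))
print(static_mapping[internal[0]])  # 10.1.0.1 -> 192.168.100.1

# If the target pool were smaller than the original, a fixed table couldn't cover every
# source address; a dynamic NAT rule assigns translations on demand instead.
```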
vpn-gateway Point To Site Vpn Client Cert Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-mac.md
You can generate client configuration files using PowerShell, or by using the Az
1. Copy the URL to your browser to download the zip file, then unzip the file to view the folders.
-## IKEv2 - macOS steps
+## <a name="ikev2-macOS"></a>IKEv2 - macOS steps
### <a name="view"></a>View files
Configure authentication settings. There are two sets of instructions. Choose th
:::image type="content" source="./media/point-to-site-vpn-client-cert-mac/connected.png" alt-text="Screenshot shows Connected." lightbox="./media/point-to-site-vpn-client-cert-mac/expanded/connected.png":::
-## OpenVPN - macOS steps
+## <a name="openvpn-macOS"></a>OpenVPN - macOS steps
>[!INCLUDE [OpenVPN Mac](../../includes/vpn-gateway-vwan-config-openvpn-mac.md)]
-## OpenVPN - iOS steps
+## <a name="OpenVPN-iOS"></a>OpenVPN - iOS steps
>[!INCLUDE [OpenVPN iOS](../../includes/vpn-gateway-vwan-config-openvpn-ios.md)]