Updates from: 08/24/2021 03:19:10
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/azure-sentinel.md
Title: Secure Azure AD B2C with Azure Sentinel
-description: This tutorial shows how to perform Security Analytics for Azure AD B2C with Azure Sentinel.
+description: Tutorial to perform security analytics for Azure Active Directory B2C data with Azure Sentinel
Previously updated : 07/19/2021 Last updated : 08/17/2021
-# Tutorial: How to perform security analytics for Azure AD B2C data with Azure Sentinel
+# Tutorial: Configure security analytics for Azure Active Directory B2C data with Azure Sentinel
-You can further secure your Azure AD B2C environment by routing logs and audit information to Azure Sentinel. Azure Sentinel is a cloud-native **Security Information Event Management (SIEM) and Security Orchestration Automated Response (SOAR)** solution. Azure Sentinel provides alert detection, threat visibility, proactive hunting, and threat response for **Azure AD B2C**.
+You can further secure your Azure Active Directory (AD) B2C environment by routing logs and audit information to Azure Sentinel. Azure Sentinel is a cloud-native
+Security Information Event Management (SIEM) and Security Orchestration Automated Response (SOAR) solution. Azure Sentinel
+provides alert detection, threat visibility, proactive hunting, and threat response for Azure AD B2C.
-By utilizing Azure Sentinel in conjunction with Azure AD B2C, you can:
+By using Azure Sentinel with Azure AD B2C, you can:
-- Detect previously undetected threats, and minimize false positives using Microsoft's analytics and unparalleled threat intelligence.
-- Investigate threats with artificial intelligence, and hunt for suspicious activities at scale, tapping into years of cyber security work at Microsoft.
+- Detect previously undetected threats and minimize false positives using Microsoft's analytics and unparalleled threat intelligence.
+- Investigate threats with artificial intelligence. Hunt for suspicious activities at scale, tap into years of cybersecurity-related work at Microsoft.
- Respond to incidents rapidly with built-in orchestration and automation of common tasks.
- Meet security and compliance requirements for your organization.
-In this tutorial, you'll learn:
+In this tutorial, you'll learn to:
-1. How to transfer the B2C logs to Azure Monitor Logs workspace.
-1. Enable **Azure Sentinel** on a Log Analytics workspace.
-1. Create a sample rule in Sentinel that will trigger an incident.
-1. And lastly, configure some automated response.
+1. [Transfer the Azure AD B2C logs to Azure Monitor logs workspace](#configure-azure-ad-b2c-with-azure-monitor-logs-analytics)
+2. [Enable Azure Sentinel on a Log analytics workspace](#deploy-an-azure-sentinel-instance)
+3. [Create a sample rule in Azure Sentinel that will trigger an incident](#create-an-azure-sentinel-rule)
+4. [Configure automated response](#automated-response)
-## Configure AAD B2C with Azure Monitor Logs Analytics
+## Configure Azure AD B2C with Azure Monitor logs analytics
-The next steps will take through the process to enable **_Diagnostic settings_** in Azure Active Directory within your Azure AD B2C tenant.
-Diagnostic settings define where logs and metrics for a resource should be sent.
+Enable **Diagnostic settings** in Azure AD within your Azure AD B2C tenant to define where logs and metrics for a resource should be sent.
-Follow steps **1 to 5** of the [Monitor Azure AD B2C with Azure monitor](./azure-monitor.md) to configure Azure AD B2C to send logs to Azure Monitor.
+Then, [configure Azure AD B2C to send logs to Azure Monitor](https://docs.microsoft.com/azure/active-directory-b2c/azure-monitor).
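Once logs start flowing, a quick check in the Log Analytics workspace can confirm the pipeline. This is a minimal sketch, assuming Azure AD B2C sign-in events are routed to the standard SigninLogs table:

```kusto
// Confirm that Azure AD B2C sign-in events are arriving in the workspace
SigninLogs
| where TimeGenerated > ago(1h)
| project TimeGenerated, UserPrincipalName, ResultType, AppDisplayName
| take 10
```

An empty result usually just means the diagnostic setting was created recently; logs can take several minutes to appear.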
## Deploy an Azure Sentinel instance
-> [!IMPORTANT]
-> To enable Azure Sentinel, you need **contributor permissions** to the subscription in which the Azure Sentinel workspace resides. To use Azure Sentinel, you need either contributor or reader permissions on the resource group that the workspace belongs to.
+>[!IMPORTANT]
+>To enable Azure Sentinel, you need **contributor permissions** to the subscription in which the Azure Sentinel workspace resides. To use Azure Sentinel, you need either contributor or reader permissions on the resource group that the workspace belongs to.
Once you've configured your Azure AD B2C instance to send logs to Azure Monitor, you need to enable an Azure Sentinel instance.
-1. Sign into the Azure portal. Make sure that the subscription where the LA (log analytics) is workspace created in the previous step is selected.
+1. Go to the [Azure portal](https://portal.azure.com). Select the subscription where the log analytics workspace is created.
2. Search for and select **Azure Sentinel**.
3. Select **Add**.
- :::image type="content" source="./media/azure-sentinel/azure-sentinel-add.png" alt-text="search for Azure Sentinel in the Azure portal":::
+![image shows search for Azure Sentinel in the Azure portal](./media/azure-sentinel/azure-sentinel-add.png)
-4. Select the workspace used in the previous step.
+4. Select the new workspace.
- :::image type="content" source="./media/azure-sentinel/create-new-workspace.png" alt-text="select the sentinel workspace":::
+![image select the sentinel workspace](./media/azure-sentinel/create-new-workspace.png)
5. Select **Add Azure Sentinel**.
- > [!NOTE]
- > You can run Azure Sentinel on more than one workspace, but the data is isolated to a single workspace. For additional details on enabling Sentinel, please see this [QuickStart](../sentinel/quickstart-onboard.md).
+>[!NOTE]
+>You can [run Azure Sentinel](../sentinel/quickstart-onboard.md) on more than one workspace, but the data is isolated to a single workspace.
-## Create a sentinel rule
+## Create an Azure Sentinel rule
-> [!NOTE]
-> Azure Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules designed by Microsoft's team of security experts and analysts. Rules created from these templates automatically search across your data for any suspicious activity. Because today there is no native Azure AD B2C connector we will not use native rules in our example. For this tutorial we will create our own rule.
+>[!NOTE]
+>Azure Sentinel provides out-of-the-box, built-in templates to help you create threat detection rules designed by Microsoft's team of security experts and analysts. Rules created from these templates automatically search across your data for any suspicious activity. There are no native Azure AD B2C connectors available at this time. For the example in this tutorial, we'll create our own rule.
-Now that you've enabled Sentinel you'll want to be notified when something suspicious occurs in your B2C tenant.
+Now that you've enabled Azure Sentinel, get notified when something suspicious occurs in your Azure AD B2C tenant.
-You can create custom analytics rules to help you discover threats and anomalous behaviors that are present in your environment. These rules search for specific events or sets of events, alert you when certain event thresholds or conditions are reached to then generate incidents for further investigation.
+You can create [custom analytics rules](../sentinel/tutorial-detect-threats-custom.md) to discover threats and
+anomalous behaviors that are present in your environment. These rules search for specific events or sets of events, alert you when certain event thresholds or conditions are reached, and then generate incidents for further investigation.
-> [!NOTE]
-> For a detailed review on Analytic Rules you can see this [Tutorial](/azure/active-directory-b2c/articles/sentinel/detect-threats-custom.md).
-
-In our scenario, we want to receive a notification if someone is trying to force access to our environment but they are not successful, this could mean a brute-force attack, we want to get notified for **_2 or more non successful logins within 60 sec_**
+In the following example, we explain the scenario where you receive a notification if someone is trying to force access to your environment but they aren't successful. It could mean a brute-force attack. You want to get notified for two or more non-successful logins within 60 seconds.
1. From the Azure Sentinel navigation menu, select **Analytics**.
-2. In the action bar at the top, select **+Create** and select **Scheduled query rule**. This opens the **Analytics rule wizard**.
-
- :::image type="content" source="./media/azure-sentinel/create-scheduled-rule.png" alt-text="select create scheduled query rule":::
-
-3. Analytics rule wizard - General tab
-
- - Provide a unique **Name** and a **Description**
- - **Name**: _B2C Non-successful logins_ **Description**: _Notify on two or more non-successful logins within 60 sec_
- - In the **Tactics** field, you can choose from among categories of attacks by which to classify the rule. These are based on the tactics of the [MITRE ATT&CK](https://attack.mitre.org/) framework.
-
- - For our example, we will choose _PreAttack_
-
- > [!Tip]
- > MITRE ATT&CK® is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies.
- - Set the alert **Severity** as appropriate.
- - As this is our first rule, we will choose _High_. We can makes changes to our rule later
- When you create the rule, its **Status** is **Enabled** by default, which means it will run immediately after you finish creating it. If you don't want it to run immediately, select **Disabled**, and the rule will be added to your **Active rules** tab and you can enable it from there when you need it.
+2. In the action bar at the top, select **+ Create** and select
+ **Scheduled query rule**. It will open the **Analytics rule wizard**.
- :::image type="content" source="./media/azure-sentinel/create-new-rule.png" alt-text="provide basic rule properties":::
+![image shows select create scheduled query rule](./media/azure-sentinel/create-scheduled-rule.png)
-4. Define the rule query logic and configure settings.
+3. In the Analytics rule wizard, go to the **General** tab.
- In the **Set rule logic** tab, we will write a query directly in the **Rule query** field. This query will alert you when there are two or more non-successful logins within 60 sec to your B2C tenant and will organize by _UserPrincipalName_
+ | Field | Value |
+ |:--|:--|
+ |Name | B2C non-successful logins |
+ |Description | Notify on two or more non-successful logins within 60 seconds |
+ | Tactics | Choose from the categories of attacks by which to classify the rule. These categories are based on the tactics of the [MITRE ATT&CK](https://attack.mitre.org/) framework.<BR>For our example, we'll choose `PreAttack` <BR> MITRE ATT&CK® is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. The ATT&CK knowledge base is used as a foundation for the development of specific threat models and methodologies.
+ | Severity | As appropriate |
+ | Status | When you create the rule, its Status is `Enabled` by default, which means it will run immediately after you finish creating it. If you don't want it to run immediately, select `Disabled`, and the rule will be added to your Active rules tab and you can enable it from there when you need it.|
- ```kusto
- SigninLogs
- | where ResultType != "0"
- | summarize Count = count() by bin(TimeGenerated, 60s), UserPrincipalName
- | project Count = toint(Count), UserPrincipalName
- | where Count >= 1
- ```
+![image provide basic rule properties](./media/azure-sentinel/create-new-rule.png)
- :::image type="content" source="./media/azure-sentinel/rule-query.png" alt-text="enter the rule query in the logic tab":::
+4. To define the rule query logic and configure settings, in the **Set rule logic** tab, write a query directly in the
+**Rule query** field. This query will alert you when there are two or more non-successful logins within 60 seconds to your Azure AD B2C tenant and will organize by `UserPrincipalName`.
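A minimal sketch of such a rule query, assuming failed Azure AD B2C sign-ins surface in the SigninLogs table with a non-zero ResultType:

```kusto
// Count failed sign-ins per user in 60-second bins and keep users with two or more
SigninLogs
| where ResultType != "0"
| summarize FailedAttempts = count() by bin(TimeGenerated, 60s), UserPrincipalName
| where FailedAttempts >= 2
| project TimeGenerated, UserPrincipalName, FailedAttempts
```

Adjust the threshold and bin size to match your own tolerance for failed sign-ins.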
- In the Query scheduling section, set the following parameters:
+![image shows enter the rule query in the logic tab](./media/azure-sentinel/rule-query.png)
- :::image type="content" source="./media/azure-sentinel/query-scheduling.png" alt-text="set query scheduling parameters":::
+In the Query scheduling section, set the following parameters:
-5. Click Next in **Incident Settings (Preview)** and in **Automated Response**. You will configure and add the Automated Response later.
+![image set query scheduling parameters](./media/azure-sentinel/query-scheduling.png)
-6. Click Next get to the **Review and create** tab to review all the settings for your new alert rule. When the "Validation passed" message appears, select **Create** to initialize your alert rule.
+5. Select **Next:Incident settings (Preview)**. You'll configure and add the Automated response later.
- :::image type="content" source="./media/azure-sentinel/review-create.png" alt-text="review and create rule":::
+6. Go to the **Review and create** tab to review all the
+ settings for your new alert rule. When the **Validation passed** message appears, select **Create** to initialize your alert rule.
-7. View the rule and Incidents it generates.
+![image review and create rule](./media/azure-sentinel/review-create.png)
- You can find your newly created custom rule (of type "Scheduled") in the table under the **Active rules** tab on the main **Analytics** screen. From this list you can **_edit_**, **_enable_**, **_disable_**, or **_delete_** rules.
+7. View the rule and incidents it generates. Find your newly created custom rule of type **Scheduled** in the table under the **Active rules** tab on the main **Analytics** screen. From this list you can **edit**, **enable**, **disable**, or **delete** rules.
- :::image type="content" source="./media/azure-sentinel/rule-crud.png" alt-text="analytics screen showing options to edit, enable, disable or delete rules":::
+![image analytics screen showing options to edit, enable, disable or delete rules](./media/azure-sentinel/rule-crud.png)
- To view the results of our new B2C Non-successful logins rule, go to the **Incidents** page, where you can triage, investigate, and remediate the threats.
+8. View the results of your new Azure AD B2C non-successful logins rule. Go to the **Incidents** page, where you can triage, investigate, and remediate the threats. An incident can include multiple alerts. It's an aggregation of all the relevant evidence for a specific investigation. You can set properties such as severity and status at the incident level.
- An incident can include multiple alerts. It's an aggregation of all the relevant evidence for a specific investigation. You can set properties such as severity and status at the incident level.
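To keep an eye on what the rule is producing outside the portal, you can also query the alerts it raises. This is a minimal sketch, assuming alerts from scheduled analytics rules land in the SecurityAlert table and that the rule name matches the one entered on the General tab:

```kusto
// List alerts raised by the custom rule over the last day
SecurityAlert
| where TimeGenerated > ago(1d)
| where AlertName == "B2C non-successful logins"
| project TimeGenerated, AlertName, AlertSeverity, Description
```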
+>[!Note]
+>A key feature of Azure Sentinel is [incident investigation](../sentinel/tutorial-investigate-cases.md).
- > [!NOTE]
- > For detailed review on Incident investigation please see [this Tutorial](/azure/active-directory-b2c/articles/sentinel/investigate-cases.md)
+9. To begin the investigation, select a specific incident. On the
+right, you can see detailed information for the incident including its severity, entities involved, the raw events that triggered the incident, and the incident's unique ID.
- To begin the investigation, select a specific incident. On the right, you can see detailed information for the incident including its severity, entities involved, the raw events that triggered the incident, and the incident's unique ID.
+![image alt-text="incident screen](./media/azure-sentinel/select-incident.png)
- :::image type="content" source="./media/azure-sentinel/select-incident.png" alt-text="incident screen":::
+10. Select **View full details** on the incident page and review the relevant tabs that summarize the incident information and provide more details.
- To view more details about the alerts and entities in the incident, select **View full details** in the incident page and review the relevant tabs that summarize the incident information
+![image rule 73](./media/azure-sentinel/full-details.png)
- :::image type="content" source="./media/azure-sentinel/full-details.png" alt-text="rule 73":::
+11. Select **Evidence** > **Events** > **Link to Log Analytics**. The result will display the `UserPrincipalName` of the identity trying to sign in, along with the number of attempts.
- To review further details about the incident, you can select **Evidence->Events** or **Events -> Link to Log Analytics**
-
- The results will display the _UserPrincipalName_ of the identity trying to log in the _number_ of attempts.
-
- :::image type="content" source="./media/azure-sentinel/logs.png" alt-text="details of selected incident":::
+![image details of selected incident](./media/azure-sentinel/logs.png)
## Automated response
-Azure Sentinel also provides a robust SOAR capability; additional information can be found at the official Sentinel documentation [here](../sentinel/automation-in-azure-sentinel.md).
-
-Automated actions, called a playbook in Sentinel can be attached to Analytics rules to suit your requirements.
-
-In our example, we are going to add an Email notification upon an incident created by our rule.
-
-To accomplish our task, we will use an existing Playbook from the Sentinel GitHub repository [Incident-Email-Notification](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification)
+Azure Sentinel provides a [robust SOAR capability](../sentinel/automation-in-azure-sentinel.md). Automated actions, called playbooks in Azure Sentinel, can be attached to analytics rules to suit your requirements.
-Once the Playbook is configured, you'll have to just edit the existing rule and select the playbook into the Automation tab:
+In this example, we add an email notification for incidents created by the rule. Use an [existing playbook from the Sentinel GitHub repository](https://github.com/Azure/Azure-Sentinel/tree/master/Playbooks/Incident-Email-Notification) to accomplish this task. Once the playbook is configured, edit the existing rule and select the playbook on the **Automation** tab.
+![image configuration screen for the automated response associated to a rule](./media/azure-sentinel/automation-tab.png)
## Next steps
-- Because no rule is perfect, if needed you can update the rule query to exclude false positives. For more information, see [Handle false positives in Azure Sentinel](../sentinel/false-positives.md)
+- [Handle false positives in Azure Sentinel](../sentinel/false-positives.md)
-- To help with data analysis and creation of rich visual reports, choose and download from a gallery of expertly created workbooks that surface insights based on your data. [These workbooks](https://github.com/azure-ad-b2c/siem#workbooks) can be easily customized to your needs.
+- [Sample workbooks](https://github.com/azure-ad-b2c/siem#workbooks)
-- Learn more about Sentinel in the [Azure Sentinel documentation](../sentinel/index.yml)
+- [Azure Sentinel documentation](../sentinel/index.yml)
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/configure-authentication-sample-web-app.md
Previously updated : 06/11/2021 Last updated : 08/23/2021
Follow these steps to create the app registration:
For web apps that request an ID token directly from Azure AD B2C, enable the implicit grant flow in the app registration.
1. In the left menu, under **Manage**, select **Authentication**.
-1. Under **Implicit grant**, select the **ID tokens** check box.
+1. Under **Implicit grant**, select the **ID tokens (used for implicit and hybrid flows)**, and the **Access tokens (used for implicit flows)** check boxes.
1. Select **Save**.

## Step 3: Get the web app sample
You can add and modify redirect URIs in your registered applications at any time
## Next steps

* Learn more [about the code sample](https://github.com/Azure-Samples/active-directory-aspnetcore-webapp-openidconnect-v2/tree/master/1-WebApp-OIDC/1-5-B2C#about-the-code)
-* Learn how to [Enable authentication in your own web application using Azure AD B2C](enable-authentication-web-application.md)
+* Learn how to [Enable authentication in your own web application using Azure AD B2C](enable-authentication-web-application.md)
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-akamai.md
In this sample tutorial, learn how to enable [Akamai Web Application Firewall (WAF)](https://www.akamai.com/us/en/resources/web-application-firewall.jsp) solution for Azure Active Directory (AD) B2C tenant using custom domains. Akamai WAF helps organizations protect their web applications from malicious attacks that aim to exploit vulnerabilities such as SQL injection and cross-site scripting.
+>[!NOTE]
+>This feature is in public preview.
+
Benefits of using Akamai WAF solution:
- An edge platform that allows traffic management to your services.
Check the following to ensure all traffic to Azure AD B2C is now going through t
- [Configure a custom domain in Azure AD B2C](./custom-domain.md?pivots=b2c-user-flow)
-- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
+- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
active-directory-b2c Partner Azure Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-azure-web-application-firewall.md
In this sample tutorial, learn how to enable [Azure Web Application Firewall (WAF)](https://azure.microsoft.com/services/web-application-firewall/#overview) solution for Azure Active Directory (AD) B2C tenant with custom domain. Azure WAF provides centralized protection of your web applications from common exploits and vulnerabilities.
+>[!NOTE]
+>This feature is in public preview.
+
## Prerequisites

To get started, you'll need:
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory-b2c/partner-cloudflare.md
In this sample tutorial, learn how to enable [Cloudflare Web Application Firewall (WAF)](https://www.cloudflare.com/waf/) solution for Azure Active Directory (AD) B2C tenant with custom domain. Cloudflare WAF helps organizations protect against malicious attacks that aim to exploit vulnerabilities such as SQLi and XSS.
+>[!NOTE]
+>This feature is in public preview.
+
## Prerequisites

To get started, you'll need:
active-directory On Premises Application Provisioning Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-application-provisioning-architecture.md
There are three primary components to provisioning users into an on-premises app
You don't need to open inbound connections to the corporate network. The provisioning agents only use outbound connections to the provisioning service, which means there's no need to open firewall ports for incoming connections. You also don't need a perimeter (DMZ) network because all connections are outbound and take place over a secure channel.
+## ECMA Connector Host architecture
+The ECMA Connector Host uses several areas to achieve on-premises provisioning. The diagram below is a conceptual drawing that presents these individual areas, and the table below describes them in more detail.
+
+[![ECMA connector host](.\media\on-premises-application-provisioning-architecture\ecma-2.png)](.\media\on-premises-application-provisioning-architecture\ecma-2.png#lightbox)
+++
+|Area|Description|
+|--|--|
+|Endpoints|Responsible for communication and data-transfer with the Azure AD provisioning service|
+|In-memory cache|Used to store the data imported from the on-premises data source|
+|Autosync|Provides asynchronous data synchronization between the ECMA Connector Host and the on-premises data source|
+|Business logic|Used to coordinate all of the ECMA Connector Host activities. The Autosync interval is configurable on the properties page of the ECMA host.|
+
+### About anchor attributes and distinguished names
+The following information explains the anchor attributes and the distinguished names, particularly as they're used by the genericSQL connector.
+
+The anchor attribute is a unique attribute of an object type that does not change and represents that object in the ECMA Connector Host in-memory cache.
+
+The distinguished name (DN) is a name that uniquely identifies an object by indicating its current location in the directory hierarchy (or, in the case of SQL, in the partition). The name is formed by concatenating the anchor attribute at the root of the directory partition.
+
+A traditional DN in, say, Active Directory or LDAP looks similar to:
+
+ CN=Lola Jacobson,CN=Users,DC=contoso,DC=com
+
+However, for a data source such as SQL, which is flat, not hierarchical, the DN needs to be either already present in one of the tables or created from the information we provide to the ECMA Connector Host.
+
+This can be achieved by selecting the **Autogenerated** checkbox when configuring the genericSQL connector. When you choose to have the DN autogenerated, the ECMA host will generate a DN in an LDAP format: CN=&lt;anchorvalue&gt;,OBJECT=&lt;type&gt;. This also assumes that DN is Anchor is **unchecked** on the Connectivity page.
+
+ [![DN is Anchor unchecked](.\media\on-premises-application-provisioning-architecture\user-2.png)](.\media\on-premises-application-provisioning-architecture\user-2.png#lightbox)
+
+The genericSQL connector expects the DN to be populated using an LDAP format and uses the LDAP style with the component name "OBJECT=". This allows it to use partitions (each object type is a partition).
+
+Since ECMA Connector Host currently only supports the USER object type, the OBJECT=&lt;type&gt; will be OBJECT=USER. So the DN for a user with an anchorvalue of ljacobson would be:
+
+ CN=ljacobson,OBJECT=USER
++
+### User creation workflow
+
+1. The Azure AD provisioning service queries the ECMA Connector Host to see if the user exists. It uses the **matching attribute** as the filter. This attribute is defined in the Azure AD portal under Enterprise applications -> On-premises provisioning -> provisioning -> attribute matching. It's denoted by a matching precedence of 1.
+You can define one or more matching attributes and prioritize them by precedence. You can also change the matching attribute if you need to.
+ [![Matching attribute](.\media\on-premises-application-provisioning-architecture\match-1.png)](.\media\on-premises-application-provisioning-architecture\match-1.png#lightbox)
+
+2. ECMA Connector Host receives the GET request and queries its internal cache to see if the user exists and has been imported. This is done using the **query attribute**. The query attribute is defined in the object types page.
+ [![Query attribute](.\media\on-premises-application-provisioning-architecture\match-2.png)](.\media\on-premises-application-provisioning-architecture\match-2.png#lightbox)
++
+3. If the user does not exist, Azure AD will make a POST request to create the user. The ECMA Connector Host will respond to Azure AD with an HTTP 201 and provide an ID for the user. This ID is derived from the anchor value defined in the object types page. This anchor will be used by Azure AD to query the ECMA Connector Host in subsequent requests.
+4. If a change happens to the user in Azure AD, then Azure AD will make a GET request to retrieve the user using the anchor from the previous step, rather than the matching attribute in step 1. This allows, for example, the UPN to change without breaking the link between the user in Azure AD and in the app.
## Agent best practices

- Ensure the Azure AD Connect Provisioning Agent Auto Update service is running. It's enabled by default when you install the agent. Auto-update is required for Microsoft to support your deployment.
- Avoid all forms of inline inspection on outbound TLS communications between agents and Azure. This type of inline inspection causes degradation to the communication flow.
You don't need to open inbound connections to the corporate network. The provisi
- Reducing the distance between the two ends of the hop.
- Choosing the right network to traverse. For example, traversing a private network rather than the public internet might be faster because of dedicated links.
++
For the latest GA version of the provisioning agent, see [Azure AD connect provi
### How do I know the version of my provisioning agent? 1. Sign in to the Windows server where the provisioning agent is installed.
- 1. Go to **Control Panel** > **Uninstall or Change a Program**.
- 1. Look for the version that corresponds to the entry for **Microsoft Azure AD Connect Provisioning Agent**.
+ 2. Go to **Control Panel** > **Uninstall or Change a Program**.
+ 3. Look for the version that corresponds to the entry for **Microsoft Azure AD Connect Provisioning Agent**.
### Does Microsoft automatically push provisioning agent updates?
The provisioning agent supports use of outbound proxy. You can configure it by e
You can also check whether all the required ports are open. ### How do I uninstall the provisioning agent?
-1. Sign in to the Windows server where the provisioning agent is installed.
-1. Go to **Control Panel** > **Uninstall or Change a Program**.
-1. Uninstall the following programs:
+ 1. Sign in to the Windows server where the provisioning agent is installed.
+ 2. Go to **Control Panel** > **Uninstall or Change a Program**.
+ 3. Uninstall the following programs:
- Microsoft Azure AD Connect Provisioning Agent
- Microsoft Azure AD Connect Agent Updater
- Microsoft Azure AD Connect Provisioning Agent Package
+++
## Next steps

- [App provisioning](user-provisioning.md)
active-directory On Premises Ecma Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/app-provisioning/on-premises-ecma-troubleshoot.md
After you configure the ECMA host and provisioning agent, it's time to test conn
1. Check that the agent and ECMA host are running: 1. On the server with the agent installed, open **Services** by going to **Start** > **Run** > **Services.msc**.
- 1. Under **Services**, make sure the **Microsoft Azure AD Connect Agent Updater**, **Microsoft Azure AD Connect Provisioning Agent**, and **Microsoft ECMA2Host** services are present and their status is *Running*.
+ 2. Under **Services**, make sure the **Microsoft Azure AD Connect Agent Updater**, **Microsoft Azure AD Connect Provisioning Agent**, and **Microsoft ECMA2Host** services are present and their status is *Running*.
![Screenshot that shows that the ECMA service is running.](./media/on-premises-ecma-troubleshoot/tshoot-1.png)
- 1. Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts** > **TestECMA2HostConnection**. Run the script. This script sends a SCIM GET or POST request to validate that the ECMA Connector Host is operating and responding to requests. It should be run on the same computer as the ECMA Connector Host service itself.
- 1. Ensure that the agent is active by going to your application in the Azure portal, selecting **admin connectivity**, selecting the agent dropdown list, and ensuring your agent is active.
- 1. Check if the secret token provided is the same as the secret token on-premises. Go to on-premises, provide the secret token again, and then copy it into the Azure portal.
- 1. Ensure that you've assigned one or more agents to the application in the Azure portal.
- 1. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes.
- 1. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate.
- 1. Restart the provisioning agent by going to the taskbar on your VM by searching for the Microsoft Azure AD Connect provisioning agent. Right-click **Stop**, and then select **Start**.
- 1. When you provide the tenant URL in the Azure portal, ensure that it follows the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format.
+ 2. Go to the folder where the ECMA host was installed by selecting **Troubleshooting** > **Scripts** > **TestECMA2HostConnection**. Run the script. This script sends a SCIM GET or POST request to validate that the ECMA Connector Host is operating and responding to requests. It should be run on the same computer as the ECMA Connector Host service itself.
+ 3. Ensure that the agent is active by going to your application in the Azure portal, selecting **admin connectivity**, selecting the agent dropdown list, and ensuring your agent is active.
+ 4. Check if the secret token provided is the same as the secret token on-premises. Go to on-premises, provide the secret token again, and then copy it into the Azure portal.
+ 5. Ensure that you've assigned one or more agents to the application in the Azure portal.
+ 6. After you assign an agent, you need to wait 10 to 20 minutes for the registration to complete. The connectivity test won't work until the registration completes.
+ 7. Ensure that you're using a valid certificate. Go to the **Settings** tab of the ECMA host to generate a new certificate.
+ 8. Restart the provisioning agent by going to the taskbar on your VM by searching for the Microsoft Azure AD Connect provisioning agent. Right-click **Stop**, and then select **Start**.
+ 9. When you provide the tenant URL in the Azure portal, ensure that it follows the following pattern. You can replace `localhost` with your host name, but it isn't required. Replace `connectorName` with the name of the connector you specified in the ECMA host. The error message 'invalid resource' generally indicates that the URL does not follow the expected format.
``` https://localhost:8585/ecma2host_connectorName/scim
This problem is typically caused by a group policy that prevented permissions fr
To resolve this problem:
-1. Sign in to the server with an administrator account.
-1. Open **Services** by either navigating to it or by going to **Start** > **Run** > **Services.msc**.
-1. Under **Services**, double-click **Microsoft Azure AD Connect Provisioning Agent**.
-1. On the **Log On** tab, change **This account** to a domain admin. Then restart the service.
+ 1. Sign in to the server with an administrator account.
+ 2. Open **Services** by either navigating to it or by going to **Start** > **Run** > **Services.msc**.
+ 3. Under **Services**, double-click **Microsoft Azure AD Connect Provisioning Agent**.
+ 4. On the **Log On** tab, change **This account** to a domain admin. Then restart the service.
This test verifies that your agents can communicate with Azure over port 443. Open a browser, and go to the previous URL from the server where the agent is installed.
By default, the agent emits minimal error messages and stack trace information.
To gather more information for troubleshooting agent-related problems:
-1. Install the AADCloudSyncTools PowerShell module as described in [AADCloudSyncTools PowerShell Module for Azure AD Connect cloud sync](../../active-directory/cloud-sync/reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
-1. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. Use the following switches to fine-tune your data collection. Use:
+ 1. Install the AADCloudSyncTools PowerShell module as described in [AADCloudSyncTools PowerShell Module for Azure AD Connect cloud sync](../../active-directory/cloud-sync/reference-powershell.md#install-the-aadcloudsynctools-powershell-module).
+ 2. Use the `Export-AADCloudSyncToolsLogs` PowerShell cmdlet to capture the information. Use the following switches to fine-tune your data collection. Use:
- **SkipVerboseTrace** to only export current logs without capturing verbose logs (default = false). - **TracingDurationMins** to specify a different capture duration (default = 3 mins).
By using Azure AD, you can monitor the provisioning service in the cloud and col
```
+### I am getting an Invalid LDAP style DN error when trying to configure the ECMA Connector Host with SQL
+By default, the genericSQL connector expects the DN to be populated using the LDAP style (when the 'DN is anchor' attribute is left unchecked in the first connectivity page). In the error message above, you can see that the DN is a UPN, rather than an LDAP style DN that the connector expects.
+
+To resolve this, ensure that **Autogenerated** is selected on the object types page when you configure the connector.
+
+See [About anchor attributes and distinguished names](on-premises-application-provisioning-architecture.md#about-anchor-attributes-and-distinguished-names) for more information.
## Next steps
active-directory Scenario Desktop Call Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-desktop-call-api.md
Now that you have a token, you can call a protected web API.
```Java
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
+PublicClientApplication pca = PublicClientApplication.builder(clientId)
+ .authority(authority)
+ .build();
+
+// Acquire a token, acquireTokenHelper would call publicClientApplication's acquireTokenSilently then acquireToken
+// see https://github.com/Azure-Samples/ms-identity-java-desktop for a full example
+IAuthenticationResult authenticationResult = acquireTokenHelper(pca);
+ // Set the appropriate header fields in the request header.
-conn.setRequestProperty("Authorization", "Bearer " + accessToken);
+conn.setRequestProperty("Authorization", "Bearer " + authenticationResult.accessToken);
conn.setRequestProperty("Accept", "application/json"); String response = HttpClientHelper.getResponseStringFromConn(conn);
active-directory Scenario Spa Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/develop/scenario-spa-overview.md
Many modern web applications are built as client-side single-page applications.
The Microsoft identity platform provides **two** options to enable single-page applications to sign in users and get tokens to access back-end services or web APIs: -- [OAuth 2.0 Authorization code flow (with PKCE)](./v2-oauth2-auth-code-flow.md). The authorization code flow allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs. In addition, it returns **Refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction with those users. This is the **recommended** approach.
+- [OAuth 2.0 Authorization code flow (with PKCE)](./v2-oauth2-auth-code-flow.md). The authorization code flow allows the application to exchange an authorization code for **ID** tokens to represent the authenticated user and **Access** tokens needed to call protected APIs. PKCE is Proof Key for Code Exchange and is designed to prevent several attacks and to be able to securely perform the OAuth exchange from public clients. PKCE is an IETF standard documented in RFC 7636. In addition, it returns **Refresh** tokens that provide long-term access to resources on behalf of users without requiring interaction with those users. This is the **recommended** approach.
![Single-page applications-auth](./media/scenarios/spa-app-auth.svg)
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
Previously updated : 8/20/2021 Last updated : 8/24/2021
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
- **Service plans included (friendly names)**: A list of service plans (friendly names) in the product that correspond to the string ID and GUID
>[!NOTE]
->This information last updated on August 20th, 2021.
+>This information last updated on August 24th, 2021.
| Product name | String ID | GUID | Service plans included | Service plans included (friendly names) |
| | | | | |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| AZURE ACTIVE DIRECTORY PREMIUM P1 | AAD_PREMIUM | 078d2b04-f1bd-4111-bbd4-b4b1b354cef4 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) | | AZURE ACTIVE DIRECTORY PREMIUM P2 | AAD_PREMIUM_P2 | 84a661c4-e949-4bd2-a560-ed7766fcaf2b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0) | | AZURE INFORMATION PROTECTION PLAN 1 | RIGHTSMANAGEMENT | c52ea49f-fe5d-4e95-93ba-1de91d380f89 | RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3) | AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
+| Business Apps (free) | SMB_APPS | 90d8b3f8-712e-4f7b-aa1e-62e7ae6cbe96 | DYN365BC_MS_INVOICING (39b5c996-467e-4e60-bd62-46066f572726)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) | Microsoft Invoicing (39b5c996-467e-4e60-bd62-46066f572726)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2) |
| COMMON AREA PHONE | MCOCAP | 295a8eb0-f78d-45c7-8b5b-1eed5ed02dff | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Common Area Phone for GCC | MCOCAP_GOV | b1511558-69bd-4e1b-8270-59ca96dba0f3 | MCOEV_GOV (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>TEAMS_GOV (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>MCOSTANDARD_GOV (a31ef4a2-f787-435e-8335-e47eb0cafc94) | Microsoft 365 Phone System for Government (db23fce2-a974-42ef-9002-d78dd42a0f22)<br/>Microsoft Teams for Government (304767db-7d23-49e8-a945-4a7eb65f9f28)<br/>Skype for Business Online (Plan 2) for Government (a31ef4a2-f787-435e-8335-e47eb0cafc94) |
+| Common Data Service Database Capacity | CDS_DB_CAPACITY | e612d426-6bc3-4181-9658-91aa906b0ac0 | CDS_DB_CAPACITY (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Database Capacity (360bcc37-0c11-4264-8eed-9fa7a3297c9b)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Common Data Service Log Capacity | CDS_LOG_CAPACITY | 448b063f-9cc6-42fc-a0e6-40e08724a395 | CDS_LOG_CAPACITY (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Common Data Service for Apps Log Capacity (dc48f5c5-e87d-43d6-b884-7ac4a59e7ee9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| COMMUNICATIONS CREDITS | MCOPSTNC | 47794cd0-f0e5-45c5-9033-2eb6b5fc84e0 | MCOPSTNC (505e180f-f7e0-4b65-91d4-00d670bbd18c) | COMMUNICATIONS CREDITS (505e180f-f7e0-4b65-91d4-00d670bbd18c) | | Dynamics 365 - Additional Database Storage (Qualified Offer) | CRMSTORAGE | 328dc228-00bc-48c6-8b09-1fbc8bc3435d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMSTORAGE (77866113-0f3e-4e6e-9666-b1e25c6f99b0) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Storage Add-On (77866113-0f3e-4e6e-9666-b1e25c6f99b0) | | Dynamics 365 - Additional Production Instance (Qualified Offer) | CRMINSTANCE | 9d776713-14cb-4697-a21d-9a52455c738a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRMINSTANCE (eeea837a-c885-4167-b3d5-ddde30cbd85f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Instance (eeea837a-c885-4167-b3d5-ddde30cbd85f) | | Dynamics 365 - Additional Non-Production Instance (Qualified Offer) | CRMTESTINSTANCE | e06abcc2-7ec5-4a79-b08b-d9c282376f72 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> CRMTESTINSTANCE (a98b7619-66c7-4885-bdfc-1d9c8c3d279f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online Additional Test Instance (a98b7619-66c7-4885-bdfc-1d9c8c3d279f) | | Dynamics 365 Asset Management Addl Assets | DYN365_ASSETMANAGEMENT | 673afb9d-d85b-40c2-914e-7bf46cd5cd75 | D365_AssetforSCM (90467813-5b40-40d4-835c-abd48009b1d9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Asset Maintenance Add-in (90467813-5b40-40d4-835c-abd48009b1d9)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 Business Central Additional Environment Addon | DYN365_BUSCENTRAL_ADD_ENV_ADDON | a58f5506-b382-44d4-bfab-225b2fbf8390 | DYN365_BUSCENTRAL_ENVIRONMENT (d397d6c6-9664-4502-b71c-66f39c400ca4) | Dynamics 365 Business Central Additional Environment Addon (d397d6c6-9664-4502-b71c-66f39c400ca4) |
+| Dynamics 365 Business Central Database Capacity | DYN365_BUSCENTRAL_DB_CAPACITY | 7d0d4f9a-2686-4cb8-814c-eff3fdab6d74 | DYN365_BUSCENTRAL_DB_CAPACITY (ae6b27b3-fe31-4e77-ae06-ec5fabbc103a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Business Central Database Capacity (ae6b27b3-fe31-4e77-ae06-ec5fabbc103a)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 Business Central Essentials | DYN365_BUSCENTRAL_ESSENTIAL | 2880026b-2b0c-4251-8656-5d41ff11e3aa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Dynamics 365 for Business Central Essentials (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
+| Dynamics 365 Business Central External Accountant| DYN365_FINANCIALS_ACCOUNTANT_SKU | 9a1e33ed-9697-43f3-b84c-1b0959dbb1d4 | DYN365_FINANCIALS_ACCOUNTANT (170991d7-b98e-41c5-83d4-db2052e1795f)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Dynamics 365 Business Central External Accountant (170991d7-b98e-41c5-83d4-db2052e1795f)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| Dynamics 365 Business Central for IWs | PROJECT_MADEIRA_PREVIEW_IW_SKU | 6a4a1628-9b9a-424d-bed5-4118f0ede3fd | PROJECT_MADEIRA_PREVIEW_IW (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 Business Central for IWs (3f2afeed-6fb5-4bf9-998f-f2912133aead)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 Business Central Premium | DYN365_BUSCENTRAL_PREMIUM | f991cecc-3f91-4cd0-a9a8-bf1c8167e029 | DYN365_BUSCENTRAL_PREMIUM (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Dynamics 365 Business Central Premium (8e9002c0-a1d8-4465-b952-817d2948e6e2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| Dynamics 365 Customer Engagement Plan | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | D365_CSI_EMBED_CE (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>D365_ProjectOperations (69f07c66-bee4-4222-b051-195095efee5b)<br/>D365_ProjectOperationsCDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Forms_Pro_CE (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_FOR_PROJECT_OPERATIONS (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 Customer Service Insights for CE Plan (1412cdc1-d593-4ad1-9050-40c30ad0b023)<br/>Dynamics 365 P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>Dynamics 365 Project Operations (69f07c66-bee4-4222-b051-195095efee5b)<br/>Dynamics 365 Project Operations CDS (18fa3aba-b085-4105-87d7-55617b8585e6)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>Microsoft Dynamics 365 Customer Voice for Customer Engagement Plan (97f29a83-1a20-44ff-bf48-5e4ad11f3e51)<br/>Microsoft Social Engagement Enterprise (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Dynamics 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>Project for Project Operations (0a05d977-a21a-45b2-91ce-61c240dbafa2)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
-| DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN ENTERPRISE EDITION | DYN365_ENTERPRISE_PLAN1 | ea126fc5-a19e-42e2-a731-da9d437bffcf | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>FLOW FOR DYNAMICS 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>PROJECT ONLINE SERVICE (fe71d6c3-a2ea-4499-9778-da042bf08063) |
| Dynamics 365 Customer Service Insights Trial | DYN365_AI_SERVICE_INSIGHTS | 61e6bd70-fbdb-4deb-82ea-912842f39431 | DYN365_AI_SERVICE_INSIGHTS (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) |Dynamics 365 AI for Customer Service Trial (4ade5aa6-5959-4d2c-bf0a-f4c9e2cc00f2) | | Dynamics 365 Customer Voice Trial | FORMS_PRO | bc946dac-7877-4271-b2f7-99d2db13cd2c | DYN365_CDS_FORMS_PRO (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>FORMS_PRO (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>FLOW_FORMS_PRO (57a0746c-87b8-4405-9397-df365a9db793) | Common Data Service (363430d1-e3f7-43bc-b07b-767b6bb95e4b)<br/>Dynamics 365 Customer Voice (17efdd9f-c22c-4ad8-b48e-3b1f3ee1dc9a)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Power Automate for Dynamics 365 Customer Voice (57a0746c-87b8-4405-9397-df365a9db793) |
+| Dynamics 365 Customer Service Professional | DYN365_CUSTOMER_SERVICE_PRO | 1439b6e2-5d59-4873-8c59-d60e2a196e92 | DYN365_CUSTOMER_SERVICE_PRO (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_CUSTOMER_SERVICE_PRO (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>FLOW_CUSTOMER_SERVICE_PRO (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Dynamics 365 for Customer Service Pro (6929f657-b31b-4947-b4ce-5066c3214f54)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Customer Service Pro (c507b04c-a905-4940-ada6-918891e6d3ad)<br/>Power Automate for Customer Service Pro (0368fc9c-3721-437f-8b7d-3d0f888cdefc)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
+| Dynamics 365 Customer Voice Additional Responses | Forms_Pro_AddOn | 446a86f8-a0cb-4095-83b3-d100eb050e3d | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Forms_Pro_AddOn (90a816f6-de5f-49fd-963c-df490d73b7b5) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics 365 Customer Voice Add-on (90a816f6-de5f-49fd-963c-df490d73b7b5) |
+| Dynamics 365 Customer Voice Additional Responses | DYN365_CUSTOMER_VOICE_ADDON | 65f71586-ade3-4ce1-afc0-1b452eaf3782 | CUSTOMER_VOICE_ADDON (e6e35e2d-2e7f-4e71-bc6f-2f40ed062f5d)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics Customer Voice Add-On (e6e35e2d-2e7f-4e71-bc6f-2f40ed062f5d)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
+| Dynamics 365 Customer Voice USL | Forms_Pro_USL | e2ae107b-a571-426f-9367-6d4c8f1390ba | CDS_FORM_PRO_USL (e9830cfd-e65d-49dc-84fb-7d56b9aa2c89)<br/>Forms_Pro_USL (3ca0766a-643e-4304-af20-37f02726339b)<br/>FLOW_FORMS_PRO (57a0746c-87b8-4405-9397-df365a9db793) | Common Data Service (e9830cfd-e65d-49dc-84fb-7d56b9aa2c89)<br/>Microsoft Dynamics 365 Customer Voice USL (3ca0766a-643e-4304-af20-37f02726339b)<br/>Power Automate for Dynamics 365 Customer Voice (57a0746c-87b8-4405-9397-df365a9db793) |
| Dynamics 365 Enterprise Edition - Additional Portal (Qualified Offer) | CRM_ONLINE_PORTAL | a4bfb28e-becc-41b0-a454-ac680dc258d3 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>CRM_ONLINE_PORTAL (1d4e9cb1-708d-449c-9f71-943aa8ed1d6a) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Dynamics CRM Online - Portal Add-On (1d4e9cb1-708d-449c-9f71-943aa8ed1d6a) | | DYNAMICS 365 FOR CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_CUSTOMER_SERVICE | 749742bf-0d37-4158-a120-33567104deeb | DYN365_ENTERPRISE_CUSTOMER_SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR CUSTOMER SERVICE (99340b49-fb81-4b1e-976b-8f2ae8e9394f)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR FINANCIALS BUSINESS EDITION | DYN365_FINANCIALS_BUSINESS_SKU | cc13a803-544e-4464-b4e4-6d6169a138fa | DYN365_FINANCIALS_BUSINESS (920656a2-7dd8-4c83-97b6-a356414dbd36)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>DYNAMICS 365 FOR FINANCIALS (920656a2-7dd8-4c83-97b6-a356414dbd36) | | DYNAMICS 365 FOR SALES AND CUSTOMER SERVICE ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES_CUSTOMERSERVICE | 8edc2cf8-6438-4fa9-b6e3-aa1660c640cc | DYN365_ENTERPRISE_P1 (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |DYNAMICS 365 CUSTOMER ENGAGEMENT PLAN (d56f3deb-50d8-465a-bedb-f079817ccac1)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR SALES ENTERPRISE EDITION | DYN365_ENTERPRISE_SALES | 1e1a282c-9c54-43a2-9310-98ef728faace | DYN365_ENTERPRISE_SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>NBENTERPRISE (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC 
(e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR SALES (2da8e897-7791-486b-b08f-cc63c8129df7)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT SOCIAL ENGAGEMENT - SERVICE DISCONTINUATION (03acaee3-9492-4f40-aed4-bcb6b32981b6)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT | DYN365_SCM | f2e48cb3-9da0-42cd-8464-4a54ce198ad0 | DYN365_CDS_SUPPLYCHAINMANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYN365_REGULATORY_SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>D365_SCM (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | COMMON DATA SERVICE FOR DYNAMICS 365 SUPPLY CHAIN MANAGEMENT (b6a8b974-2956-4e14-ae81-f0384c363528)<br/>DYNAMICS 365 FOR FINANCE AND OPERATIONS, ENTERPRISE EDITION - REGULATORY SERVICE (c7657ae3-c0b0-4eed-8c1d-6a7967bd9c65)<br/>DYNAMICS 365 FOR SUPPLY CHAIN MANAGEMENT (1224eae4-0d91-474a-8a52-27ec96a63fe7)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
-| Dynamics 365 for Talent | SKU_Dynamics_365_for_HCM_Trial | 3a256e9a-15b6-4092-b0dc-82993f4debc6 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Hiring_Free_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_for_HCM_Trial (5ed38b64-c3b7-4d9f-b1cd-0de18c9c4331)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Common Data Service (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics 365 for Talent: Attract (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics 365 for Talent: Onboard (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics 365 for HCM Trial (5ed38b64-c3b7-4d9f-b1cd-0de18c9c4331)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/> PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
+| Dynamics 365 for Talent | SKU_Dynamics_365_for_HCM_Trial | 3a256e9a-15b6-4092-b0dc-82993f4debc6 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Hiring_Free_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_for_HCM_Trial (5ed38b64-c3b7-4d9f-b1cd-0de18c9c4331)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | Common Data Service (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics 365 for Talent: Attract (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics 365 for Talent: Onboard (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics 365 for HCM Trial (5ed38b64-c3b7-4d9f-b1cd-0de18c9c4331)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Dynamics 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>PowerApps for Dynamics 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| DYNAMICS 365 FOR TEAM MEMBERS ENTERPRISE EDITION | DYN365_ENTERPRISE_TEAM_MEMBERS | 8e7a3d30-d97d-43ab-837c-d7701cef83dc | DYN365_Enterprise_Talent_Attract_TeamMember (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_Enterprise_Talent_Onboard_TeamMember (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYN365_ENTERPRISE_TEAM_MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>Dynamics_365_for_Retail_Team_members (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>Dynamics_365_for_Talent_Team_members (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TEAM MEMBERS (6a54b05e-4fab-40e7-9828-428db3b336fa)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) |
+| Dynamics 365 Guides | GUIDES_USER | 0a389a77-9850-4dc4-b600-bc66fdfefc60 | DYN365_CDS_GUIDES (1315ade1-0410-450d-b8e3-8050e6da320f)<br/>GUIDES (0b2c029c-dca0-454a-a336-887285d6ef07)<br/>POWERAPPS_GUIDES (816971f4-37c5-424a-b12b-b56881f402e7) | Common Data Service (1315ade1-0410-450d-b8e3-8050e6da320f)<br/>Dynamics 365 Guides (0b2c029c-dca0-454a-a336-887285d6ef07)<br/>Power Apps for Guides (816971f4-37c5-424a-b12b-b56881f402e7) |
| Dynamics 365 Operations ΓÇô Device | Dynamics_365_for_Operations_Devices | 3bbd44ed-8a70-4c07-9088-6232ddbd5ddd | DYN365_RETAIL_DEVICE (ceb28005-d758-4df7-bb97-87a617b93d6c)<br/>Dynamics_365_for_OperationsDevices (2c9fb43e-915a-4d61-b6ca-058ece89fd66)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Retail Device (ceb28005-d758-4df7-bb97-87a617b93d6c)<br/>Dynamics 365 for Operations Devices (2c9fb43e-915a-4d61-b6ca-058ece89fd66)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Operations - Sandbox Tier 2:Standard Acceptance Testing | Dynamics_365_for_Operations_Sandbox_Tier2_SKU | e485d696-4c87-4aac-bf4a-91b2fb6f0fa7 | Dynamics_365_for_Operations_Sandbox_Tier2 (d8ba6fb2-c6b1-4f07-b7c8-5f2745e36b54)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Operations non-production multi-box instance for standard acceptance testing (Tier 2) (d8ba6fb2-c6b1-4f07-b7c8-5f2745e36b54)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Operations - Sandbox Tier 4:Standard Performance Testing | Dynamics_365_for_Operations_Sandbox_Tier4_SKU | f7ad4bca-7221-452c-bdb6-3e6089f25e06 | Dynamics_365_for_Operations_Sandbox_Tier4 (f6b5efb1-1813-426f-96d0-9b4f7438714f)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Operations, Enterprise Edition - Sandbox Tier 4:Standard Performance Testing (f6b5efb1-1813-426f-96d0-9b4f7438714f)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) | | DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS | DYN365_ENTERPRISE_P1_IW | 338148b6-1b11-4102-afb9-f92b6cdc0f8d | DYN365_ENTERPRISE_P1_IW (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | DYNAMICS 365 P1 TRIAL FOR INFORMATION WORKERS (056a5f80-b4e0-4983-a8be-7ad254a113c9)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | | Dynamics 365 Remote Assist | MICROSOFT_REMOTE_ASSIST | 7a551360-26c4-4f61-84e6-ef715673e083 | CDS_REMOTE_ASSIST (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>MICROSOFT_REMOTE_ASSIST (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) | Common Data Service for Remote Assist (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>Microsoft Remote Assist (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929) | | Dynamics 365 Remote Assist HoloLens | MICROSOFT_REMOTE_ASSIST_HOLOLENS | e48328a2-8e98-4484-a70f-a99f8ac9ec89 | CDS_REMOTE_ASSIST (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>MICROSOFT_REMOTE_ASSIST (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929) | Common Data Service for Remote Assist (0850ebb5-64ee-4d3a-a3e1-5a97213653b5)<br/>Microsoft Remote Assist (4f4c7800-298a-4e22-8867-96b17850d4dd)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929) |
+| Dynamics 365 Sales Enterprise Attach to Qualifying Dynamics 365 Base Offer | D365_SALES_ENT_ATTACH | 5b22585d-1b71-4c6b-b6ec-160b1a9c2323 | D365_SALES_ENT_ATTACH (3ae52229-572e-414f-937c-ff35a87d4f29)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | Dynamics 365 for Sales Enterprise Attach (3ae52229-572e-414f-937c-ff35a87d4f29)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 TALENT: ONBOARD | DYNAMICS_365_ONBOARDING_SKU | b56e7ccc-d5c7-421f-a23b-5c18bdbad7c0 | DYN365_CDS_DYN_APPS (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>Dynamics_365_Talent_Onboard (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) | COMMON DATA SERVICE (2d925ad8-2479-4bd8-bb76-5b80f1d48935)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (048a552e-c849-4027-b54c-4c7ead26150a)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318) |
| DYNAMICS 365 TEAM MEMBERS | DYN365_TEAM_MEMBERS | 7ac9fe77-66b7-4e5e-9e46-10eed1cff547 | DYNAMICS_365_FOR_RETAIL_TEAM_MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYN365_ENTERPRISE_TALENT_ATTRACT_TEAMMEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYN365_ENTERPRISE_TALENT_ONBOARD_TEAMMEMBER (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS_365_FOR_TALENT_TEAM_MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYN365_TEAM_MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS_365_FOR_OPERATIONS_TEAM_MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_TEAM (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_DYN_TEAM (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | DYNAMICS 365 FOR RETAIL TEAM MEMBERS (c0454a3d-32b5-4740-b090-78c32f48f0ad)<br/>DYNAMICS 365 FOR TALENT - ATTRACT EXPERIENCE TEAM MEMBER (643d201a-9884-45be-962a-06ba97062e5e)<br/>DYNAMICS 365 FOR TALENT - ONBOARD EXPERIENCE (f2f49eef-4b3f-4853-809a-a055c6103fe0)<br/>DYNAMICS 365 FOR TALENT TEAM MEMBERS (d5156635-0704-4f66-8803-93258f8b2678)<br/>DYNAMICS 365 TEAM MEMBERS (4092fdb5-8d81-41d3-be76-aaba4074530b)<br/>DYNAMICS 365 FOR OPERATIONS TEAM MEMBERS (f5aa7b45-8a36-4cd1-bc37-5d06dea98645)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (1ec58c70-f69c-486a-8109-4b87ce86e449)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS FOR DYNAMICS 365 (52e619e2-2730-439a-b0d3-d09ab7e8b705)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>SHAREPOINT (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
| DYNAMICS 365 UNF OPS PLAN ENT EDITION | Dynamics_365_for_Operations | ccba3cfe-71ef-423a-bd87-b6df3dce59a9 | DDYN365_CDS_DYN_P2 (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYN365_TALENT_ENTERPRISE (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>Dynamics_365_for_Operations (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>Dynamics_365_for_Retail (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS_365_HIRING_FREE_PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>Dynamics_365_Onboarding_Free_PLAN (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW_DYN_P2 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS_DYN_P2 (0b03f40b-c404-40c3-8651-2aceb74365fa) | COMMON DATA SERVICE (d1142cfd-872e-4e77-b6ff-d98ec5a51f66)<br/>DYNAMICS 365 FOR TALENT (65a1ebf4-6732-4f00-9dcb-3d115ffdeecd)<br/>DYNAMICS 365 FOR OPERATIONS (95d2cd7b-1007-484b-8595-5e97e63fe189)<br/>DYNAMICS 365 FOR RETAIL (a9e39199-8369-444b-89c1-5fe65ec45665)<br/>DYNAMICS 365 HIRING FREE PLAN (f815ac79-c5dd-4bcc-9b78-d97f7b817d0d)<br/>DYNAMICS 365 FOR TALENT: ONBOARD (300b8114-8555-4313-b861-0c115d820f50)<br/>FLOW FOR DYNAMICS 365 (b650d915-9886-424b-a08d-633cede56f57)<br/>POWERAPPS FOR DYNAMICS 365 (0b03f40b-c404-40c3-8651-2aceb74365fa) |
| ENTERPRISE MOBILITY + SECURITY E5 | EMSPREMIUM | b05e124f-c7cc-45a0-a6aa-8cf78c946968 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AZURE ACTIVE DIRECTORY PREMIUM P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>AZURE INFORMATION PROTECTION PREMIUM P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFT CLOUD APP SECURITY (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR IDENTITY (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| Enterprise Mobility + Security G3 GCC | EMS_GOV | c793db86-5237-494e-9b11-dcd4877c2c8c | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| Enterprise Mobility + Security G5 GCC | EMSPREMIUM_GOV | 8a180c2b-f4cf-4d44-897c-3d32acc4a60b | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
+| Exchange Enterprise CAL Services (EOP, DLP) | EOP_ENTERPRISE_PREMIUM | e8ecdf70-47a8-4d39-9d15-093624b7f640 | EOP_ENTERPRISE_PREMIUM (75badc48-628e-4446-8460-41344d73abd6)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) | Exchange Enterprise CAL Services (EOP, DLP) (75badc48-628e-4446-8460-41344d73abd6)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90) |
| Exchange Online (Plan 1) | EXCHANGESTANDARD | 4b9405b0-7788-4568-add1-99614e613b69 | EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c) | Exchange Online (Plan 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>To-Do (Plan 1) (5e62787c-c316-451f-b873-1d05acd4d12c) | | EXCHANGE ONLINE (PLAN 2) | EXCHANGEENTERPRISE | 19ec0d23-8335-4cbd-94ac-6050e30712fa | EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0) | EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0) | | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE | EXCHANGEARCHIVE_ADDON | ee02fd1b-340e-4a4b-b355-4a514e4c8943 | EXCHANGE_S_ARCHIVE_ADDON (176a09a6-7ec5-4039-ac02-b2791c6ba793) | EXCHANGE ONLINE ARCHIVING FOR EXCHANGE ONLINE (176a09a6-7ec5-4039-ac02-b2791c6ba793) |
| MICROSOFT 365 DOMESTIC CALLING PLAN (120 Minutes) | MCOPSTN_5 | 11dee6af-eca8-419f-8061-6864517c1875 | MCOPSTN5 (54a152dc-90de-4996-93d2-bc47e670fc06) | MICROSOFT 365 DOMESTIC CALLING PLAN (120 min) (54a152dc-90de-4996-93d2-bc47e670fc06) | | Microsoft 365 Domestic Calling Plan for GCC | MCOPSTN_1_GOV | 923f58ab-fca1-46a1-92f9-89fda21238a8 | MCOPSTN1_GOV (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8) | Domestic Calling for Government (3c8a8792-7866-409b-bb61-1b20ace0368b)<br/>Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8) | | MICROSOFT 365 E3 | SPE_E3 | 05e9a617-0261-4cee-bb44-138d3ef5d965 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | AZURE ACTIVE DIRECTORY PREMIUM P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>CLOUD APP SECURITY DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>TO-DO (PLAN 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW FOR OFFICE 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>MICROSOFT FORMS (PLAN E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>MICROSOFT AZURE MULTI-FACTOR AUTHENTICATION (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>POWERAPPS FOR OFFICE 365(c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT AZURE ACTIVE DIRECTORY RIGHTS (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>AZURE INFORMATION PROTECTION PREMIUM P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
-| Microsoft 365 E5 | SPE_E5 | 06ebc4ee-1bb5-47dd-8120-11324bc54e06 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard 
(5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+|Microsoft 365 E3 - Unattended License | SPE_E3_RPA1 | c2ac2ee4-9bb1-47e4-8541-d689c7e83371 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION_unattended (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>M365_LIGHTHOUSE_CUSTOMER_PLAN1 (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>M365_LIGHTHOUSE_PARTNER_PLAN1 (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>FORMS_PLAN_E3 (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>UNIVERSAL_PRINT_01(795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/> WHITEBOARD_PLAN2 (94a54592-cd8b-425e-87c6-97868b000b91)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for Enterprise (Unattended) (8d77e2d9-9e28-4450-8431-0def64078fc5)<br/>Microsoft 365 Lighthouse (Plan 1) (6f23d6a9-adbf-481c-8538-b4c095654487)<br/>Microsoft 365 Lighthouse (Plan 2) (d55411c9-cfff-40a9-87c7-240f14df7da5)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan E3) (2789c901-c14e-48ab-a76a-be334d9d793a)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft 
Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/> To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 E3_USGOV_DOD | SPE_E3_USGOV_DOD | d61d61cc-f992-433f-a577-5bd016037eeb | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_DOD (fd500458-c24c-478e-856c-a6067a8376cd)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams for DOD (AR) (fd500458-c24c-478e-856c-a6067a8376cd)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | | Microsoft 365 E3_USGOV_GCCHIGH | SPE_E3_USGOV_GCCHIGH | ca9d1dd9-dfe9-4fef-b97c-9bc1ea3c3658 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>ADALLOM_S_DISCOVERY (932ad362-64a8-4783-9106-97849a1a30b9)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS_AR_GCCHIGH (9953b155-8aef-4c56-92f3-72b0487fce41)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Information Protection Premium P1(6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Cloud App Security Discovery (932ad362-64a8-4783-9106-97849a1a30b9)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/> Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/> Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/> Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/> Microsoft Teams for GCCHigh (AR) (9953b155-8aef-4c56-92f3-72b0487fce41)<br/> Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/> Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/> SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) 
(0feaeb32-d00e-4d66-bd5a-43b5b83db82c) |
+| Microsoft 365 E5 | SPE_E5 | 06ebc4ee-1bb5-47dd-8120-11324bc54e06 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>RMS_S_PREMIUM (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MFA_PREMIUM (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>FORMS_PLAN_E5 (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Azure Information Protection Premium P1 (6c57d4b6-3b23-47a5-9bc9-69f17b4947b3)<br/>Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard 
(5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Azure Multi-Factor Authentication (8a256a2b-b617-496d-b51b-e76466e88db0)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Forms (Plan E5) (e212cbc7-0961-4c40-9825-01117710dcb1)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office Online (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SharePoint Online (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Windows 10 Enterprise (Original) (21b439ba-a0ca-424f-a6cc-52f954a5b111)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft 365 E5 Compliance | INFORMATION_PROTECTION_COMPLIANCE | 184efa21-98c3-4e5d-95ab-d07053a96e67 | RMS_S_PREMIUM2 (5689bec4-755d-4753-8b61-40975025187c)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>PAM_ENTERPRISE (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f) | Azure Information Protection Premium P2 (5689bec4-755d-4753-8b61-40975025187c)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f) | | Microsoft 365 E5 Security | IDENTITY_THREAT_PROTECTION | 26124093-3d78-432b-b5dc-48bf992543d5 | AAD_PREMIUM_P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | Azure Active Directory Premium P2 (eec0eb4f-6444-4f95-aba0-50c24d67f998)<br/>Azure Advanced Threat Protection (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | | Microsoft 365 E5 Security for EMS E5 | IDENTITY_THREAT_PROTECTION_FOR_EMS_E5 | 44ac31e7-2999-4304-ad94-c948886741d4 | WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>SAFEDOCS (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) | MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 SafeDocs (bf6f5520-59e3-4f82-974b-7dbbc4fd27c7) |
| MICROSOFT BUSINESS CENTER | MICROSOFT_BUSINESS_CENTER | 726a0894-2c77-4d65-99da-9775ef05aad1 | MICROSOFT_BUSINESS_CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) | MICROSOFT BUSINESS CENTER (cca845f9-fd51-4df6-b563-976a37c56ce0) | | Microsoft Cloud App Security | ADALLOM_STANDALONE | df845ce7-05f9-4894-b5f2-11bbfbcfd2b6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_STANDALONE (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Cloud App Security (2e2ddb96-6af9-4b1d-a3f0-d6ecfd22edb2) | | MICROSOFT DEFENDER FOR ENDPOINT | WIN_DEF_ATP | 111046dd-295b-4d6d-9724-d52ac90bd1f2 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT DEFENDER FOR ENDPOINT (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
+| Microsoft Defender for Endpoint Server | MDATP_Server | 509e8ab6-0274-4cda-bcbd-bd164fd562c4 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef) |
| MICROSOFT DYNAMICS CRM ONLINE BASIC | CRMPLAN2 | 906af65a-2970-46d5-9b58-4e9aa50f0657 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>CRMPLAN2 (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS CRM ONLINE BASIC (bf36ca64-95c6-4918-9275-eb9f4ce2c04f)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| Microsoft Defender for Identity | ATA | 98defdf7-f6c1-44f5-a1f6-943b6764e7a5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ATA (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>ADALLOM_FOR_AATP (61d18b02-6889-479f-8f36-56e6e0fe5792) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender for Identity (14ab5db5-e6c4-4b20-b4bc-13e36fd2227f)<br/>SecOps Investigation for MDI (61d18b02-6889-479f-8f36-56e6e0fe5792) |
| Microsoft Defender for Office 365 (Plan 2) GCC | THREAT_INTELLIGENCE_GOV | 56a59ffb-9df1-421b-9e61-8b568583474d | MTP (bf28f719-7844-4079-9c78-c1307898e192)<br/>ATP_ENTERPRISE_GOV (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>THREAT_INTELLIGENCE_GOV (900018f1-0cdb-4ecb-94d4-90281760fdc6) | Microsoft 365 Defender (bf28f719-7844-4079-9c78-c1307898e192)<br/>Microsoft Defender for Office 365 (Plan 1) for Government (493ff600-6a2b-4db6-ad37-a7d4eb214516)<br/>Microsoft Defender for Office 365 (Plan 2) for Government (900018f1-0cdb-4ecb-94d4-90281760fdc6) |
| MICROSOFT DYNAMICS CRM ONLINE | CRMSTANDARD | d17b27af-3f49-4822-99f9-56a661538792 | CRMSTANDARD (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW_DYN_APPS (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MDM_SALES_COLLABORATION (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>NBPROFESSIONALFORCRM (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS_DYN_APPS (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) | MICROSOFT DYNAMICS CRM ONLINE PROFESSIONAL (f9646fb2-e3b2-4309-95de-dc4833737456)<br/>FLOW FOR DYNAMICS 365 (7e6d7d78-73de-46ba-83b1-6d25117334ba)<br/>MICROSOFT DYNAMICS MARKETING SALES COLLABORATION - ELIGIBILITY CRITERIA APPLY (3413916e-ee66-4071-be30-6f94d4adfeda)<br/>MICROSOFT SOCIAL ENGAGEMENT PROFESSIONAL - ELIGIBILITY CRITERIA APPLY (3e58e97c-9abe-ebab-cd5f-d543d1529634)<br/>POWERAPPS FOR DYNAMICS 365 (874fc546-6efe-4d22-90b8-5c4e7aa59f4b) |
| MS IMAGINE ACADEMY | IT_ACADEMY_AD | ba9a34de-4489-469d-879c-0f0f145321cd | IT_ACADEMY_AD (d736def0-1fde-43f0-a5be-e3f8b2de6e41) | MS IMAGINE ACADEMY (d736def0-1fde-43f0-a5be-e3f8b2de6e41) |
+| Microsoft Intune Device | INTUNE_A_D | 2b317a4a-77a6-4188-9437-b68a77b4e2c6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
| MICROSOFT INTUNE DEVICE FOR GOVERNMENT | INTUNE_A_D_GOV | 2c21e77a-e0d6-4570-b38a-7ff2dc17d2ca | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | | Microsoft Power Apps Plan 2 Trial | POWERAPPS_VIRAL | dcb1a3ae-b33f-4487-846a-a640262fadf4 | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170)<br/>FLOW_P2_VIRAL_REAL (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>POWERAPPS_P2_VIRAL (d5368ca3-357e-4acb-9c21-8495fb025d1f) | Common Data Service ΓÇô VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170)<br/>Flow P2 Viral (d20bfa21-e9ae-43fc-93c2-20783f0840c3)<br/>PowerApps Trial (d5368ca3-357e-4acb-9c21-8495fb025d1f) |
+| MICROSOFT POWER AUTOMATE PLAN 2 | FLOW_P2 | 4755df59-3f73-41ab-a249-596ad72b5504 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) |
| MICROSOFT INTUNE SMB | INTUNE_SMB | e6025b08-2fa5-4313-bd0a-7e5ffca32958 | AAD_SMB (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>INTUNE_SMBIZ (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) | AZURE ACTIVE DIRECTORY (de377cbc-0019-4ec2-b77c-3f223947e102)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT INTUNE (8e9ff0ff-aa7a-4b20-83c1-2f636b600ac2)<br/>MICROSOFT INTUNE (c1ec4a95-1f05-45b3-a911-aa3fa01094f5) |
-| Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Plan 2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d)<br/>PowerApps Plan 2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81) |
-| Microsoft Power Automate Free | FLOW_FREE | f30db892-07e9-47e9-837c-80727f46fd3d | DYN365_CDS_VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_P2_VIRAL (50e68c76-46c6-4674-81f9-75456511b170) | Common Data Service ΓÇô VIRAL (17ab22cd-a0b3-4536-910a-cb6eb12696c0)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow Free (50e68c76-46c6-4674-81f9-75456511b170) |
+| Microsoft Power Apps Plan 2 (Qualified Offer) | POWERFLOW_P2 | ddfae3e3-fcb2-4174-8ebd-3023cb213c8b | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_P2 (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>FLOW_P2 (56be9436-e4b2-446c-bb7f-cc15d16cca4d) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps (Plan 2) (00527d7f-d5bc-4c2a-8d1e-6c0de2410c81)<br/>Power Automate (Plan 2) (56be9436-e4b2-446c-bb7f-cc15d16cca4d) |
| MICROSOFT STREAM | STREAM | 1f2f344a-700d-42c9-9427-5cea1d5d7ba6 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFTSTREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>MICROSOFT STREAM (acffdce6-c30f-4dc2-81c0-372e33c515ec) | | Microsoft Stream Plan 2 | STREAM_P2 | ec156933-b85b-4c50-84ec-c9e5603709ef | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_P2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Plan 2 (d3a458d0-f10d-48c2-9e44-86f3f684029e) | |Microsoft Stream Storage Add-On (500 GB) | STREAM_STORAGE | 9bd7c846-9556-4453-a542-191d527209e8 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>STREAM_STORAGE (83bced11-77ce-4071-95bd-240133796768) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Stream Storage Add-On (83bced11-77ce-4071-95bd-240133796768) |
| MICROSOFT TEAMS EXPLORATORY | TEAMS_EXPLORATORY | 710779e8-3d4a-4c88-adb9-386c958d1fdf | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>DESKLESS (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | COMMON DATA SERVICE FOR TEAMS_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>INSIGHTS BY MYANALYTICS (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MICROSOFT PLANNER (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MICROSOFT TEAMS (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MICROSOFT TEAMS (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>MOBILE DEVICE MANAGEMENT FOR OFFICE 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>OFFICE FOR THE WEB (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>OFFICE MOBILE APPS FOR OFFICE 365 (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWER APPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>POWER AUTOMATE FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER VIRTUAL AGENTS FOR OFFICE 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TO-DO (PLAN 1) (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>WHITEBOARD (PLAN 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
| Microsoft Teams Rooms Standard | MEETING_ROOM | 6070a4c8-34c6-4937-8dfb-39bbc6397a60 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Microsoft Threat Experts - Experts on Demand | EXPERTS_ON_DEMAND | 9fa2f157-c8e4-4351-a3f2-ffa506da1406 | EXPERTS_ON_DEMAND (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) | Microsoft Threat Experts - Experts on Demand (b83a66d4-f05f-414d-ac0f-ea1c5239c42b) |
+| Multi-Geo Capabilities in Office 365 | OFFICE365_MULTIGEO | 84951599-62b7-46f3-9c9d-30551b2ad607 | EXCHANGEONLINE_MULTIGEO (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SHAREPOINTONLINE_MULTIGEO (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>TEAMSMULTIGEO (41eda15d-6b52-453b-906f-bc4a5b25a26b) | Exchange Online Multi-Geo (897d51f1-2cfa-4848-9b30-469149f5e68e)<br/>SharePoint Multi-Geo (735c1d98-dd3f-4818-b4ed-c8052e18e62d)<br/>Teams Multi-Geo (41eda15d-6b52-453b-906f-bc4a5b25a26b) |
| Teams Rooms Premium | MTR_PREM | 4fb214cb-a430-4a91-9c91-4976763aa78f | MMR_P1 (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Meeting Room Managed Services (bdaa59a3-74fd-4137-981a-31d4f84eb8a0)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Office 365 A3 for faculty | ENTERPRISEPACKPLUS_FACULTY | e578b273-6db4-4691-bba0-8d691f4da603 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>DYN365_CDS_O365_P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>CDS_O365_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>MYANALYTICS_P2 (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>OFFICE_FORMS_PLAN_2 (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>KAIZALA_O365_P3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E3 (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>POWERAPPS_O365_P2 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>FLOW_O365_P2 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>POWER_VIRTUAL_AGENTS_O365_P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>PROJECT_O365_P2 (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_2 (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>WHITEBOARD_PLAN2(94a54592-cd8b-425e-87c6-97868b000b91)<br/> YAMMER_EDU(2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Common Data Service - O365 P2 (4ff01e01-1ba7-4d71-8cf8-ce96c3bbcf14)<br/>Common Data Service for Teams_P2 (95b76021-6a53-4741-ab8b-1d1f3d66a95a)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Information Protection for Office 365 ΓÇô Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Insights by MyAnalytics (33c4f319-9bdd-48d6-9c4d-410b750a4a5a)<br/>Microsoft 365 Apps for enterprise (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Microsoft Azure Active Directory Rights (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Microsoft Bookings 
(199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Forms (Plan 2) (9b5de886-f035-4ff2-b3d8-c9127bea3620)<br/>Microsoft Kaizala Pro Plan 3 (aebd3021-9f8f-4bf8-bbe3-0ed2f4f047a1)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E3 SKU (9e700747-8b1d-45e5-ab8d-ef187ceec156)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power Apps for Office 365 (c68f8d98-5534-41c8-bf36-22fa496fa792)<br/>Power Automate for Office 365 (76846ad7-7776-4c40-a281-a386362dd1b9)<br/>Power Virtual Agents for Office 365 P2 (041fe683-03e4-45b6-b1af-c0cdc516daee)<br/>Project for Office (Plan E3) (31b4e2fc-4cd6-4e7d-9c1b-41407303bd66)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 2) (c87f142c-d1e9-4363-8630-aaea9c4d9ae5)<br/>Whiteboard (Plan 2) (94a54592-cd8b-425e-87c6-97868b000b91)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) | | Office 365 A5 for faculty| ENTERPRISEPREMIUM_FACULTY | a4585165-0533-458a-97e3-c400570268c4 | AAD_BASIC_EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>RMS_S_ENTERPRISE (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>LOCKBOX_ENTERPRISE (9f431833-0334-42de-a7dc-70aa40db46db)<br/>EducationAnalyticsP1 (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>EXCHANGE_S_ENTERPRISE (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>FLOW_O365_P3 (07699545-9485-468e-95b6-2fca3738be01)<br/>INFORMATION_BARRIERS (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>MIP_S_CLP2 (efb0351d-3b08-4503-993d-383af8de41e3)<br/>MIP_S_CLP1 (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>M365_ADVANCED_AUDITING (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>MICROSOFTBOOKINGS (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>COMMUNICATIONS_COMPLIANCE (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>COMMUNICATIONS_DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>CUSTOMER_KEY (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>DATA_INVESTIGATIONS (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>OFFICE_FORMS_PLAN_3 (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>INFO_GOVERNANCE (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>KAIZALA_STANDALONE (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>EXCHANGE_ANALYTICS (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>RECORDS_MANAGEMENT (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>MICROSOFT_SEARCH (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>STREAM_O365_E5 (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>INTUNE_O365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>EQUIVIO_ANALYTICS (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>THREAT_INTELLIGENCE (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>PAM_ENTERPRISE 
(b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>OFFICESUBSCRIPTION (43de0ff5-c92c-492b-9116-175376d08c38)<br/>SHAREPOINTWAC_EDU (e03c7e47-402c-463c-ab25-949079bedb21)<br/>BI_AZURE_P2 (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>POWERAPPS_O365_P3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>PREMIUM_ENCRYPTION (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>SCHOOL_DATA_SYNC_P2 (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SHAREPOINTENTERPRISE_EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>BPOS_S_TODO_3 (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af)<br/>YAMMER_EDU (2078e8df-cff6-4290-98cb-5408261a760a) | Azure Active Directory Basic for EDU (1d0f309f-fdf9-4b2a-9ae7-9c48b91f1426)<br/>Azure Rights Management (bea4c11e-220a-4e6d-8eb8-8ea15d019f90)<br/>Customer Lockbox (9f431833-0334-42de-a7dc-70aa40db46db)<br/>Education Analytics (a9b86446-fa4e-498f-a92a-41b447e03337)<br/>Exchange Online (Plan 2) (efb87545-963c-4e0d-99df-69c6916d9eb0)<br/>Flow for Office 365 (07699545-9485-468e-95b6-2fca3738be01)<br/>Information Barriers (c4801e8a-cb58-4c35-aca6-f2dcc106f287)<br/>Information Protection for Office 365 - Premium (efb0351d-3b08-4503-993d-383af8de41e3)<br/>Information Protection for Office 365 - Standard (5136a095-5cf0-4aff-bec3-e84448b38ea5)<br/>Microsoft 365 Advanced Auditing (2f442157-a11c-46b9-ae5b-6e39ff4e5849)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Bookings (199a5c09-e0ca-4e37-8f7c-b05d533e1ea2)<br/>Microsoft Communications Compliance (41fcdd7d-4733-4863-9cf4-c65b83ce2df4)<br/>Microsoft Communications DLP (6dc145d6-95dd-4191-b9c3-185575ee6f6b)<br/>Microsoft Customer Key (6db1f1db-2b46-403f-be40-e39395f08dbb)<br/>Microsoft Data Investigations (46129a58-a698-46f0-aa5b-17f6586297d9)<br/>Microsoft Forms (Plan 3) (96c1e14a-ef43-418d-b115-9636cdaa8eed)<br/>Microsoft Information Governance (e26c2fcc-ab91-4a61-b35c-03cdc8dddf66)<br/>Microsoft Kaizala (0898bdbb-73b0-471a-81e5-20f1fe4dd66e)<br/>Microsoft MyAnalytics (Full) (34c0d7a0-a70f-4668-9238-47f9fc208882)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Records Management (65cc641f-cccd-4643-97e0-a17e3045e541)<br/>Microsoft Search (94065c59-bc8e-4e8b-89e5-5138d471eaff)<br/>Microsoft StaffHub (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>Microsoft Stream for O365 E5 SKU (6c6042f5-6f01-4d67-b8c1-eb99d36eed3e)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Mobile Device Management for Office 365 (882e1d05-acd1-4ccb-8708-6ee03664b117)<br/>Office 365 Advanced eDiscovery (4de31727-a228-4ec3-a5bf-8e45b5ca48cc)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b)<br/>Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939)<br/>Microsoft Defender for Office 365 (Plan 2) (8e0c0a52-6a6c-4d40-8370-dd62790dcd70)<br/>Office 365 Privileged Access Management (b1188c4c-1b36-4018-b48b-ee07604f6feb)<br/>Office 365 ProPlus (43de0ff5-c92c-492b-9116-175376d08c38)<br/>Office for the web (Education) (e03c7e47-402c-463c-ab25-949079bedb21)<br/>Power BI Pro (70d33638-9c74-4d01-bfd3-562de28bd4ba)<br/>PowerApps for Office 365 Plan 3 (9c0dab89-a30c-4117-86e7-97bda240acd2)<br/>Premium Encryption in Office 365 (617b097b-4b93-4ede-83de-5f075bb5fb2f)<br/>School Data Sync (Plan 2) (500b6a2a-7a50-4f40-b5f9-160e5b8c2f48)<br/>SharePoint 
Plan 2 for EDU (63038b2c-28d0-45f6-bc36-33062963b498)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>To-Do (Plan 3) (3fb82609-8c27-4f7b-bd51-30634711ee67)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af)<br/>Yammer for Academic (2078e8df-cff6-4290-98cb-5408261a760a) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Defender for Office 365 (Plan 1) | ATP_ENTERPRISE | 4ef96642-f096-40de-a3e9-d83fb2f90211 | ATP_ENTERPRISE (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | Microsoft Defender for Office 365 (Plan 1) (f20fedf3-f3c3-43c3-8267-2bfdd51c0939) | | Office 365 Extra File Storage for GCC | SHAREPOINTSTORAGE_GOV | e5788282-6381-469f-84f0-3d7d4021d34d | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTSTORAGE_GOV (e5bb877f-6ac9-4461-9e43-ca581543ab16) | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTSTORAGE_GOV (e5bb877f-6ac9-4461-9e43-ca581543ab16) | | Microsoft Teams Commercial Cloud | TEAMS_COMMERCIAL_TRIAL | 29a2f828-8f39-4837-b8ff-c957e86abe3c | CDS_O365_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>MCO_TEAMS_IW (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>POWER_VIRTUAL_AGENTS_O365_P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SHAREPOINTDESKLESS (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>WHITEBOARD_PLAN1 (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | Common Data Service for Teams_P1 (bed136c6-b799-4462-824d-fc045d3a9d25)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Forms (Plan E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>Microsoft Planner (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>Microsoft Stream for O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>Microsoft Teams (42a3ec34-28ba-46b6-992f-db53a675ac5b)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Power Apps for Office 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>Power Automate for Office 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>Power Virtual Agents for Office 365 P1 (0683001c-0492-4d59-9515-d9a6426b5813)<br/>SharePoint Kiosk (902b47e5-dcb2-4fdc-858b-c63a90a2bdb9)<br/>Sway (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>Whiteboard (Plan 1) (b8afc642-032e-4de5-8c0a-507a7bba7e5d)<br/>Yammer Enterprise (7547a3fe-08ee-4ccb-b430-5077c5041653) |
+| Office 365 Cloud App Security | ADALLOM_O365 | 84d5f90f-cd0d-4864-b90b-1c7ba63b4808 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ADALLOM_S_O365 (8c098270-9dd4-4350-9b30-ba4703f3b36b) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Office 365 Advanced Security Management (8c098270-9dd4-4350-9b30-ba4703f3b36b) |
| Office 365 Extra File Storage | SHAREPOINTSTORAGE | 99049c9c-6011-4908-bf17-15f496e6519d | SHAREPOINTSTORAGE (be5a7ed5-c598-4fcd-a061-5e6724c68a58) | Office 365 Extra File Storage (be5a7ed5-c598-4fcd-a061-5e6724c68a58) | | OFFICE 365 E1 | STANDARDPACK | 18181a46-0d4e-45cd-891e-60aabd171b4e | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>OFFICEMOBILE_SUBSCRIPTION (c63d4d19-e8cb-460e-b37c-4d6c34603745)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653)) | | OFFICE 365 E2 | STANDARDWOFFPACK | 6634e0ce-1a9f-428c-a498-f84ec7b8aa2e | BPOS_S_TODO_1(5e62787c-c316-451f-b873-1d05acd4d12c)<br/>Deskless (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE_S_STANDARD (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW_O365_P1 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>POWERAPPS_O365_P1 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>PROJECTWORKMANAGEMENT (b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>STREAM_O365_E1 (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) | BPOS_S_TODO_1 (5e62787c-c316-451f-b873-1d05acd4d12c)<br/>MICROSOFT STAFFHUB (8c7d2df8-86f0-4902-b2ed-a0458298f3b3)<br/>EXCHANGE ONLINE (PLAN 1) (9aaf7827-d63c-4b61-89c3-182f06f82e5c)<br/>FLOW FOR OFFICE 365 (0f9b09cb-62d1-4ff4-9129-43f4996f83f4)<br/>MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SKYPE FOR BUSINESS ONLINE (PLAN 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>POWERAPPS FOR OFFICE 365 (92f7a6f3-b89b-4bbd-8c30-809e6da5ad1c)<br/>MICROSOFT PLANNER(b737dad2-2f6c-4c65-90e3-ca563267e8b9)<br/>SHAREPOINTSTANDARD 
(c7699d2e-19aa-44de-8edf-1736da088ca1)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>MICROSOFT STREAM FOR O365 E1 SKU (743dd19e-1ce3-4c62-a3ad-49ba8f63a2f6)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>YAMMER_ENTERPRISE (7547a3fe-08ee-4ccb-b430-5077c5041653) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| ONEDRIVE FOR BUSINESS (PLAN 2) | WACONEDRIVEENTERPRISE | ed01faf2-1d88-4947-ae91-45ca18703a96 | ONEDRIVEENTERPRISE (afcafa6a-d966-4462-918c-ec0b4e0fe642)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | ONEDRIVEENTERPRISE (afcafa6a-d966-4462-918c-ec0b4e0fe642)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014) | | POWERAPPS AND LOGIC FLOWS | POWERAPPS_INDIVIDUAL_USER | 87bbbc60-4754-4998-8c88-227dca264858 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERFLOWSFREE (0b4346bb-8dc3-4079-9dfc-513696f56039)<br/>POWERVIDEOSFREE (2c4ec2dc-c62d-4167-a966-52a3e6374015)<br/>POWERAPPSFREE (e61a2945-1d4e-4523-b6e7-30ba39d20f32) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>LOGIC FLOWS (0b4346bb-8dc3-4079-9dfc-513696f56039)<br/>MICROSOFT POWER VIDEOS BASIC (2c4ec2dc-c62d-4167-a966-52a3e6374015)<br/>MICROSOFT POWERAPPS (e61a2945-1d4e-4523-b6e7-30ba39d20f32) | | PowerApps per app baseline access | POWERAPPS_PER_APP_IW | bf666882-9c9b-4b2e-aa2f-4789b0a52ba2 | CDS_PER_APP_IWTRIAL (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow_Per_APP_IWTRIAL (dd14867e-8d31-4779-a595-304405f5ad39)<br/>POWERAPPS_PER_APP_IWTRIAL (35122886-cef5-44a3-ab36-97134eabd9ba) | CDS Per app baseline access (94a669d1-84d5-4e54-8462-53b0ae2c8be5)<br/>Flow per app baseline access (dd14867e-8d31-4779-a595-304405f5ad39)<br/>PowerApps per app baseline access (35122886-cef5-44a3-ab36-97134eabd9ba) |
+| Power Apps per app plan | POWERAPPS_PER_APP | a8ad7d2b-b8cf-49d6-b25a-69094a0be206 | CDS_PER_APP (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_APP (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Flow_Per_APP (c539fa36-a64e-479a-82e1-e40ff2aa83ee) | CDS PowerApps per app plan (9f2f00ad-21ae-4ceb-994b-d8bc7be90999)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per App Plan (b4f657ff-d83e-4053-909d-baa2b595ec97)<br/>Power Automate for Power Apps per App Plan (c539fa36-a64e-479a-82e1-e40ff2aa83ee) |
+| Power Apps per user plan | POWERAPPS_PER_USER | b30411f5-fea1-4a59-9ad9-3db7c7ead579 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWERAPPS_PER_USER (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Flow_PowerApps_PerUser (dc789ed8-0170-4b65-a415-eb77d5bb350a) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Apps per User Plan (ea2cf03b-ac60-46ae-9c1d-eeaeb63cec86)<br/>Power Automate for Power Apps per User Plan (dc789ed8-0170-4b65-a415-eb77d5bb350a) |
| Power Automate per flow plan | FLOW_BUSINESS_PROCESS | b3a42176-0a8c-4c3f-ba4e-f2b37fe5be6b | CDS_Flow_Business_Process (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_BUSINESS_PROCESS (7e017b61-a6e0-4bdc-861a-932846591f6e) | Common data service for Flow per business process plan (c84e52ae-1906-4947-ac4d-6fb3e5bf7c2e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per business process plan (7e017b61-a6e0-4bdc-861a-932846591f6e) | | Power Automate per user plan | FLOW_PER_USER | 4a51bf65-409c-4a91-b845-1121b571cc9d | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_PER_USER (c5002c70-f725-4367-b409-f0eff4fee6c0) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per user plan (c5002c70-f725-4367-b409-f0eff4fee6c0) | | Power Automate per user plan dept | FLOW_PER_USER_DEPT | d80a4c5d-8f05-4b64-9926-6574b9e6aee4 | DYN365_CDS_P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/> FLOW_PER_USER (c5002c70-f725-4367-b409-f0eff4fee6c0) | Common Data Service - P2 (6ea4c1ef-c259-46df-bce2-943342cd3cb2)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow per user plan (c5002c70-f725-4367-b409-f0eff4fee6c0) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| PROJECT ONLINE PREMIUM WITHOUT PROJECT CLIENT | PROJECTONLINE_PLAN_1 | 2db84718-652c-47a7-860c-f10d8abbdae3 | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | PROJECT ONLINE WITH PROJECT FOR OFFICE 365 | PROJECTONLINE_PLAN_2 | f82a60b8-1ee3-4cfb-a4fe-1c6a53c2656c | FORMS_PLAN_E1 (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | MICROSOFT FORMS (PLAN E1) (159f4cd6-e380-449f-a816-af1a9ef76344)<br/>PROJECT ONLINE DESKTOP CLIENT (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>SHAREPOINT ONLINE (PLAN 2) (5dbe027f-2339-4123-9542-606e4d348a72)<br/>OFFICE ONLINE (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>SWAY (a23b959c-7ce8-4e57-9140-b90eb88a9e97) | | PROJECT PLAN 1 | PROJECT_P1 | beb6439c-caad-48d3-bf46-0c82871e12be | DYN365_CDS_FOR_PROJECT_P1 (a6f677b3-62a6-4644-93e7-2a85d240845e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power_Automate_For_Project_P1 (00283e6b-2bd8-440f-a2d5-87358e4c89a1)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>PROJECT_P1 (4a12c688-56c6-461a-87b1-30d6f32136f9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1) | COMMON DATA SERVICE FOR PROJECT P1 (a6f677b3-62a6-4644-93e7-2a85d240845e)<br/>EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>POWER AUTOMATE FOR PROJECT P1 (00283e6b-2bd8-440f-a2d5-87358e4c89a1)<br/>PROJECT ONLINE ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>PROJECT P1 (4a12c688-56c6-461a-87b1-30d6f32136f9)<br/>SHAREPOINT (c7699d2e-19aa-44de-8edf-1736da088ca1) |
+| Project Plan 1 (for Department) | PROJECT_PLAN1_DEPT | 84cd610f-a3f8-4beb-84ab-d9d2c902c6c9 | DYN365_CDS_FOR_PROJECT_P1 (a6f677b3-62a6-4644-93e7-2a85d240845e)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power_Automate_For_Project_P1 (00283e6b-2bd8-440f-a2d5-87358e4c89a1)<br/>PROJECT_ESSENTIALS (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>PROJECT_P1 (4a12c688-56c6-461a-87b1-30d6f32136f9)<br/>SHAREPOINTSTANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1) | Common Data Service for Project P1 (a6f677b3-62a6-4644-93e7-2a85d240845e)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Power Automate for Project P1 (00283e6b-2bd8-440f-a2d5-87358e4c89a1)<br/>Project Online Essentials (1259157c-8581-4875-bca7-2ffb18c51bda)<br/>Project P1 (4a12c688-56c6-461a-87b1-30d6f32136f9)<br/>SHAREPOINT STANDARD (c7699d2e-19aa-44de-8edf-1736da088ca1) |
| Project Plan 3 | PROJECTPROFESSIONAL | 53818b1b-4a27-454b-8896-0dba576410e6 | DYN365_CDS_PROJECT (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_FOR_PROJECT (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>PROJECT_PROFESSIONAL (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Common Data Service for Project (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Project (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>Office for the web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>Project P3 (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
+| Project Plan 3 (for Department) | PROJECT_PLAN3_DEPT | 46102f44-d912-47e7-b0ca-1bd7b70ada3b | DYN365_CDS_PROJECT (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>FLOW_FOR_PROJECT (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>SHAREPOINTWAC (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>PROJECT_CLIENT_SUBSCRIPTION (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>SHAREPOINT_PROJECT (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>PROJECT_PROFESSIONAL (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SHAREPOINTENTERPRISE (5dbe027f-2339-4123-9542-606e4d348a72) | Common Data Service for Project (50554c47-71d9-49fd-bc54-42a2765c555c)<br/>Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Flow for Project (fa200448-008c-4acb-abd4-ea106ed2199d)<br/>Office for the Web (e95bec33-7c88-4a70-8e19-b10bd9d0c014)<br/>Project Online Desktop Client (fafd7243-e5c1-4a3a-9e40-495efcb1d3c3)<br/>Project Online Service (fe71d6c3-a2ea-4499-9778-da042bf08063)<br/>Project P3 (818523f5-016b-4355-9be8-ed6944946ea7)<br/>SharePoint (Plan 2) (5dbe027f-2339-4123-9542-606e4d348a72) |
| Project Plan 3 for GCC | PROJECTPROFESSIONAL_GOV | 074c6829-b3a0-430a-ba3d-aca365e57065 | SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_CLIENT_SUBSCRIPTION_GOV (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>SHAREPOINT_PROJECT_GOV (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Office for the web (Government) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Desktop Client for Government(45c6831b- ad74-4c7f-bd03-7c2b3fa39067)<br/>Project Online Service for Government (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) | | Project Plan 5 for GCC | PROJECTPREMIUM_GOV | f2230877-72be-4fec-b1ba-7156d6f75bd6 | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>SHAREPOINTWAC_GOV (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>PROJECT_CLIENT_SUBSCRIPTION_GOV (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>SHAREPOINT_PROJECT_GOV (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SHAREPOINTENTERPRISE_GOV (153f85dd-d912-4762-af6c-d6e0fb4f6692) | Exchange Foundation for Government (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>Office for the web (Government) (8f9f0f3b-ca90-406c-a842-95579171f8ec)<br/>Project Online Desktop Client for Government (45c6831b-ad74-4c7f-bd03-7c2b3fa39067)<br/>Project Online Service for Government (e57afa78-1f19-4542-ba13-b32cd4d8f472)<br/>SharePoint Plan 2G (153f85dd-d912-4762-af6c-d6e0fb4f6692) | | Rights Management Adhoc | RIGHTSMANAGEMENT_ADHOC | 8c4ce438-32a7-4ac5-91a6-e22ae08d9c8b | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>RMS_S_ADHOC (7a39d7dd-e456-4e09-842a-0204ee08187b) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Rights Management Adhoc (7a39d7dd-e456-4e09-842a-0204ee08187b) |
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| TELSTRA CALLING FOR O365 | MCOPSTNEAU2 | de3312e1-c7b0-46e6-a7c3-a515ff90bc86 | MCOPSTNEAU (7861360b-dc3b-4eba-a3fc-0d323a035746) | AUSTRALIA CALLING PLAN (7861360b-dc3b-4eba-a3fc-0d323a035746) | | Universal Print | UNIVERSAL_PRINT | 9f3d9c1d-25a5-4aaa-8e59-23a1e6450a67 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9) | | Visio Plan 1 | VISIO_PLAN1_DEPT | ca7f3140-d88c-455b-9a1c-7f0679e31a76 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for business Basic (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio web app (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
+| Visio Plan 2 | VISIO_PLAN2_DEPT | 38b434d2-a15e-4cde-9a98-e737c75623e1 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>OneDrive for Business (Basic) (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>Visio Desktop App (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>Visio Web App (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) |
| VISIO ONLINE PLAN 1 | VISIOONLINE_PLAN1 | 4b244418-9658-4451-a2b8-b5e2b364e9bd | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | VISIO ONLINE PLAN 2 | VISIOCLIENT | c5928f49-12ba-48f7-ada3-0d743a3601d5 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE_BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO_CLIENT_SUBSCRIPTION (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIOONLINE (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>ONEDRIVE FOR BUSINESS BASIC (da792a53-cbc0-4184-a10d-e544dd34b3c1)<br/>VISIO DESKTOP APP (663a804f-1c30-4ff0-9915-9db84f0d1cea)<br/>VISIO WEB APP (2bdbaf8f-738f-4ac7-9234-3c3ee2ce7d0f) | | VISIO PLAN 2 FOR GCC | VISIOCLIENT_GOV | 4ae99959-6b0f-43b0-b1ce-68146001bdba | EXCHANGE_S_FOUNDATION_GOV (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE_BASIC_GOV (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO_CLIENT_SUBSCRIPTION_GOV (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIOONLINE_GOV (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | EXCHANGE FOUNDATION FOR GOVERNMENT (922ba911-5694-4e99-a794-73aed9bfeec8)<br/>ONEDRIVE FOR BUSINESS BASIC FOR GOVERNMENT (98709c2e-96b5-4244-95f5-a0ebe139fb8a)<br/>VISIO DESKTOP APP FOR Government (f85945f4-7a55-4009-bc39-6a5f14a8eac1)<br/>VISIO WEB APP FOR GOVERNMENT (8a9ecb07-cfc0-48ab-866c-f83c4d911576) | |Viva Topics | TOPIC_EXPERIENCES | 4016f256-b063-4864-816e-d818aad600c9 | GRAPH_CONNECTORS_SEARCH_INDEX_TOPICEXP (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>CORTEX (c815c93d-0759-4bb8-b857-bc921a71be83) | Graph Connectors Search with Index (Viva Topics) (b74d57b2-58e9-484a-9731-aeccbba954f0)<br/>Viva Topics (c815c93d-0759-4bb8-b857-bc921a71be83) |
+| Windows 10 Enterprise A3 for faculty | WIN10_ENT_A3_FAC | 8efbe2f6-106e-442f-97d4-a59aa6037e06 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
+| Windows 10 Enterprise A3 for students | WIN10_ENT_A3_STU | d4ef921e-840b-4b48-9a90-ab6698bc7b31 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
| WINDOWS 10 ENTERPRISE E3 | WIN10_PRO_ENT_SUB | cb10e6cd-9da4-4992-867b-67546b1db821 | WIN10_PRO_ENT_SUB (21b439ba-a0ca-424f-a6cc-52f954a5b111) | WINDOWS 10 ENTERPRISE (21b439ba-a0ca-424f-a6cc-52f954a5b111) | | WINDOWS 10 ENTERPRISE E3 | WIN10_VDA_E3 | 6a0f6da5-0b87-4190-a6ae-9bb5a2b9546a | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | EXCHANGE FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>UNIVERSAL PRINT (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>WINDOWS 10 ENTERPRISE (NEW) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWS UPDATE FOR BUSINESS DEPLOYMENT SERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | | Windows 10 Enterprise E5 | WIN10_VDA_E5 | 488ba24a-39a9-4473-8ee5-19291e71b002 | EXCHANGE_S_FOUNDATION (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>WINDEFATP (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>UNIVERSAL_PRINT_01 (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Virtualization Rights for Windows 10 (E3/E5+VDA) (e7c91390-7625-45be-94e0-e16907e03118)<br/>WINDOWSUPDATEFORBUSINESS_DEPLOYMENTSERVICE (7bf960f6-2cd9-443a-8046-5dbff9558365) | Exchange Foundation (113feb6c-3fe4-4440-bddc-54d774bf0318)<br/>Microsoft Defender For Endpoint (871d91ec-ec1a-452b-a83f-bd76c7d770ef)<br/>Universal Print (795f6fe0-cc4d-4773-b050-5dde4dc704c9)<br/>Windows 10 Enterprise (New) (e7c91390-7625-45be-94e0-e16907e03118)<br/>Windows Update for Business Deployment Service (7bf960f6-2cd9-443a-8046-5dbff9558365) |
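To show how the SKU and service plan GUIDs in the table above are typically consumed, here is a minimal, hedged sketch using the Microsoft Graph PowerShell SDK. The user principal name is a placeholder, and the example assumes you want to assign the Microsoft Teams Exploratory SKU with its Sway plan turned off; swap in the GUIDs from the row you actually need.

```powershell
# Sketch only: assumes the Microsoft Graph PowerShell SDK is installed; the user below is a placeholder.
Connect-MgGraph -Scopes "User.ReadWrite.All"

# GUIDs taken from the table above.
$teamsExploratorySku = "710779e8-3d4a-4c88-adb9-386c958d1fdf"   # TEAMS_EXPLORATORY
$swayServicePlan     = "a23b959c-7ce8-4e57-9140-b90eb88a9e97"   # SWAY

# Licenses can only be assigned to users that have a usage location set.
Update-MgUser -UserId "user@contoso.com" -UsageLocation "US"

# Assign the SKU and disable the Sway service plan within it.
Set-MgUserLicense -UserId "user@contoso.com" `
    -AddLicenses @(@{ SkuId = $teamsExploratorySku; DisabledPlans = @($swayServicePlan) }) `
    -RemoveLicenses @()
```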
active-directory How To Connect Install Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/hybrid/how-to-connect-install-prerequisites.md
We recommend that you harden your Azure AD Connect server to decrease the securi
- Follow these [additional guidelines](/windows-server/identity/ad-ds/plan/security-best-practices/reducing-the-active-directory-attack-surface) to reduce the attack surface of your Active Directory environment. - Follow the [Monitor changes to federation configuration](how-to-connect-monitor-federation-changes.md) to setup alerts to monitor changes to the trust established between your Idp and Azure AD. - ### SQL Server used by Azure AD Connect * Azure AD Connect requires a SQL Server database to store identity data. By default, a SQL Server 2019 Express LocalDB (a light version of SQL Server Express) is installed. SQL Server Express has a 10-GB size limit that enables you to manage approximately 100,000 objects. If you need to manage a higher volume of directory objects, point the installation wizard to a different installation of SQL Server. The type of SQL Server installation can impact the [performance of Azure AD Connect](./plan-connect-performance-factors.md#sql-database-factors). * If you use a different installation of SQL Server, these requirements apply:
We recommend that you harden your Azure AD Connect server to decrease the securi
* If you have firewalls on your intranet and you need to open ports between the Azure AD Connect servers and your domain controllers, see [Azure AD Connect ports](reference-connect-ports.md) for more information. * If your proxy or firewall limit which URLs can be accessed, the URLs documented in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) must be opened. Also see [Safelist the Azure portal URLs on your firewall or proxy server](../../azure-portal/azure-portal-safelist-urls.md?tabs=public-cloud). * If you're using the Microsoft cloud in Germany or the Microsoft Azure Government cloud, see [Azure AD Connect sync service instances considerations](reference-connect-instances.md) for URLs.
-* Azure AD Connect (version 1.1.614.0 and after) by default uses TLS 1.2 for encrypting communication between the sync engine and Azure AD. If TLS 1.2 isn't available on the underlying operating system, Azure AD Connect incrementally falls back to older protocols (TLS 1.1 and TLS 1.0). From Azure AD Connect version 2.0 onwards. TLS 1.0 and 1.1 are no longer supported and installation will fail if TLS 1.2 is not available.
+* Azure AD Connect (version 1.1.614.0 and after) by default uses TLS 1.2 for encrypting communication between the sync engine and Azure AD. If TLS 1.2 isn't available on the underlying operating system, Azure AD Connect incrementally falls back to older protocols (TLS 1.1 and TLS 1.0). From Azure AD Connect version 2.0 onwards, TLS 1.0 and 1.1 are no longer supported, and installation will fail if TLS 1.2 is not enabled.
* Prior to version 1.1.614.0, Azure AD Connect by default uses TLS 1.0 for encrypting communication between the sync engine and Azure AD. To change to TLS 1.2, follow the steps in [Enable TLS 1.2 for Azure AD Connect](#enable-tls-12-for-azure-ad-connect). * If you're using an outbound proxy for connecting to the internet, the following setting in the **C:\Windows\Microsoft.NET\Framework64\v4.0.30319\Config\machine.config** file must be added for the installation wizard and Azure AD Connect sync to be able to connect to the internet and Azure AD. This text must be entered at the bottom of the file. In this code, *&lt;PROXYADDRESS&gt;* represents the actual proxy IP address or host name.
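For the TLS 1.2 requirement described above, the following is a rough, illustrative sketch of the registry settings commonly used to enforce TLS 1.2 for .NET and Schannel on the Azure AD Connect server. The authoritative steps are in the linked Enable TLS 1.2 for Azure AD Connect section, and the server must be rebooted for the change to take effect.

```powershell
# Illustrative sketch only: run elevated on the Azure AD Connect server, then reboot.
# Force .NET (which hosts the sync engine) to use the OS default TLS versions and strong crypto.
$dotNetKeys = @(
    'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
    'HKLM:\SOFTWARE\WOW6432Node\Microsoft\.NETFramework\v4.0.30319'
)
foreach ($key in $dotNetKeys) {
    New-ItemProperty -Path $key -Name 'SystemDefaultTlsVersions' -Value 1 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $key -Name 'SchUseStrongCrypto'       -Value 1 -PropertyType DWord -Force | Out-Null
}

# Enable the TLS 1.2 protocol in Schannel for both the client and server roles.
foreach ($role in 'Client', 'Server') {
    $path = "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\$role"
    New-Item -Path $path -Force | Out-Null
    New-ItemProperty -Path $path -Name 'Enabled'           -Value 1 -PropertyType DWord -Force | Out-Null
    New-ItemProperty -Path $path -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force | Out-Null
}
```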
active-directory Access Panel Collections https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/access-panel-collections.md
To create a collection, you must have an Azure AD Premium P1 or P2 license.
11. Select **Review + Create**. The properties for the new collection appear.
+> [!NOTE]
+> Admin collections are managed through the [Azure portal](https://portal.azure.com), not from the [My Apps portal](https://myapps.microsoft.com). For example, if you assign users or groups as an owner, they can only manage the collection through the Azure portal.
+ ## View audit logs The Audit logs record My Apps collections operations, including collection creation and end-user actions. The following events are generated from My Apps:
active-directory Debug Saml Sso Issues https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/debug-saml-sso-issues.md
To download and install the My Apps Secure Sign-in Extension, use one of the fol
- [Chrome](https://go.microsoft.com/fwlink/?linkid=866367) - [Microsoft Edge](https://go.microsoft.com/fwlink/?linkid=845176)-- [Firefox](https://go.microsoft.com/fwlink/?linkid=866366) ## Test SAML-based single sign-on
active-directory Migrate Application Authentication To Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/manage-apps/migrate-application-authentication-to-azure-active-directory.md
Users can download an **Intune-managed browser**:
**Let users open their apps from a browser extension.**
-Users can [download the MyApps Secure Sign-in Extension](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) in [Chrome,](https://chrome.google.com/webstore/detail/my-apps-secure-sign-in-ex/ggjhpefgjjfobnfoldnjipclpcfbgbhl) [FireFox,](https://addons.mozilla.org/firefox/addon/access-panel-extension/) or [Microsoft Edge](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) and can launch apps right from their browser bar to:
+Users can [download the MyApps Secure Sign-in Extension](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) in [Chrome](https://chrome.google.com/webstore/detail/my-apps-secure-sign-in-ex/ggjhpefgjjfobnfoldnjipclpcfbgbhl) or [Microsoft Edge](https://www.microsoft.com/p/my-apps-secure-sign-in-extension/9pc9sckkzk84?rtc=1&activetab=pivot%3Aoverviewtab) and can launch apps right from their browser bar to:
- **Search for their apps and have their most-recently-used apps appear**
active-directory Nist Authenticator Assurance Level 3 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/standards/nist-authenticator-assurance-level-3.md
Microsoft offers authentication methods that enable you to meet required NIST au
We recommend using a multifactor cryptographic hardware authenticator to achieve AAL3. Passwordless authentication eliminates the greatest attack surface, the password, and offers users a streamlined authentication method. If your organization is completely cloud based, we recommend that you use FIDO2 security keys.
-Note that FIDO2 keys and Windows Hello for Business haven't been validated at the required FIPS 140 Security Level. So federal customers need to conduct risk assessment and evaluation before accepting these authenticators as AAL3.
+Note that Windows Hello for Business hasn't been validated at the required FIPS 140 Security Level, so federal customers need to conduct a risk assessment and evaluation before accepting it as AAL3.
For detailed guidance, see [Plan a passwordless authentication deployment in Azure Active Directory](../authentication/howto-authentication-passwordless-deployment.md).
NIST allows the use of compensating controls for mitigating malware risk. Any In
[Achieving NIST AAL1 by using Azure AD](nist-authenticator-assurance-level-1.md)
-[Achieving NIST AAL2 by using Azure AD](nist-authenticator-assurance-level-2.md)
+[Achieving NIST AAL2 by using Azure AD](nist-authenticator-assurance-level-2.md)
active-directory My Apps Portal End User Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/active-directory/user-help/my-apps-portal-end-user-access.md
Download and install the extension, based on the browser you're using.
- **Google Chrome** - From the Chrome Web Store, go to the [My Apps Secure Sign-in Extension](https://chrome.google.com/webstore/detail/my-apps-secure-sign-in-ex/ggjhpefgjjfobnfoldnjipclpcfbgbhl) feature, and then select **Add to Chrome**. -- **Mozilla Firefox** - From the **Firefox Add-ons** page, go to the [My Apps Secure Sign-in Extension](https://addons.mozilla.org/firefox/addon/access-panel-extension/) feature, and then select **Add to Firefox**.
+- **Mozilla Firefox** - From the **Firefox Add-ons** page, go to the My Apps Secure Sign-in Extension feature, and then select **Add to Firefox**.
An icon is added to the right of your **Address** bar, letting you sign in and customize the extension.
After you get to the **Apps** page, you can:
- [View and update your groups-related information](my-apps-portal-end-user-groups.md) -- [Perform your own access reviews](my-apps-portal-end-user-access-reviews.md)
+- [Perform your own access reviews](my-apps-portal-end-user-access-reviews.md)
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-access-restriction-policies.md
Previously updated : 06/22/2021 Last updated : 08/20/2021
In the following example, the per subscription rate limit is 20 calls per 90 sec
| -- | -- | -- | - | | name | The name of the API for which to apply the rate limit. | Yes | N/A | | calls | The maximum total number of calls allowed during the time interval specified in `renewal-period`. | Yes | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Maximum allowed value: 300 seconds. | Yes | N/A |
| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
In the following example, the rate limit of 10 calls per 60 seconds is keyed by
| Name | Description | Required | Default | | - | -- | -- | - |
-| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Yes | N/A |
+| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expressions are allowed. | Yes | N/A |
| counter-key | The key to use for the rate limit policy. | Yes | N/A | | increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
-| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. | Yes | N/A |
+| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expressions are allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
| retry-after-header-name | The name of a response header whose value is the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | retry-after-variable-name | The name of a policy expression variable that stores the recommended retry interval in seconds after the specified call rate is exceeded. | No | N/A | | remaining-calls-header-name | The name of a response header whose value after each policy execution is the number of remaining calls allowed for the time interval specified in the `renewal-period`. | No | N/A |
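To illustrate how a caller might react to the optional response headers described in the tables above, here is a hedged client-side sketch. The gateway URL and subscription key are placeholders, and it assumes the policy was configured with `retry-after-header-name` set to `Retry-After` and `remaining-calls-header-name` set to `Remaining-Calls`; Windows PowerShell 5.1 error semantics are assumed.

```powershell
# Hypothetical client sketch: the URL, key, and header names are assumptions, not values from this article.
$uri     = 'https://contoso.azure-api.net/echo/resource'
$headers = @{ 'Ocp-Apim-Subscription-Key' = '<subscription-key>' }

try {
    $response = Invoke-WebRequest -Uri $uri -Headers $headers -Method Get -UseBasicParsing
    Write-Output "Remaining calls in this window: $($response.Headers['Remaining-Calls'])"
}
catch {
    # On Windows PowerShell 5.1, a 429 surfaces as a WebException carrying the HTTP response.
    $errorResponse = $_.Exception.Response
    if ($errorResponse -and [int]$errorResponse.StatusCode -eq 429) {
        $wait = [int]$errorResponse.Headers['Retry-After']
        Write-Output "Rate limit exceeded; waiting $wait seconds before retrying."
        Start-Sleep -Seconds $wait
        Invoke-WebRequest -Uri $uri -Headers $headers -Method Get -UseBasicParsing
    }
    else {
        throw
    }
}
```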
api-management Api Management Howto Disaster Recovery Backup Restore https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-disaster-recovery-backup-restore.md
Title: Implement disaster recovery using backup and restore in API Management
description: Learn how to use backup and restore to perform disaster recovery in Azure API Management. - - Previously updated : 12/05/2020 Last updated : 08/20/2021
All of the tasks that you do on resources using the Azure Resource Manager must
### Create an Azure Active Directory application 1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Using the subscription that contains your API Management service instance, navigate to the **App registrations** tab in **Azure Active Directory** (Azure Active Directory > Manage/App registrations).
-
+1. Using the subscription that contains your API Management service instance, navigate to the [Azure portal - App registrations](https://portal.azure.com/#blade/Microsoft_AAD_RegisteredApps/ApplicationsListBlade) to register an app in Active Directory.
> [!NOTE] > If the Azure Active Directory default directory isn't visible to your account, contact the administrator of the Azure subscription to grant the required permissions to your account.
+1. Select **+ New registration**.
+1. On the **Register an application** page, set the values as follows:
+
+ * Set **Name** to a meaningful name.
+ * Set **Supported account types** to **Accounts in this organizational directory only**.
+ * In **Redirect URI**, enter a placeholder URL such as `https://resources`. It's a required field, but the value isn't used later.
+ * Select **Register**.
-3. Click **New application registration**.
-
- The **Create** window appears on the right. That's where you enter the AAD app relevant information.
-
-4. Enter a name for the application.
-5. For the application type, select **Native**.
-6. Enter a placeholder URL such as `http://resources` for the **Redirect URI**, as it's a required field, but the value isn't used later. Click the check box to save the application.
-7. Click **Create**.
+### Add permissions
-### Add an application
-
-1. Once the application is created, click **API permissions**.
-2. Click **+ Add a permission**.
-4. Press **Select Microsoft APIs**.
-5. Choose **Azure Service Management**.
-6. Press **Select**.
+1. Once the application is created, select **API permissions** > **+ Add a permission**.
+1. Select **Microsoft APIs**.
+1. Select **Azure Service Management**.
:::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/add-app-permission.png" alt-text="Screenshot that shows how to add app permissions.":::
-7. Click **Delegated Permissions** beside the newly added application, check the box for **Access Azure Service Management (preview)**.
+1. Select **Delegated Permissions** beside the newly added application, and check the box for **Access Azure Service Management as organization users (preview)**.
:::image type="content" source="./media/api-management-howto-disaster-recovery-backup-restore/delegated-app-permission.png" alt-text="Screenshot that shows adding delegated app permissions.":::
-8. Press **Select**.
-9. Click **Add Permissions**.
+1. Select **Add permissions**.
-### Configuring your app
+### Configure your app
-Before calling the APIs that generate the backup and restore it, you need to get a token. The following example uses the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package to retrieve the token.
+Before calling the APIs that back up and restore the service, you need to get a token. The following example uses the [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package to retrieve the token.
```csharp using Microsoft.IdentityModel.Clients.ActiveDirectory;
Replace `{tenant id}`, `{application id}`, and `{redirect uri}` using the follow
The REST APIs are [Api Management Service - Backup](/rest/api/apimanagement/2020-12-01/api-management-service/backup) and [Api Management Service - Restore](/rest/api/apimanagement/2020-12-01/api-management-service/restore).
+> [!NOTE]
+> Backup and restore operations can also be performed with the PowerShell [_Backup-AzApiManagement_](/powershell/module/az.apimanagement/backup-azapimanagement) and [_Restore-AzApiManagement_](/powershell/module/az.apimanagement/restore-azapimanagement) cmdlets, respectively.
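As a minimal sketch of that PowerShell path (the resource group, service, and storage account names below are placeholders, not values from this article):

```powershell
# Hedged sketch of the PowerShell alternative mentioned in the note above; names are placeholders.
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName 'apim-rg' -Name 'apimbackupstorage')[0].Value
$context    = New-AzStorageContext -StorageAccountName 'apimbackupstorage' -StorageAccountKey $storageKey

# Back up the API Management service to a blob in the given container.
Backup-AzApiManagement -ResourceGroupName 'apim-rg' -Name 'contoso-apim' `
    -StorageContext $context -TargetContainerName 'backups' -TargetBlobName 'contoso-apim.apimbackup'

# Restore that backup onto an existing API Management service.
Restore-AzApiManagement -ResourceGroupName 'apim-rg' -Name 'contoso-apim' `
    -StorageContext $context -SourceContainerName 'backups' -SourceBlobName 'contoso-apim.apimbackup'
```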
+ Before calling the "backup and restore" operations described in the following sections, set the authorization request header for your REST call. ```csharp
where:
- `subscriptionId` - ID of the subscription that holds the API Management service you're trying to back up - `resourceGroupName` - name of the resource group of your Azure API Management service - `serviceName` - the name of the API Management service you're making a backup of specified at the time of its creation-- `api-version` - replace with `2020-12-01`
+- `api-version` - replace with a supported REST API version such as `2020-12-01`
In the body of the request, specify the target Azure storage account name, access key, blob container name, and backup name:
Restore is a long running operation that may take up to 30 or more minutes to co
> > **Changes** made to the service configuration (for example, APIs, policies, developer portal appearance) while restore operation is in progress **could be overwritten**.
-<!-- Dummy comment added to suppress markdown lint warning -->
-
-> [!NOTE]
-> Backup and restore operations can also be performed with PowerShell [_Backup-AzApiManagement_](/powershell/module/az.apimanagement/backup-azapimanagement) and [_Restore-AzApiManagement_](/powershell/module/az.apimanagement/restore-azapimanagement) commands respectively.
- ## Constraints when making backup or restore request -- **Container** specified in the request body **must exist**. - While backup is in progress, **avoid management changes in the service** such as SKU upgrade or downgrade, change in domain name, and more. - Restore of a **backup is guaranteed only for 30 days** since the moment of its creation. - **Changes** made to the service configuration (for example, APIs, policies, and developer portal appearance) while backup operation is in process **might be excluded from the backup and will be lost**.
api-management Api Management Howto Migrate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/api-management-howto-migrate.md
Title: How to migrate Azure API Management across regions
+ Title: How to migrate Azure API Management between regions
description: Learn how to migrate an API Management instance from one region to another. - -- Previously updated : 08/26/2019+ Last updated : 08/20/2021 -+
+#Customer intent: As an Azure service administrator, I want to move my service resources to another Azure region.
-# How to migrate Azure API Management across regions
-To migrate API Management instances from one Azure region to another, you can use the [backup and restore](api-management-howto-disaster-recovery-backup-restore.md) feature. You should choose the same API Management pricing tier in the source and target regions.
+
+# How to move Azure API Management across regions
+
+This article describes how to move an API Management instance to a different Azure region. You might move your instance to another region for many reasons. For example:
+
+* Locate your instance closer to your API consumers
+* Deploy features available in specific regions only
+* Meet internal policy and governance requirements
+
+To move API Management instances from one Azure region to another, use the service's [backup and restore](api-management-howto-disaster-recovery-backup-restore.md) operations. You can use a different API Management instance name or the existing name.
> [!NOTE]
-> Backup and restore won't work while migrating between different cloud types. For that, you'll need to export the resource [as a template](../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates). Then, adapt the exported template for the target Azure region and re-create the resource.
+> API Management also supports [multi-region deployment](api-management-howto-deploy-multi-region.md), which distributes a single Azure API management service across multiple Azure regions. Multi-region deployment helps reduce request latency perceived by geographically distributed API consumers and improves service availability if one region goes offline.
-## Option 1: Use a different API Management instance name
-1. In the target region, create a new API Management instance with the same pricing tier as the source API Management instance. The new instance should have a different name.
-1. Backup existing API Management instance to a storage account.
-1. Restore the backup created in Step 2 to the new API Management instance created in the new region in Step 1.
-1. If you have a custom domain pointing to the source region API Management instance, update the custom domain CNAME to point to the new API Management instance.
+## Considerations
+* Choose the same API Management pricing tier in the source and target regions.
+* Backup and restore won't work when migrating between different cloud types. For that scenario, export the resource [as a template](../azure-resource-manager/management/manage-resource-groups-portal.md#export-resource-groups-to-templates). Then, adapt the exported template for the target Azure region and re-create the resource.
-## Option 2: Use the same API Management instance name
+## Prerequisites
-> [!NOTE]
-> This option will result in downtime during the migration.
+* Review requirements and limitations of the API Management [backup and restore](api-management-howto-disaster-recovery-backup-restore.md) operations.
+* See [What is not backed up](api-management-howto-disaster-recovery-backup-restore.md#what-is-not-backed-up). Record settings and data that you will need to recreate manually after moving the instance.
+* Create a [storage account](../storage/common/storage-account-create.md?tabs=azure-portal) in the source region. You will use this account to back up the source instance (see the example following this list).
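For example, a minimal sketch of creating the storage account with the Azure CLI (the names, region, and SKU are placeholders you'd replace with your own values):

```azurecli
# Create a storage account in the source region to hold the backup
az storage account create \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --location <source-region> \
    --sku Standard_LRS
```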
+
+## Prepare and move
+
+### Option 1: Use a different API Management instance name
-1. Back up the API Management instance in the source region to a storage account.
-1. Delete the API Management in the source region.
+1. In the target region, create a new API Management instance with the same pricing tier as the source API Management instance. Use a different name for the new instance.
+1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#-back-up-an-api-management-service) the existing API Management instance to the storage account.
+1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#-restore-an-api-management-service) the source instance's backup to the new API Management instance.
+1. If you have a custom domain pointing to the source region API Management instance, update the custom domain CNAME to point to the new API Management instance (see the example after these steps).
+
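For example, if the custom domain's DNS zone is hosted in Azure DNS, a hedged sketch of updating the CNAME with the Azure CLI (zone, record, and instance names are placeholders; if your DNS is hosted elsewhere, make the equivalent change at your DNS provider):

```azurecli
# Point the custom domain's CNAME record at the new instance's default gateway hostname
az network dns record-set cname set-record \
    --resource-group <dns-resource-group> \
    --zone-name <zone-name> \
    --record-set-name <record-name> \
    --cname <new-apim-name>.azure-api.net
```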
+### Option 2: Use the same API Management instance name
+
+> [!WARNING]
+> This option deletes the original API Management instance and results in downtime during the migration. Ensure that you have a valid backup before deleting the source instance.
+
+1. [Back up](api-management-howto-disaster-recovery-backup-restore.md#-back-up-an-api-management-service) the existing API Management instance to the storage account.
+1. Delete the API Management instance in the source region.
1. Create a new API Management instance in the target region with the same name as the one in the source region.
-1. Restore the backup created in Step 1 to the new API Management instance in the target region.
+1. [Restore](api-management-howto-disaster-recovery-backup-restore.md#-restore-an-api-management-service) the source instance's backup to the new API Management instance in the target region.
+
+## Verify
+
+1. Ensure that the restore operation completes successfully before accessing your API Management instance in the target region (see the example after these steps).
+1. Configure settings that are not automatically moved during the restore operation. Examples: virtual network configuration, managed identities, developer portal content, and custom domain and custom CA certificates.
+1. Access your API Management endpoints in the target region. For example, test your APIs, or access the developer portal.
+
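For example, a hedged way to confirm the target instance is ready is to check its provisioning state with the Azure CLI (names are placeholders; expect `Succeeded` once the restore has finished):

```azurecli
# Check the provisioning state of the restored instance in the target region
az apim show \
    --name <new-apim-name> \
    --resource-group <resource-group-name> \
    --query provisioningState \
    --output tsv
```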
+## Clean up source resources
+
+If you moved the API Management instance using Option 1, after you successfully restore and configure the target instance, you may delete the source instance.
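For example, a hedged sketch of deleting the source instance with the Azure CLI (names are placeholders; deletion is irreversible, so confirm the target instance works first):

```azurecli
# Permanently delete the source API Management instance
az apim delete \
    --name <source-apim-name> \
    --resource-group <source-resource-group> \
    --yes
```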
+## Next steps
-## <a name="next-steps"> </a>Next steps
* For more information about the backup and restore feature, see [how to implement disaster recovery](api-management-howto-disaster-recovery-backup-restore.md).
-* For information on migration Azure resources, see [Azure cross-region migration guidance](https://github.com/Azure/Azure-Migration-Guidance).
+* For information on migrating Azure resources, see [Azure cross-region migration guidance](https://github.com/Azure/Azure-Migration-Guidance).
* [Optimize and save on your cloud spending](../cost-management-billing/costs/quick-acm-cost-analysis.md?WT.mc_id=costmanagementcontent_docsacmhorizontal_-inproduct-learn).
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/api-management/validation-policies.md
Previously updated : 07/12/2021 Last updated : 08/20/2021
In the following example, the JSON payload in requests and responses is validate
| Name | Description | Required | Default |
| -- | - | -- | - |
| unspecified-content-type-action | [Action](#actions) to perform for requests or responses with a content type that isn't specified in the API schema. | Yes | N/A |
-| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). | Yes | N/A |
+| max-size | Maximum length of the body of the request or response in bytes, checked against the `Content-Length` header. If the request body or response body is compressed, this value is the decompressed length. Maximum allowed value: 102,400 bytes (100 KB). (Contact [support](https://azure.microsoft.com/support/options/) if you need to increase this limit.) | Yes | N/A |
| size-exceeded-action | [Action](#actions) to perform for requests or responses whose body exceeds the size specified in `max-size`. | Yes | N/A |
| errors-variable-name | Name of the variable in `context.Variables` to log validation errors to. | No | N/A |
| type | Content type to execute body validation for, checked against the `Content-Type` header. This value is case insensitive. If empty, it applies to every content type specified in the API schema. | No | N/A |
app-service Configure Linux Open Ssh Session https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/configure-linux-open-ssh-session.md
Using TCP tunneling you can create a network connection between your development
To get started, you need to install [Azure CLI](/cli/azure/install-azure-cli). To see how it works without installing Azure CLI, open [Azure Cloud Shell](../cloud-shell/overview.md).
-Open a remote connection to your app using the [az webapp remote-connection create](/cli/azure/webapp/remote-connection#az_webapp_remote_connection_create) command. Specify _\<subscription-id>_, _\<group-name>_ and \_\<app-name>_ for your app.
+Open a remote connection to your app using the [az webapp create-remote-connection](/cli/azure/webapp#az_webapp_create_remote_connection) command. Specify _\<subscription-id>_, _\<group-name>_, and _\<app-name>_ for your app.
```azurecli-interactive
az webapp create-remote-connection --subscription <subscription-id> --resource-group <resource-group-name> -n <app-name> &
```
app-service Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/quickstart-java.md
Be careful about the values of `<appName>` and `<resourceGroup>` (`helloworld-15
## Deploy the app
-The Maven plugin uses account credentials from the Azure CLI to deploy to App Services. [Sign in with the Azure CLI](/cli/azure/authenticate-azure-cli) before continuing.
-
-```azurecli-interactive
-az login
-```
-
-Then you can deploy your Java app to Azure using the following command.
+With all the configuration ready in your pom file, you can deploy your Java app to Azure with a single command.
```azurecli-interactive
mvn package azure-webapp:deploy
```
app-service Web Sites Integrate With Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/app-service/web-sites-integrate-with-vnet.md
You can also configure Route All using CLI (*Note*: minimum `az version` require
az webapp config set --resource-group myRG --name myWebApp --vnet-route-all-enabled [true|false]
```
-The Route All configuration setting replaces and takes precedence over the legacy `WEBSITE_VNET_ROUTE_ALL` app setting.
-
+The Route All configuration setting is the recommended way of enabling routing of all traffic. Using the configuration setting will allow you to audit the behavior with [a built-in policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33228571-70a4-4fa1-8ca1-26d0aba8d6ef). The existing `WEBSITE_VNET_ROUTE_ALL` App Setting can still be used and you can enable all traffic routing with either setting.
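For example, a minimal sketch of enabling all-traffic routing through the legacy app setting instead (resource names are placeholders; a value of `1` enables the setting):

```azurecli
# Enable routing of all outbound traffic into the virtual network via the legacy app setting
az webapp config appsettings set --resource-group myRG --name myWebApp --settings WEBSITE_VNET_ROUTE_ALL=1
```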
#### Network routing
automanage Automanage Hotpatch https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-hotpatch.md
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). > [!NOTE]
-> Hotpatch capabilities can be found in one of these _Windows Server Azure Edition_ images: Windows Server 2019 Datacenter: Azure Edition (Core), Windows Server 2022 Datacenter: Azure Edition (Core)
+> Hotpatch can be evaluated on _Windows Server 2022 Datacenter: Azure Edition (Core) Preview_. Hotpatch on _Windows Server 2019 Datacenter: Azure Edition Preview_ is no longer available to evaluate.
Hotpatching is a new way to install updates on supported _Windows Server Azure Edition_ virtual machines (VMs) that doesn't require a reboot after installation. This article covers information about Hotpatch for supported _Windows Server Azure Edition_ VMs, which has the following benefits:

* Lower workload impact with fewer reboots
There are some important considerations to running a supported _Windows Server A
### Can I upgrade from my existing Windows Server OS?
-* Upgrading from existing versions of Windows Server (that is, Windows Server 2016 or 2019 non-Azure editions) to _Windows Server 2022 Datacenter: Azure Edition_ is supported. Upgrading to _Windows Server 2019 Datacenter: Azure Edition_ isn't supported.
+* Yes, upgrading from existing versions of Windows Server (such as Windows Server 2016 or Windows Server 2019) to _Windows Server 2022 Datacenter: Azure Edition_ is supported.
### Can I use Hotpatch for production workloads during the preview?
automanage Automanage Windows Server Services Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server-services-overview.md
Automanage for Windows Server Services brings new capabilities specifically to _
> This preview version is provided without a service level agreement, and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. > For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+> [!NOTE]
+> Hotpatch can be evaluated on _Windows Server 2022 Datacenter: Azure Edition (Core) Preview_. Hotpatch on _Windows Server 2019 Datacenter: Azure Edition Preview_ is no longer available to evaluate.
+ Automanage for Windows Server capabilities can be found in one or more of these _Windows Server Azure Edition_ images:
-- Windows Server 2019 Datacenter: Azure Edition (Core)
- Windows Server 2022 Datacenter: Azure Edition (Desktop Experience)
- Windows Server 2022 Datacenter: Azure Edition (Core)
Capabilities vary by image, see [getting started](#getting-started-with-windows-
Hotpatch is available in public preview on the following images:
-- Windows Server 2019 Datacenter: Azure Edition (Core)
- Windows Server 2022 Datacenter: Azure Edition (Core)

Hotpatch gives you the ability to apply security updates on your VM without rebooting. Additionally, Automanage for Windows Server automates the onboarding, configuration, and orchestration of Hotpatching. To learn more, see [Hotpatch](automanage-hotpatch.md).
It's important to consider up front, which Automanage for Windows Server capabil
|Image|Capabilities| |--|--|
-| Windows Server 2019 Datacenter: Azure Edition (Core) | Hotpatch |
|Windows Server 2022 Datacenter: Azure Edition (Desktop experience) | SMB over QUIC, Extended Network |
| Windows Server 2022 Datacenter: Azure Edition (Core) | Hotpatch, SMB over QUIC, Extended Network |
automanage Automanage Windows Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/automanage/automanage-windows-server.md
Automanage supports the following Windows Server versions:
- Windows Server 2012/R2
- Windows Server 2016
- Windows Server 2019
-- Windows Server 2019 Azure Edition
- Windows Server 2022
- Windows Server 2022 Azure Edition
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
Last updated 06/30/2021-+ keywords: "Kubernetes, Arc, Azure, cluster"
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
### [Azure CLI](#tab/azure-cli)
+* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0
+
+* Install the **connectedk8s** Azure CLI extension of version >= 1.0.0:
+
+ ```console
+ az extension add --name connectedk8s
+ ```
* An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:

    * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/)
In this quickstart, you'll learn the benefits of Azure Arc enabled Kubernetes an
* Install the [latest release of Helm 3](https://helm.sh/docs/intro/install).
-* [Install or upgrade Azure CLI](/cli/azure/install-azure-cli) to version >= 2.16.0
-* Install the `connectedk8s` Azure CLI extension of version >= 1.0.0:
+### [Azure PowerShell](#tab/azure-powershell)
- ```console
- az extension add --name connectedk8s
- ```
->[!NOTE]
-> For [**custom locations**](./custom-locations.md) on your cluster, use East US or West Europe regions. For all other Azure Arc enabled Kubernetes features, [select any region from this list](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
-### [Azure PowerShell](#tab/azure-powershell)
+* [Azure PowerShell version 5.9.0 or later](/powershell/azure/install-az-ps)
+* Install the **Az.ConnectedKubernetes** PowerShell module:
-> [!IMPORTANT]
-> While the **Az.ConnectedKubernetes** PowerShell module is in preview, you must install it separately using
-> the `Install-Module` cmdlet.
+ ```azurepowershell-interactive
+ Install-Module -Name Az.ConnectedKubernetes
+ ```
-```azurepowershell-interactive
-Install-Module -Name Az.ConnectedKubernetes
-```
+ > [!IMPORTANT]
+ > While the **Az.ConnectedKubernetes** PowerShell module is in preview, you must install it separately using
+ > the `Install-Module` cmdlet.
* An up-and-running Kubernetes cluster. If you don't have one, you can create a cluster using one of these options:

    * [Kubernetes in Docker (KIND)](https://kind.sigs.k8s.io/)
Install-Module -Name Az.ConnectedKubernetes
* Install the [latest release of Helm 3](https://helm.sh/docs/intro/install).
-* [Azure PowerShell version 5.9.0 or later](/powershell/azure/install-az-ps)
-
->[!NOTE]
-> For [**custom locations**](./custom-locations.md) on your cluster, use East US or West Europe regions. For all other Azure Arc enabled Kubernetes features, [select any region from this list](https://azure.microsoft.com/global-infrastructure/services/?products=azure-arc).
- ## Meet network requirements > [!IMPORTANT]
Install-Module -Name Az.ConnectedKubernetes
| -- | - |
| `https://management.azure.com` (for Azure Cloud), `https://management.usgovcloudapi.net` (for Azure US Government) | Required for the agent to connect to Azure and register the cluster. |
| `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. |
-| `https://login.microsoftonline.com` (for Azure Cloud), `https://login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
+| `https://login.microsoftonline.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
| `https://mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
| `https://gbl.his.arc.azure.com` | Required to get the regional endpoint for pulling system-assigned Managed Service Identity (MSI) certificates. |
| `https://<region-code>.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Service Identity (MSI) certificates. `<region-code>` mapping for Azure cloud regions: `eus` (East US), `weu` (West Europe), `wcus` (West Central US), `scus` (South Central US), `sea` (South East Asia), `uks` (UK South), `wus2` (West US 2), `ae` (Australia East), `eus2` (East US 2), `ne` (North Europe), `fc` (France Central). |
+|`*.servicebus.windows.net`, `*.guestnotificationservice.azure.com`, `sts.windows.net` | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
## 1. Register providers for Azure Arc enabled Kubernetes

### [Azure CLI](#tab/azure-cli)

1. Enter the following commands:
- ```console
+ ```azurecli
az provider register --namespace Microsoft.Kubernetes
az provider register --namespace Microsoft.KubernetesConfiguration
az provider register --namespace Microsoft.ExtendedLocation
```

2. Monitor the registration process. Registration may take up to 10 minutes.
- ```console
+ ```azurecli
az provider show -n Microsoft.Kubernetes -o table
az provider show -n Microsoft.KubernetesConfiguration -o table
az provider show -n Microsoft.ExtendedLocation -o table
```
+ Once registered, you should see the `RegistrationState` state for these namespaces change to `Registered`.
+
### [Azure PowerShell](#tab/azure-powershell)

1. Enter the following commands:
- ```azurepowershell-interactive
+ ```azurepowershell
Register-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes
Register-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration
Register-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation
```

1. Monitor the registration process. Registration may take up to 10 minutes.
- ```azurepowershell-interactive
+ ```azurepowershell
Get-AzResourceProvider -ProviderNamespace Microsoft.Kubernetes
Get-AzResourceProvider -ProviderNamespace Microsoft.KubernetesConfiguration
Get-AzResourceProvider -ProviderNamespace Microsoft.ExtendedLocation
```
+ Once registered, you should see the `RegistrationState` state for these namespaces change to `Registered`.
## 2. Create a resource group
Run the following command:
### [Azure CLI](#tab/azure-cli)
-```console
+```azurecli
az group create --name AzureArcTest --location EastUS --output table ```
eastus AzureArcTest
### [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
+```azurepowershell
New-AzResourceGroup -Name AzureArcTest -Location EastUS ```
Run the following command:
### [Azure CLI](#tab/azure-cli)
-```console
+```azurecli
az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest ```
Helm release deployment succeeded
### [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
+```azurepowershell
New-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName AzureArcTest -Location eastus ```
Run the following command:
### [Azure CLI](#tab/azure-cli)
-```console
+```azurecli
az connectedk8s list --resource-group AzureArcTest --output table ```
AzureArcTest1 eastus AzureArcTest
### [Azure PowerShell](#tab/azure-powershell)
-```azurepowershell-interactive
+```azurepowershell
Get-AzConnectedKubernetes -ResourceGroupName AzureArcTest ```
If your cluster is behind an outbound proxy server, Azure CLI and the Azure Arc
2. Run the connect command with proxy parameters specified:
- ```console
+ ```azurecli
az connectedk8s connect --name <cluster-name> --resource-group <resource-group> --proxy-https https://<proxy-server-ip-address>:<port> --proxy-http http://<proxy-server-ip-address>:<port> --proxy-skip-range <excludedIP>,<excludedCIDR> --proxy-cert <path-to-cert-file> ```
If your cluster is behind an outbound proxy server, Azure PowerShell and the Azu
2. Run the connect command with the proxy parameter specified:
- ```azurepowershell-interactive
+ ```azurepowershell
New-AzConnectedKubernetes -ClusterName <cluster-name> -ResourceGroupName <resource-group> -Location eastus -Proxy 'https://<proxy-server-ip-address>:<port>' ```
Azure Arc enabled Kubernetes deploys a few operators into the `azure-arc` namesp
You can delete the Azure Arc enabled Kubernetes resource, any associated configuration resources, *and* any agents running on the cluster using Azure CLI using the following command:
-```console
+```azurecli
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest ```
az connectedk8s delete --name AzureArcTest1 --resource-group AzureArcTest
You can delete the Azure Arc enabled Kubernetes resource, any associated configuration resources, *and* any agents running on the cluster using Azure PowerShell using the following command:
-```azurepowershell-interactive
+```azurepowershell
Remove-AzConnectedKubernetes -ClusterName AzureArcTest1 -ResourceGroupName AzureArcTest ```
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-government/compare-azure-government-global-azure.md
ms.devlang: na
na Previously updated : 08/04/2021 Last updated : 08/20/2021 # Compare Azure Government and global Azure
The following Data Factory **features are not currently available** in Azure Gov
- Mapping data flows
-### [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks)
-
-For access to Azure Databricks in an Azure Government environment, contact your Microsoft or Databricks account representative.
- ### [HDInsight](../hdinsight/hadoop/apache-hadoop-introduction.md) The following HDInsight **features are not currently available** in Azure Government:
azure-monitor Pricing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/app/pricing.md
Previously updated : 6/24/2021 Last updated : 8/23/2021
For Application Insights resources which send their data to a Log Analytics work
## Estimating the costs to manage your application
-If you're not yet using Application Insights, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Application Insights. Start by entering "Azure Monitor" in the Search box, and clicking on the resulting Azure Monitor tile. Scroll down the page to Azure Monitor, and select Application Insights from the Type dropdown. Here you can enter the number of GB of data you expect to collect per month, so the question is how much data will Application Insights collect monitoring your application.
+If you're not yet using Application Insights, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Application Insights. Start by entering "Azure Monitor" in the Search box, and clicking on the resulting Azure Monitor tile. Scroll down the page to Azure Monitor, and expand the Application Insights section. Your estimated costs depend on the amount of log data ingested. There are two approaches to estimate data volumes:
-There are two approaches to address this: use of default monitoring and adaptive sampling, which is available in the ASP.NET SDK, or estimate your likely data ingestion based on what other similar customers have seen.
+1. estimate your likely data ingestion based on what other similar applications generate, or
+2. use default monitoring and adaptive sampling, which is available in the ASP.NET SDK.
+
+### Learn from what similar applications collect
+
+In the Azure Monitoring Pricing calculator for Application Insights, enable the **Estimate data volume based on application activity** option. Here you can provide inputs about your application (requests per month and page views per month, in case you'll collect client-side telemetry), and then the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configurations (for example, some have default [sampling](./sampling.md) and some have no sampling), so you still have control to reduce the volume of data you ingest far below the median level by using sampling.
### Data collection when using sampling

With the ASP.NET SDK's [adaptive sampling](sampling.md#adaptive-sampling), the data volume is adjusted automatically to keep within a specified maximum rate of traffic for default Application Insights monitoring. If the application produces a low amount of telemetry, such as when debugging or due to low usage, items won't be dropped by the sampling processor as long as volume is below the configured events per second level. For a high volume application, with the default threshold of five events per second, adaptive sampling will limit the number of daily events to 432,000. Using a typical average event size of 1 KB, this corresponds to 13.4 GB of telemetry per 31-day month per node hosting your application since the sampling is done local to each node.
-> [!NOTE]
-> Azure Monitor log data size is calculated in GB (1 GB = 10^9 bytes).
For SDKs that don't support adaptive sampling, you can employ [ingestion sampling](./sampling.md#ingestion-sampling), which samples when the data is received by Application Insights based on a percentage of data to retain, or [fixed-rate sampling for ASP.NET, ASP.NET Core, and Java websites](sampling.md#fixed-rate-sampling) to reduce the traffic sent from your web server and web browsers.
-### Learn from what similar customers collect
-
-In the Azure Monitoring Pricing calculator for Application Insights, if you enable the "Estimate data volume based on application activity" functionality, you can provide inputs about your application (requests per month and page views per month, in case you will collect client-side telemetry), and then the calculator will tell you the median and 90th percentile amount of data collected by similar applications. These applications span the range of Application Insights configuration (e.g some have default [sampling](./sampling.md), some have no sampling etc.), so you still have the control to reduce the volume of data you ingest far below the median level using sampling. But this is a starting point to understand what other, similar customers are seeing.
- ## Understand your usage and estimate costs Application Insights makes it easy to understand what your costs are likely to be based on recent usage patterns. To get started, in the Azure portal, for the Application Insights resource, go to the **Usage and estimated costs** page:
azure-monitor Manage Cost Storage https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/manage-cost-storage.md
na Previously updated : 08/19/2021 Last updated : 08/23/2021
In cluster billing options, data retention is billed for each workspace. Cluster
## Estimating the costs to manage your environment
-If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Log Analytics. In the **Search** box, enter "Azure Monitor", and then select the resulting Azure Monitor tile. Scroll down the page to **Azure Monitor**, and then select **Log Analytics** in the **Type** dropdown list. Here you can enter the number of virtual machines and the number of gigabytes of data that you expect to collect from each VM. Typically, 1 GB to 3 GB of data per month is ingested from a typical Azure Virtual Machine. If you're already evaluating Azure Monitor Logs, you can use data statistics from your own environment. See below for how to determine the [number of monitored VMs](#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](#understanding-ingested-data-volume).
-
-If you're not yet running Log Analytics, here is some guidance for estimating data volumes:
+If you're not yet using Azure Monitor Logs, you can use the [Azure Monitor pricing calculator](https://azure.microsoft.com/pricing/calculator/?service=monitor) to estimate the cost of using Log Analytics. In the **Search** box, enter "Azure Monitor", and then select the resulting Azure Monitor tile. Scroll down the page to **Azure Monitor**, and then expand the **Log Analytics** section. Here you can enter the GB of data that you expect to collect. If you're already evaluating Azure Monitor Logs, you can use data statistics from your own environment. See below for how to determine the [number of monitored VMs](#understanding-nodes-sending-data) and the [volume of data your workspace is ingesting](#understanding-ingested-data-volume). If you're not yet running Log Analytics, here is some guidance for estimating data volumes:
1. **Monitoring VMs:** with typical monitoring enabled, 1 GB to 3 GB of data per month is ingested per monitored VM.
2. **Monitoring Azure Kubernetes Service (AKS) clusters:** details on expected data volumes for monitoring a typical AKS cluster are available [here](../containers/container-insights-cost.md#estimating-costs-to-monitor-your-aks-cluster). Follow these [best practices](../containers/container-insights-cost.md#controlling-ingestion-to-reduce-cost) to control your AKS cluster monitoring costs.
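If you already have a workspace, a hedged sketch of measuring its billable ingestion over the last 30 days with the Azure CLI (the workspace ID is a placeholder, the command may require the Log Analytics CLI extension, and `Quantity` in the `Usage` table is reported in MB):

```azurecli
# Sum billable ingestion (in GB) over the last 30 days from the Usage table
az monitor log-analytics query \
    --workspace "<workspace-customer-id>" \
    --analytics-query "Usage | where TimeGenerated > ago(30d) | where IsBillable == true | summarize BillableGB = sum(Quantity) / 1000."
```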
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/logs/monitor-workspace.md
For custom tables, you can move to [Parsing the data](./parse-text.md) in querie
#### Operation: Field content validation "The following fields' values \<**field name**\> of type \<**table name**\> have been trimmed to the max allowed size, \<**field size limit**\> bytes. Please adjust your input accordingly."
-Field larger then the limit size was proccessed by Azure logs, the field was trimed to the allowed field limit. We donΓÇÖt recommend sending fields larger than the allowed limit as this will resualt in data loss.
+A field larger than the limit size was processed by Azure logs, and the field was trimmed to the allowed field limit. We don't recommend sending fields larger than the allowed limit, as this will result in data loss.
Recommended Actions: Check the source of the affected data type:
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-monitor/vm/monitor-virtual-machine.md
Any monitoring tool, such as Azure Monitor, requires an agent installed on a mac
> [!NOTE]
> The Azure Monitor agent will completely replace the Log Analytics agent, diagnostic extension, and Telegraf agent once it gains required functionality. These other agents are still required for features such as VM insights, Azure Security Center, and Azure Sentinel.

-- [Azure Monitor agent](../agents/agents-overview.md#log-analytics-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension.
+- [Azure Monitor agent](../agents/agents-overview.md#azure-monitor-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Azure Security Center, and Azure Sentinel, then it will completely replace the Log Analytics agent and diagnostic extension.
- [Log Analytics agent](../agents/agents-overview.md#log-analytics-agent): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager.
- [Dependency agent](../agents/agents-overview.md#dependency-agent): Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions.
- [Azure Diagnostic extension](../agents/agents-overview.md#azure-diagnostics-extension): Available for Azure Monitor virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-netapp-files/whats-new.md
Azure NetApp Files is updated regularly. This article provides a summary about t
In a manual QoS capacity pool, you can assign the capacity and throughput for a volume independently. The total throughput of all volumes created with a manual QoS capacity pool is limited by the total throughput of the pool. It is determined by the combination of the pool size and the service-level throughput. Alternatively, a capacity pool's [QoS type](azure-netapp-files-understand-storage-hierarchy.md#qos_types) can be auto (automatic), which is the default. In an auto QoS capacity pool, throughput is assigned automatically to the volumes in the pool, proportional to the size quota assigned to the volumes.
-* [LDAP signing](azure-netapp-files-create-volumes-smb.md) (Preview)
+* [LDAP signing](create-active-directory-connections.md#create-an-active-directory-connection) (Preview)
Azure NetApp Files now supports LDAP signing for secure LDAP lookups between the Azure NetApp Files service and the user-specified Active Directory Domain Services domain controllers. This feature is currently in preview.
azure-resource-manager Deploy Github Actions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/deploy-github-actions.md
description: Describes how to deploy Bicep files by using GitHub Actions.
Previously updated : 06/01/2021 Last updated : 08/23/2021
When your resource group and repository are no longer needed, clean up the resou
## Next steps > [!div class="nextstepaction"]
-> [Learn module: Automate the deployment of ARM templates by using GitHub Actions](/learn/modules/deploy-templates-command-line-github-actions/)
+> [Learn module: Build your first Bicep deployment workflow by using GitHub Actions](/learn/modules/build-first-bicep-deployment-pipeline-using-github-actions/)
azure-resource-manager Learn Bicep https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-resource-manager/bicep/learn-bicep.md
In addition to the preceding path, the following modules contain Bicep content.
| [Structure your Bicep code for collaboration](/learn/modules/structure-bicep-code-collaboration/) | Build Bicep files that support collaborative development and follow best practices. Plan your parameters to make your templates easy to deploy. Use a consistent style, clear structure, and comments to make your Bicep code easy to understand, use, and modify. | | [Authenticate your Azure deployment pipeline by using service principals](/learn/modules/authenticate-azure-deployment-pipeline-service-principals/) | Service principals enable your deployment pipelines to authenticate securely with Azure. In this module, you'll learn what service principals are, how they work, and how to create them. You'll also learn how to grant them permission to your Azure resources so that your pipelines can deploy your Bicep files. | | [Build your first Bicep deployment pipeline by using Azure Pipelines](/learn/modules/build-first-bicep-deployment-pipeline-using-azure-pipelines/) | Build a basic deployment pipeline for Bicep code. Use a service connection to securely identify your pipeline to Azure. Configure when the pipeline runs by using triggers. |
+| [Build your first Bicep deployment workflow by using GitHub Actions](/learn/modules/build-first-bicep-deployment-pipeline-using-github-actions/) | Build a basic deployment workflow for Bicep code. Use a secret to securely identify your GitHub Actions workflow to Azure, and then set when the workflow runs by using triggers and schedules. |
## Next steps
azure-sql Data Discovery And Classification Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-sql/database/data-discovery-and-classification-overview.md
Previously updated : 08/16/2021 Last updated : 08/23/2021 tags: azure-synapse # Data Discovery & Classification
After the organization-wide policy has been defined, you can continue classifyin
An important aspect of the classification is the ability to monitor access to sensitive data. [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) has been enhanced to include a new field in the audit log called `data_sensitivity_information`. This field logs the sensitivity classifications (labels) of the data that was returned by a query. Here's an example:
-![Audit log](./media/data-discovery-and-classification-overview/11_data_classification_audit_log.png)
+[ ![Audit log](./media/data-discovery-and-classification-overview/11_data_classification_audit_log.png)](./media/data-discovery-and-classification-overview/11_data_classification_audit_log.png#lightbox)
## <a id="permissions"></a>Permissions
These built-in roles can read the data classification of a database:
- SQL Security Manager
- User Access Administrator
+These are the required actions to read the data classification of a database (see the example custom role at the end of this section):
+
+- Microsoft.Sql/servers/databases/currentSensitivityLabels/*
+- Microsoft.Sql/servers/databases/recommendedSensitivityLabels/*
+- Microsoft.Sql/servers/databases/schemas/tables/columns/sensitivityLabels/*
+
These built-in roles can modify the data classification of a database:

- Owner
- Contributor
- SQL Security Manager
+This is the required action to modify the data classification of a database:
+
+- Microsoft.Sql/servers/databases/schemas/tables/columns/sensitivityLabels/*
+ Learn more about role-based permissions in [Azure RBAC](../../role-based-access-control/overview.md).
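If the built-in roles are broader than you need, a hedged sketch of wrapping the read actions listed above in a custom Azure role with the Azure CLI (the role name, description, and subscription scope are illustrative placeholders):

```azurecli
# Create a custom role limited to the data classification actions listed above
az role definition create --role-definition '{
    "Name": "Data Classification Reader (example)",
    "Description": "Can read the data classification of databases.",
    "Actions": [
        "Microsoft.Sql/servers/databases/currentSensitivityLabels/*",
        "Microsoft.Sql/servers/databases/recommendedSensitivityLabels/*",
        "Microsoft.Sql/servers/databases/schemas/tables/columns/sensitivityLabels/*"
    ],
    "AssignableScopes": ["/subscriptions/<subscription-id>"]
}'
```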
-## <a id="manage-classification"></a>Manage classifications
+## Manage classifications
You can use T-SQL, a REST API, or PowerShell to manage classifications.
You can use the REST API to programmatically manage classifications and recommendations:

- [List Current By Database](/rest/api/sql/sensitivitylabels/listcurrentbydatabase): Gets the current sensitivity labels of the specified database.
- [List Recommended By Database](/rest/api/sql/sensitivitylabels/listrecommendedbydatabase): Gets the recommended sensitivity labels of the specified database.
+## Retrieve classifications metadata using SQL drivers
+
+You can use the following SQL drivers to retrieve classification metadata:
+
+- [ODBC Driver](https://docs.microsoft.com/sql/connect/odbc/data-classification)
+- [OLE DB Driver](https://docs.microsoft.com/sql/connect/oledb/features/using-data-classification)
+- [JDBC Driver](https://docs.microsoft.com/sql/connect/jdbc/data-discovery-classification-sample)
+- [Microsoft Drivers for PHP for SQL Server](https://docs.microsoft.com/sql/connect/php/release-notes-php-sql-driver)
## FAQ - Advanced classification capabilities

**Question**: Will [Azure Purview](../../purview/overview.md) replace SQL Data Discovery & Classification or will SQL Data Discovery & Classification be retired soon?

**Answer**: We continue to support SQL Data Discovery & Classification and encourage you to adopt [Azure Purview](../../purview/overview.md), which has richer capabilities to drive advanced classification and data governance. If we decide to retire any service, feature, API, or SKU, you will receive advance notice including a migration or transition path. Learn more about Microsoft Lifecycle policies here.
-
-## <a id="next-steps"></a>Next steps
+## Next steps
- Consider configuring [Azure SQL Auditing](../../azure-sql/database/auditing-overview.md) for monitoring and auditing access to your classified sensitive data. - For a presentation that includes data Discovery & Classification, see [Discovering, classifying, labeling & protecting SQL data | Data Exposed](https://www.youtube.com/watch?v=itVi9bkJUNc).
azure-vmware Attach Disk Pools To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/azure-vmware/attach-disk-pools-to-azure-vmware-solution-hosts.md
Title: Attach disk pools to Azure VMware Solution hosts (Preview) description: Learn how to attach a disk pool surfaced through an iSCSI target as the VMware datastore of an Azure VMware Solution private cloud. Once the datastore is configured, you can create volumes on it and attach them to your VMware instance. Previously updated : 07/13/2021 Last updated : 08/20/2021 #Customer intent: As an Azure service administrator, I want to scale my AVS hosts using disk pools instead of scaling clusters. So that I can use block storage for active working sets and tier less frequently accessed data from vSAN to disks. I can also replicate data from on-premises or primary VMware environment to disk storage for the secondary site.
You'll attach to a disk pool surfaced through an iSCSI target as the VMware data
>[!IMPORTANT] >While in **Public Preview**, only attach a disk pool to a test or non-production cluster.
-1. Check if the subscription is registered to `Microsoft.AVS`:
+1. Check if the subscription is registered to `Microsoft.AVS`.
```azurecli az provider show -n "Microsoft.AVS" --query registrationState ```
- If it's not already registered, then register it:
+ If it's not already registered, then register it.
```azurecli az provider register -n "Microsoft.AVS" ```
-1. Check if the subscription is registered to `CloudSanExperience` AFEC in Microsoft.AVS:
+2. Check if the subscription is registered to `CloudSanExperience` AFEC in Microsoft.AVS.
```azurecli az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" ```
- - If it's not already registered, then register it:
+ - If it's not already registered, then register it.
```azurecli az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS" ```
- The registration may take approximately 15 minutes to complete and you can check the current status it:
+ The registration may take approximately 15 minutes to complete. You can check the current status with the following command.
```azurecli az feature show --name "CloudSanExperience" --namespace "Microsoft.AVS" --query properties.state ``` >[!TIP]
- >If the registration is stuck in an intermediate state for longer than 15 minutes to complete, unregister and then re-register the flag:
+ >If the registration is stuck in an intermediate state for longer than 15 minutes to complete, unregister and then re-register the flag.
> >```azurecli >az feature unregister --name "CloudSanExperience" --namespace "Microsoft.AVS" >az feature register --name "CloudSanExperience" --namespace "Microsoft.AVS" >```
-1. Check if the `vmware `extension is installed:
+3. Check if the `vmware` extension is installed.
```azurecli az extension show --name vmware ```
- - If the extension is already installed, check if the version is **3.0.0**. If an older version is installed, update the extension:
+ - If the extension is already installed, check if the version is **3.0.0**. If an older version is installed, update the extension.
```azurecli az extension update --name vmware ```
- - If it's not already installed, install it:
+ - If it's not already installed, install it.
```azurecli az extension add --name vmware ```
-3. Create and attach an iSCSI datastore in the Azure VMware Solution private cloud cluster using `Microsoft.StoragePool` provided iSCSI target:
+4. Create and attach an iSCSI datastore in the Azure VMware Solution private cloud cluster using the `Microsoft.StoragePool`-provided iSCSI target. The disk pool attaches to a vNet through a subnet that's delegated to the Microsoft.StoragePool/diskPools resource provider. If the subnet isn't delegated, the deployment fails.
```bash az vmware datastore disk-pool-volume create --name iSCSIDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud --target-id /subscriptions/11111111-1111-1111-1111-111111111111/resourceGroups/ResourceGroup1/providers/Microsoft.StoragePool/diskPools/mpio-diskpool/iscsiTargets/mpio-iscsi-target --lun-name lun0 ``` >[!TIP]
- >You can display the help on the datastores:
+ >You can display the help on the datastores.
> > ```azurecli > az vmware datastore -h > ```
-4. Show the details of an iSCSI datastore in a private cloud cluster:
+5. Show the details of an iSCSI datastore in a private cloud cluster.
```azurecli az vmware datastore show --name MyCloudSANDatastore1 --resource-group MyResourceGroup --cluster -Cluster-1 --private-cloud MyPrivateCloud ```
-5. List all the datastores in a private cloud cluster:
+6. List all the datastores in a private cloud cluster.
```azurecli az vmware datastore list --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud
When you delete a private cloud datastore, the disk pool resources don't get del
- Snapshots
-2. Delete the private cloud datastore:
+2. Delete the private cloud datastore.
```azurecli az vmware datastore delete --name MyCloudSANDatastore1 --resource-group MyResourceGroup --cluster Cluster-1 --private-cloud MyPrivateCloud
When you delete a private cloud datastore, the disk pool resources don't get del
Now that you've attached a disk pool to your Azure VMware Solution hosts, you may want to learn about: -- [Managing an Azure disk pool](../virtual-machines/disks-pools-manage.md ). Once you've deployed a disk pool, there are various management actions available to you. You can add or remove a disk to or from a disk pool, update iSCSI LUN mapping, or add ACLs.
+- [Managing an Azure disk pool](../virtual-machines/disks-pools-manage.md). Once you've deployed a disk pool, there are various management actions available to you. You can add or remove a disk to or from a disk pool, update iSCSI LUN mapping, or add ACLs.
- [Deleting a disk pool](../virtual-machines/disks-pools-deprovision.md#delete-a-disk-pool). When you delete a disk pool, all the resources in the managed resource group are also deleted.
backup Archive Tier Support https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/archive-tier-support.md
Title: Archive Tier support description: Learn about Archive Tier Support for Azure Backup Previously updated : 08/04/2021 Last updated : 08/23/2021
Stop protection and delete data deletes all the recovery points. For recovery po
| Workloads | Preview | Generally available | | | | |
-| SQL Server in Azure VM | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India | Australia East, Central India, North Europe, South East Asia |
+| SQL Server in Azure VM | East US, South Central US, North Central US, West Europe, UK South | Australia East, Central India, North Europe, South East Asia, East Asia, Australia South East, Canada Central, Brazil South, Canada East, France Central, France South, Japan East, Japan West, Korea Central, Korea South, South India, UK West, Central US, East US 2, West US, West US 2, West Central US |
| Azure Virtual Machines | East US, East US 2, Central US, South Central US, West US, West US 2, West Central US, North Central US, Brazil South, Canada East, Canada Central, West Europe, UK South, UK West, East Asia, Japan East, South India, South East Asia, Australia East, Central India, North Europe | None | ## Error codes and troubleshooting steps
backup Back Up File Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/back-up-file-data.md
+
+ Title: Back up file data with MABS
+description: You can back up file data on server and client computers with MABS.
+ Last updated : 08/19/2021++
+# Back up file data with MABS
+
+You can use Microsoft Azure Backup Server (MABS) to back up file data on server and client computers.
+
+## Before you start
+
+1. **Deploy MABS** - Verify that MABS is installed and deployed correctly. We recommend that you review the following articles:
+
+ - [Install MABS](backup-azure-microsoft-azure-backup.md)
+
+ - [What can MABS back up?](backup-mabs-protection-matrix.md)
+
+ - [What's new in MABS?](backup-mabs-whats-new-mabs.md)
+
+ - [MABS release notes](backup-mabs-release-notes-v3.md)
+
+1. **Set up storage** - You can store backed up data on disk, on tape, and in the cloud with Azure. Read more in [Prepare data storage](/system-center/dpm/plan-long-and-short-term-data-storage?view=sc-dpm-2019&preserve-view=true).
+
+1. **Set up the MABS protection agent** - You'll need to install the MABS protection agent on every machine you want to back up. Read [Deploy the MABS protection agent](backup-azure-microsoft-azure-backup.md).
+
+## Back up file data
+
+After you have your MABS infrastructure set up, you can enable protection for machines that have file data you want to back up.
+
+1. To create a protection group, click **Protection** > **Actions** > **Create Protection Group** to open the **Create New Protection Group** wizard in the MABS console.
+
+1. In **Select Protection Group Type** select **Servers**.
+
+1. In **Select Group Members**, add the machines for which you want to back up file data to the protection group. On those machines, select the locations, shares, and folders you want to protect. For more information, see [Deploy protection groups](backup-support-matrix-mabs-dpm.md). You can select different types of folders (such as Desktop), different files, or the entire volume. You can also exclude specific locations from protection.
+
+ >[!NOTE]
+ > If you are protecting the volume on which the deduplication is enabled, ensure that the [Data Deduplication](/windows-server/storage/data-deduplication/install-enable) server role is installed on the MABS server. See the [support matrix](backup-support-matrix-mabs-dpm.md#deduplicated-volumes-support) for detailed information on deduplication.
+
+1. In **Select data protection method**, specify how you want to handle short-term and long-term backup. Short-term backup is always to disk first, with the option of backing up from the disk to the Azure cloud with Azure Backup (for short or long term). As an alternative to long-term backup to the cloud, you can also configure long-term backup to a standalone tape device or tape library connected to the MABS server.
+
+1. In **Select short-term goals** specify how you want to back up to short-term storage on disk. In **Retention range** you specify how long you want to keep the data on disk. In **Synchronization frequency** you specify how often you want to run an incremental backup to disk. If you don't want to set a backup interval, you can check **Just before a recovery point** so that MABS will run an express full backup just before each recovery point is scheduled.
+
+1. If you want to store data on tape for long-term storage in **Specify long-term goals** indicate how long you want to keep tape data (1-99 years). In Frequency of backup specify how often backups to tape should run. The frequency is based on the retention range you've specified:
+
+ - When the retention range is 1-99 years, you can select backups to occur daily, weekly, bi-weekly, monthly, quarterly, half-yearly, or yearly.
+
+ - When the retention range is 1-11 months, you can select backups to occur daily, weekly, bi-weekly, or monthly.
+
+ - When the retention range is 1-4 weeks, you can select backups to occur daily or weekly.
+
+ On a stand-alone tape drive, for a single protection group, MABS uses the same tape for daily backups until there is insufficient space on the tape. You can also co-locate data from different protection groups on tape.
+
+ On the **Select Tape and Library Details** page specify the tape/library to use, and whether data should be compressed and encrypted on tape.
+
+1. In the **Review disk allocation** page review the storage pool disk space allocated for the protection group.
+
+ **Total Data size** is the size of the data you want to back up, and **Disk space to be provisioned on MABS** is the space that MABS recommends for the protection group. MABS chooses the ideal backup volume, based on the settings. However, you can edit the backup volume choices in the **Disk allocation details**. For the workloads, select the preferred storage in the dropdown menu. Your edits change the values for **Total Storage** and **Free Storage** in the **Available Disk Storage** pane. Underprovisioned space is the amount of storage MABS suggests you add to the volume, to continue with backups smoothly in the future.
+
+1. In **Choose replica creation method**, select how you want to handle the initial full data replication. If you choose to replicate over the network, we recommend an off-peak time. For large amounts of data or less-than-optimal network conditions, consider replicating the data offline using removable media.
+
+1. In **Choose consistency check options**, select how you want to automate consistency checks. You can enable a check to run only when replica data becomes inconsistent, or according to a schedule. If you don't want to configure automatic consistency checking, you can run a manual check at any time by right-clicking the protection group in the **Protection** area of the MABS console, and selecting **Perform Consistency Check**.
+
+1. If you've selected to back up to the cloud with Azure Backup, on the **Specify online protection data** page make sure the workloads you want to back up to Azure are selected.
+
+1. In **Specify online backup schedule** specify how often incremental backups to Azure should occur. You can schedule backups to run every day/week/month/year and the time/date at which they should run. Backups can occur up to twice a day. Each time a backup runs a data recovery point is created in Azure from the copy of the backed-up data stored on the MABS disk.
+
+1. In **Specify online retention policy** you can specify how the recovery points created from the daily/weekly/monthly/yearly backups are retained in Azure.
+
+1. In **Choose online replication** specify how the initial full replication of data will occur. You can replicate over the network, or do an offline backup (offline seeding). Offline backup uses the Azure Import feature. [Read more](/azure/backup/backup-azure-backup-import-export).
+
+1. On the **Summary** page review your settings. After you click **Create Group** initial replication of the data occurs. When it finishes the protection group status will show as **OK** on the **Status** page. Backup then takes place in line with the protection group settings.
+
+## Recover backed up file data
+
+You recover data using the Recovery Wizard. When you double-click a protected volume on the **Protected data** pane in the wizard, MABS displays the data that belongs to that volume in the results pane. You can filter protected server names alphabetically by clicking **Filter**. After selecting a data source to recover in the tree view, you can select a specific recovery point by clicking the bold dates in the calendar. When you click **Recover** in the **Actions** pane, MABS starts the recovery job.
+
+## Recover data
+Recover data from the MABS console as follows:
+
+1. In the MABS console, click **Recovery** on the navigation bar, and browse for the data you want to recover. In the results pane, select the data.
+
+1. Available recovery points are indicated in bold on the calendar in the recovery points section. Select the bold date for the recovery point you want to recover.
+
+1. In the **Recoverable item** pane, click to select the recoverable item you want to recover.
+
+1. In the **Actions** pane, click **Recover** to open the Recovery Wizard.
+
+1. You can recover data as follows:
+
+ 1. **Recover to the original location**. Note that this doesn't work if the client computer is connected over VPN. In this case use an alternate location and then copy data from that location.
+
+ 1. **Recover to an alternate location**.
+
+ 1. **Copy to tape**. This option copies the volume that contains the selected data to a tape in a MABS library. You can also choose to compress or encrypt the data on tape.
+
+1. Specify your recovery options:
+
+ 1. **Existing version recovery behavior**. Select **Create copy**, **Skip**, or **Overwrite**. This option is enabled only when you're recovering to the original location.
+
+ 1. **Restore security**. Select **Apply settings of the destination computer** or **Apply the security settings of the recovery point version**.
+
+ 1. **Network bandwidth usage throttling**. Click **Modify** to enable network bandwidth usage throttling.
+
+ 1. **Enable SAN based recovery using hardware snapshots**. Select this option to use SAN-based hardware snapshots for quicker recovery.
+
+ This option is valid only when you have a SAN where hardware snapshot functionality is enabled, the SAN has the capability to create a clone and to split a clone to make it writable, and the protected computer and the MABS server are connected to the same SAN.
+
+ 1. **Notification**. Click **Send an e-mail when the recovery completes**, and specify the recipients who will receive the notification. Separate the e-mail addresses with commas.
+
+1. Click **Next** after you have made your selections for the preceding options.
+
+1. Review your recovery settings, and click **Recover**. Note that any synchronization job for the selected recovery item will be canceled while the recovery is in progress.
+
+When using Modern Backup Storage (MBS), File Server end-user recovery (EUR) is not supported. File Server EUR has a dependency on the Volume Shadow Copy Service (VSS), which Modern Backup Storage does not use. If end-user recovery is enabled, then recover data as follows:
+
+1. Navigate to the protected data file. Right-click the file name > **Properties**.
+
+1. In **Properties** > **Previous Versions** select the version that you want to recover.
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backup description: Provides a summary of support settings and limitations when backing up Azure VMs with the Azure Backup service. Previously updated : 08/06/2021 Last updated : 08/23/2021
Shared storage| Backing up VMs using Cluster Shared Volume (CSV) or Scale-Out Fi
Ultra SSD disks | Not supported. For more information, see these [limitations](selective-disk-backup-restore.md#limitations). [Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Temporary disks aren't backed up by Azure Backup. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported.
+[ReFS](/windows-server/storage/refs/refs-overview) restore | Supported. VSS supports app-consistent backups on ReFS, just as it does for NFS.
## VM network support
backup Backup Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/backup/backup-support-matrix.md
Title: Azure Backup support matrix description: Provides a summary of support settings and limitations for the Azure Backup service. Previously updated : 07/05/2021 Last updated : 08/23/2021
The resource health check functions in following conditions:
| | | | | | | **Supported Resources** | Recovery Services vault |
-| **Supported Regions** | East US 2, East Asia, and France Central. |
+| **Supported Regions** | East US 2, Central US, North Europe, France Central, East Asia, Japan East, Japan West, Australia East, South Africa North. |
| **For unsupported regions** | The resource health status is shown as "Unknown". |
batch Tutorial Batch Functions https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/batch/tutorial-batch-functions.md
Title: Tutorial - Trigger a Batch job using Azure Functions
description: Tutorial - Apply OCR to scanned documents as they're added to a storage blob ms.devlang: dotnet Previously updated : 05/30/2019 Last updated : 08/23/2021
In this tutorial, you'll learn how to trigger a Batch job using [Azure Functions
## Prerequisites
-* An Azure subscription. If you don't have one, create a [free account](https://azure.microsoft.com/free/) before you begin.
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
* An Azure Batch account and a linked Azure Storage account. See [Create a Batch account](quick-create-portal.md#create-a-batch-account) for more information on how to create and link accounts.
-* [Batch Explorer](https://azure.github.io/BatchExplorer/)
-* [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/)
+* [Batch Explorer](https://azure.github.io/BatchExplorer/).
+* [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/).
## Sign in to Azure
Sign in to the [Azure portal](https://portal.azure.com).
## Create a Batch pool and Batch job using Batch Explorer
-In this section, you'll use Batch Explorer to create the Batch pool and Batch job that will run OCR tasks.
+In this section, you'll use Batch Explorer to create the Batch pool and Batch job that will run OCR tasks.
### Create a pool
-1. Sign in to Batch Explorer using your Azure credentials.
-1. Create a pool by selecting **Pools** on the left side bar, then the **Add** button above the search form.
- 1. Choose an ID and display name. We'll use `ocr-pool` for this example.
- 1. Set the scale type to **Fixed size**, and set the dedicated node count to 3.
- 1. Select **Ubuntu 18.04-LTS** as the operating system.
- 1. Choose `Standard_f2s_v2` as the virtual machine size.
- 1. Enable the start task and add the command `/bin/bash -c "sudo update-locale LC_ALL=C.UTF-8 LANG=C.UTF-8; sudo apt-get update; sudo apt-get -y install ocrmypdf"`. Be sure to set the user identity as **Task default user (Admin)**, which allows start tasks to include commands with `sudo`.
- 1. Select **OK**.
-### Create a job
+1. Sign in to [Batch Explorer](https://azure.github.io/BatchExplorer/) using your Azure credentials.
+1. Create a pool by selecting **Pools** on the left side bar, then the **Add** button above the search form.
+ 1. Choose an ID and display name. We'll use `ocr-pool` for this example.
+ 1. Set the scale type to **Fixed size**, and set the dedicated node count to 3.
+ 1. Select **Ubuntu 18.04-LTS** as the operating system.
+ 1. Choose `Standard_f2s_v2` as the virtual machine size.
+ 1. Enable the start task and add the command `/bin/bash -c "sudo update-locale LC_ALL=C.UTF-8 LANG=C.UTF-8; sudo apt-get update; sudo apt-get -y install ocrmypdf"`. Be sure to set the user identity as **Task default user (Admin)**, which allows start tasks to include commands with `sudo`.
+ 1. Select **OK**.
-1. Create a job on the pool by selecting **Jobs** on the left side bar, then the **Add** button above the search form.
- 1. Choose an ID and display name. We'll use `ocr-job` for this example.
- 1. Set the pool to `ocr-pool`, or whatever name you chose for your pool.
- 1. Select **OK**.
+### Create a job
+1. Create a job on the pool by selecting **Jobs** on the left side bar, then the **Add** button above the search form.
+ 1. Choose an ID and display name. We'll use `ocr-job` for this example.
+ 1. Set the pool to `ocr-pool`, or whatever name you chose for your pool.
+ 1. Select **OK**.
## Create blob containers
-Here you'll create blob containers that will store your input and output files for the OCR Batch job.
+Here you'll create blob containers that will store your input and output files for the OCR Batch job. In this example, the input container is named `input` and is where all documents without OCR are initially uploaded for processing. The output container is named `output` and is where the Batch job writes processed documents with OCR.
-1. Sign in to Storage Explorer using your Azure credentials.
+1. Sign in to [Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) using your Azure credentials.
1. Using the storage account linked to your Batch account, create two blob containers (one for input files, one for output files) by following the steps at [Create a blob container](../vs-azure-tools-storage-explorer-blobs.md#create-a-blob-container).-
-In this example, the input container is named `input` and is where all documents without OCR are initially uploaded for processing. The output container is named `output` and is where the Batch job writes processed documents with OCR.
- * In this example, we'll call our input container `input`, and our output container `output`.
- * The input container is where all documents without OCR are initially uploaded.
- * The output container is where the Batch job writes documents with OCR.
-
-Create a shared access signature for your output container in Storage Explorer. Do this by right-clicking on the output container and selecting **Get Shared Access Signature...**. Under **Permissions**, check **Write**. No other permissions are necessary.
+1. Create a shared access signature for your output container in Storage Explorer by right-clicking the output container and selecting **Get Shared Access Signature...**. Under **Permissions**, select **Write**. No other permissions are necessary.
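+
+If you prefer to script this step instead of using Storage Explorer, the following is a minimal Azure PowerShell sketch (it assumes the Az.Storage module and a signed-in session; the storage account name and key are placeholders):
+
+```powershell
+# Placeholders: use the storage account that's linked to your Batch account.
+$ctx = New-AzStorageContext -StorageAccountName "<storage account name>" -StorageAccountKey "<storage account key>"
+
+# Create the input and output containers used in this example.
+New-AzStorageContainer -Name "input" -Context $ctx
+New-AzStorageContainer -Name "output" -Context $ctx
+
+# Generate a write-only SAS token for the output container, valid for 7 days.
+New-AzStorageContainerSASToken -Name "output" -Permission w -ExpiryTime (Get-Date).AddDays(7) -Context $ctx
+```
+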
## Create an Azure Function In this section you'll create the Azure Function that triggers the OCR Batch job whenever a file is uploaded to your input container. 1. Follow the steps in [Create a function triggered by Azure Blob storage](../azure-functions/functions-create-storage-blob-triggered-function.md) to create a function.
- 1. When prompted for a storage account, use the same storage account that you linked to your Batch account.
- 1. For **runtime stack**, choose .NET. We'll write our function in C# to leverage the Batch .NET SDK.
+ 1. When prompted for a storage account, use the same storage account that you linked to your Batch account.
+1. For **runtime stack**, choose .NET. We'll write our function in C# to leverage the Batch .NET SDK.
1. Once the blob-triggered function is created, use the [`run.csx`](https://github.com/Azure-Samples/batch-functions-tutorial/blob/master/run.csx) and [`function.proj`](https://github.com/Azure-Samples/batch-functions-tutorial/blob/master/function.proj) from GitHub in the Function.
- * `run.csx` is run when a new blob is added to your input blob container.
- * `function.proj` lists the external libraries in your Function code, for example, the Batch .NET SDK.
-1. Change the placeholder values of the variables in the `Run()` function of the `run.csx` file to reflect your Batch and storage credentials. You can find your Batch and storage account credentials in the Azure portal in the **Keys** section of your Batch account.
- * Retrieve your Batch and storage account credentials in the Azure portal in the **Keys** section of your Batch account.
+ * `run.csx` is run when a new blob is added to your input blob container.
+ * `function.proj` lists the external libraries in your Function code, for example, the Batch .NET SDK.
+1. Change the placeholder values of the variables in the `Run()` function of the `run.csx` file to reflect your Batch and storage credentials. You can find these credentials in the Azure portal in the **Keys** section of your Batch account.
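+
+If you'd rather retrieve these keys with PowerShell than copy them from the portal, the following is a minimal sketch (the resource group, Batch account, and storage account names are placeholders; it assumes the Az.Batch and Az.Storage modules):
+
+```powershell
+# Placeholders: substitute your own resource group and account names.
+$batchKeys   = Get-AzBatchAccountKey -AccountName "<batch account name>" -ResourceGroupName "<resource group name>"
+$storageKeys = Get-AzStorageAccountKey -ResourceGroupName "<resource group name>" -Name "<storage account name>"
+
+$batchKeys.PrimaryAccountKey   # Batch account key to paste into run.csx
+$storageKeys[0].Value          # Storage account key to paste into run.csx
+```
+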
## Trigger the function and retrieve results
Additionally, you can watch the logs file at the bottom of the Azure Functions w
2019-05-29T19:45:26.200 [Information] Adding OCR task <taskID> for <fileName> <size of fileName>... ```
-To download the output files from Storage Explorer to your local machine, first select the files you want and then select the **Download** on the top ribbon.
+To download the output files from Storage Explorer to your local machine, first select the files you want, and then select **Download** on the top ribbon.
> [!TIP] > The downloaded files are searchable if opened in a PDF reader. ## Clean up resources
-You are charged for the pool while the nodes are running, even if no jobs are scheduled. When you no longer need the pool, delete it. In the account view, select **Pools** and the name of the pool. Then select **Delete**. When you delete the pool, all task output on the nodes is deleted. However, the output files remain in the storage account. When no longer needed, you can also delete the Batch account and the storage account.
+You are charged for the pool while the nodes are running, even if no jobs are scheduled. When you no longer need the pool, delete it with the following steps:
-## Next steps
+1. In the account view, select **Pools** and the name of the pool.
+1. Select **Delete**.
-In this tutorial, you learned how to:
+When you delete the pool, all task output on the nodes is deleted. However, the output files remain in the storage account. When no longer needed, you can also delete the Batch account and the storage account.
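+
+If you'd rather delete the pool from PowerShell than from the account view, the following is a minimal sketch (the resource group and Batch account names are placeholders; `ocr-pool` matches the pool created earlier in this tutorial):
+
+```powershell
+# Get a Batch context (this also returns the account keys), then delete the pool.
+$context = Get-AzBatchAccountKey -AccountName "<batch account name>" -ResourceGroupName "<resource group name>"
+Remove-AzBatchPool -Id "ocr-pool" -BatchContext $context
+```
+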
-> [!div class="checklist"]
-> * Use Batch Explorer to create pools and jobs
-> * Use Storage Explorer to create blob containers and a shared access signature (SAS)
-> * Create a blob-triggered Azure Function
-> * Upload input files to Storage
-> * Monitor task execution
-> * Retrieve output files
--
-Continue on by exploring the rendering applications available via Batch Explorer in the **Gallery** section. For each application there are several templates available, which will expand over time. For example, for Blender templates exist that split up a single image into tiles, so parts of an image can be rendered in parallel.
+## Next steps
For more examples of using the .NET API to schedule and process Batch workloads, see the samples on GitHub.
cognitive-services Releasenotes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cognitive-services/Speech-Service/releasenotes.md
Title: Release notes - Speech Service
description: A running log of Speech Service feature releases, improvements, bug fixes, and known issues. - Last updated 05/15/2021-
cosmos-db Sql Query Working With Json https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cosmos-db/sql/sql-query-working-with-json.md
For example, here's a document that shows the daily balance of a customer's bank
}, { "checkingAccount": -10,
- "savingsAccount": 5000,
+ "savingsAccount": 5000
}, { "checkingAccount": 5000,
- "savingsAccount": 5000,
+ "savingsAccount": 5000
}
- ...
+
] } ```
cost-management-billing Programmatically Create Subscription Microsoft Customer Agreement https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-microsoft-customer-agreement.md
An in-progress status is returned as an `Accepted` state under `provisioningStat
To install the latest version of the module that contains the `New-AzSubscriptionAlias` cmdlet, run `Install-Module Az.Subscription`. To install a recent version of PowerShellGet, see [Get PowerShellGet Module](/powershell/scripting/gallery/installing-psget).
-Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscription) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
+Run the following [New-AzSubscriptionAlias](/powershell/module/az.subscription/new-azsubscriptionalias) command and the billing scope `"/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx"`.
```azurepowershell New-AzSubscriptionAlias -AliasName "sampleAlias" -SubscriptionName "Dev Team Subscription" -BillingScope "/providers/Microsoft.Billing/billingAccounts/5e98e158-xxxx-xxxx-xxxx-xxxxxxxxxxxx:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx_xxxx-xx-xx/billingProfiles/AW4F-xxxx-xxx-xxx/invoiceSections/SH3V-xxxx-xxx-xxx" -Workload "Production"
cost-management-billing Programmatically Create Subscription Preview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/cost-management-billing/manage/programmatically-create-subscription-preview.md
New-AzSubscription -OfferType MS-AZR-0017P -Name "Dev Team Subscription" -Enroll
| `OwnerSignInName` | No | String | The email address of any user to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `OwnerObjectId`.| | `OwnerApplicationId` | No | String | The application ID of any service principal to add as an Azure RBAC Owner on the subscription when it's created. You can use the parameter instead of `OwnerObjectId`. When using the parameter, the service principal must have [read access to the directory](/powershell/azure/active-directory/signing-in-service-principal#give-the-service-principal-reader-access-to-the-current-tenant-get-azureaddirectoryrole).|
-To see a full list of all parameters, see [New-AzSubscription](/powershell/module/az.subscription/New-AzSubscription).
### [Azure CLI](#tab/azure-cli)
data-factory Author Global Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/author-global-parameters.md
Set-AzDataFactoryV2 -InputObject $dataFactory -Force
## Next steps * Learn about Azure Data Factory's [continuous integration and deployment process](continuous-integration-deployment.md)
-* Learn how to use the [control flow expression language](control-flow-expression-language-functions.md)
+* Learn how to use the [control flow expression language](control-flow-expression-language-functions.md)
data-factory Concepts Linked Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/concepts-linked-services.md
Azure Data Factory and Azure Synapse Analytics can have one or more pipelines. A
Now, a **dataset** is a named view of data that simply points or references the data you want to use in your **activities** as inputs and outputs.
-Before you create a dataset, you must create a **linked service** to link your data store to the Data Factory or Synapse Workspace. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way; the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the the service. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
+Before you create a dataset, you must create a **linked service** to link your data store to the Data Factory or Synapse Workspace. Linked services are much like connection strings, which define the connection information needed for the service to connect to external resources. Think of it this way; the dataset represents the structure of the data within the linked data stores, and the linked service defines the connection to the data source. For example, an Azure Storage linked service links a storage account to the service. An Azure Blob dataset represents the blob container and the folder within that Azure Storage account that contains the input blobs to be processed.
Here is a sample scenario. To copy data from Blob storage to a SQL Database, you create two linked
data-factory Connector Azure Cosmos Db Mongodb Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/connector-azure-cosmos-db-mongodb-api.md
Previously updated : 11/20/2019 Last updated : 08/20/2021 # Copy data to or from Azure Cosmos DB's API for MongoDB by using Azure Data Factory
The following properties are supported for the Azure Cosmos DB's API for MongoDB
| Property | Description | Required | |: |: |: | | type | The **type** property must be set to **CosmosDbMongoDbApi**. | Yes |
-| connectionString |Specify the connection string for your Azure Cosmos DB's API for MongoDB. You can find it in the Azure portal -> your Cosmos DB blade -> primary or secondary connection string, with the pattern of `mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb`. <br/><br />You can also put a password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) with more details.|Yes |
+| connectionString |Specify the connection string for your Azure Cosmos DB's API for MongoDB. You can find it in the Azure portal -> your Cosmos DB blade -> primary or secondary connection string. <br/>For 3.2 server version, the string pattern is `mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb`. <br/>For 3.6+ server versions, the string pattern is `mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000&appName=@<cosmosdb-name>@`.<br/><br />You can also put a password in Azure Key Vault and pull the `password` configuration out of the connection string. Refer to [Store credentials in Azure Key Vault](store-credentials-in-key-vault.md) with more details.|Yes |
| database | Name of the database that you want to access. | Yes |
+| isServerVersionAbove32 | Specify whether the server version is above 3.2. Allowed values are **true** and **false** (default). This determines which driver the service uses. | Yes |
| connectVia | The [Integration Runtime](concepts-integration-runtime.md) to use to connect to the data store. You can use the Azure Integration Runtime or a self-hosted integration runtime (if your data store is located in a private network). If this property isn't specified, the default Azure Integration Runtime is used. |No | **Example**
The following properties are supported for the Azure Cosmos DB's API for MongoDB
"type": "CosmosDbMongoDbApi", "typeProperties": { "connectionString": "mongodb://<cosmosdb-name>:<password>@<cosmosdb-name>.documents.azure.com:10255/?ssl=true&replicaSet=globaldb",
- "database": "myDatabase"
+ "database": "myDatabase",
+ "isServerVersionAbove32": "false"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
data-factory Copy Data Tool Metadata Driven https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/copy-data-tool-metadata-driven.md
This pipeline will copy objects from one group. The objects belonging to this gr
### Known limitations - Copy data tool does not support metadata driven ingestion for incrementally copying new files only currently. But you can bring your own parameterized pipelines to achieve that. - IR name, database type, file format type cannot be parameterized in ADF. For example, if you want to ingest data from both Oracle Server and SQL Server, you will need two different parameterized pipelines. But the single control table can be shared by two sets of pipelines. -
+- OPENJSON is used in the SQL scripts generated by the copy data tool. If you are using SQL Server to host the control table, it must be SQL Server 2016 (13.x) or later in order to support the OPENJSON function.
## Next steps
data-factory Create Self Hosted Integration Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/create-self-hosted-integration-runtime.md
If you select the **Use system proxy** option for the HTTP proxy, the self-hoste
> [!IMPORTANT] > Don't forget to update both diahost.exe.config and diawp.exe.config.
-You also need to make sure that Microsoft Azure is in your company's allowlist. You can download the list of valid Azure IP addresses from [Microsoft Download Center](https://www.microsoft.com/download/details.aspx?id=41653).
+You also need to make sure that Microsoft Azure is in your company's allowlist. You can download the list of valid Azure IP addresses. IP ranges for each cloud, broken down by region and by the tagged services in that cloud, are available from the Microsoft Download Center (a PowerShell sketch for retrieving these ranges follows the list):
+ - Public: https://www.microsoft.com/download/details.aspx?id=56519
+ - US Gov: https://www.microsoft.com/download/details.aspx?id=57063
+ - Germany: https://www.microsoft.com/download/details.aspx?id=57064
+ - China: https://www.microsoft.com/download/details.aspx?id=57062
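+
+If you prefer to retrieve the current IP ranges programmatically instead of downloading the JSON files, the following is a minimal sketch using the Az.Network service tag cmdlet (it assumes you're signed in with `Connect-AzAccount`; the region and service tag name are examples only):
+
+```powershell
+# Service tags are published per region; pick the region closest to your self-hosted integration runtime.
+$tags = Get-AzNetworkServiceTag -Location "eastus"
+
+# List the address prefixes for an example service tag; adjust the name to the service you need to allow.
+($tags.Values | Where-Object { $_.Name -eq "DataFactory" }).Properties.AddressPrefixes
+```
+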
### Possible symptoms for issues related to the firewall and proxy server
data-factory Enable Customer Managed Key https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/enable-customer-managed-key.md
This section walks through the process to add customer managed key encryption in
1. Launch Azure Data Factory portal, and using the navigation bar on the left, jump to Data Factory Management Portal
-1. Click on the __Customer manged key__ icon
+1. Click on the __Customer managed key__ icon
:::image type="content" source="media/enable-customer-managed-key/05-customer-managed-key-configuration.png" alt-text="Screenshot how to enable Customer-managed Key in Data Factory UI."::: 1. Enter the URI for customer-managed key that you copied before
-1. Click __Save__ and customer-manged key encryption is enabled for Data Factory
+1. Click __Save__ and customer-managed key encryption is enabled for Data Factory
### During factory creation in Azure portal
To change key used for Data Factory encryption, you have to manually update the
1. Locate the URI for the new key through Azure Key Vault Portal
-1. Navigate to __Customer manged key__ setting
+1. Navigate to __Customer managed key__ setting
1. Replace and paste in the URI for the new key
data-factory Quickstart Create Data Factory Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/data-factory/quickstart-create-data-factory-rest-api.md
If you don't have an Azure subscription, create a [free](https://azure.microsoft
* **Azure Storage account**. You use the blob storage as **source** and **sink** data store. If you don't have an Azure storage account, see the [Create a storage account](../storage/common/storage-account-create.md) article for steps to create one. * Create a **blob container** in Blob Storage, create an input **folder** in the container, and upload some files to the folder. You can use tools such as [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) to connect to Azure Blob storage, create a blob container, upload input file, and verify the output file. * Install **Azure PowerShell**. Follow the instructions in [How to install and configure Azure PowerShell](/powershell/azure/install-Az-ps). This quickstart uses PowerShell to invoke REST API calls.
-* **Create an application in Azure Active Directory** following [this instruction](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Make note of the following values that you use in later steps: **application ID**, **clientSecrets**, and **tenant ID**. Assign application to "**Contributor**" role.
+* **Create an application in Azure Active Directory** following [this instruction](../active-directory/develop/howto-create-service-principal-portal.md#register-an-application-with-azure-ad-and-create-a-service-principal). Make note of the following values that you use in later steps: **application ID**, **clientSecrets**, and **tenant ID**. Assign application to "**Contributor**" role at either subscription or resource group level.
>[!NOTE] > For Sovereign clouds, you must use the appropriate cloud-specific endpoints for ActiveDirectoryAuthority and ResourceManagerUrl (BaseUri). > You can use PowerShell to easily get the endpoint Urls for various clouds by executing "Get-AzEnvironment | Format-List", which will return a list of endpoints for each cloud environment.
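+
+If you prefer to assign the Contributor role from PowerShell rather than the portal, the following is a minimal sketch (the application ID, subscription ID, and resource group name are placeholders; drop the resource group segment from the scope for a subscription-level assignment):
+
+```powershell
+# Assign the Contributor role to the service principal at resource group scope.
+New-AzRoleAssignment -ApplicationId "<application ID>" -RoleDefinitionName "Contributor" -Scope "/subscriptions/<subscription ID>/resourceGroups/<resource group name>"
+```
+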
If you don't have an Azure subscription, create a [free](https://azure.microsoft
Run the following commands to authenticate with Azure Active Directory (AAD): ```powershell
-$AuthContext = [Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext]"https://login.microsoftonline.com/${tenantId}"
-$cred = New-Object -TypeName Microsoft.IdentityModel.Clients.ActiveDirectory.ClientCredential -ArgumentList ($appId, $clientSecrets)
-$result = $AuthContext.AcquireTokenAsync("https://management.core.windows.net/", $cred).GetAwaiter().GetResult()
-$authHeader = @{
-'Content-Type'='application/json'
-'Accept'='application/json'
-'Authorization'=$result.CreateAuthorizationHeader()
-}
+$credentials = Get-Credential -UserName $appId
+Connect-AzAccount -ServicePrincipal -Credential $credentials -Tenant $tenantID
+```
+You will be prompted to enter the password; use the value in the clientSecrets variable.
+
+If you need to get the access token, run the following command:
+
+```powershell
+
+GetToken
+ ``` ## Create a data factory
$authHeader = @{
Run the following commands to create a data factory: ```powershell
-$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}?api-version=${apiVersion}"
$body = @" {
- "name": "$factoryName",
"location": "East US", "properties": {}, "identity": {
$body = @"
} } "@
-$response = Invoke-RestMethod -Method PUT -Uri $request -Header $authHeader -Body $body
-$response | ConvertTo-Json
+
+$response = Invoke-AzRestMethod -SubscriptionId ${subscriptionId} -ResourceGroupName ${resourceGroupName} -ResourceProviderName Microsoft.DataFactory -ResourceType "factories" -Name ${factoryName} -ApiVersion ${apiVersion} -Method PUT -Payload ${body}
+$response.Content
+ ``` Note the following points:
Note the following points:
``` * For a list of Azure regions in which Data Factory is currently available, select the regions that interest you on the following page, and then expand **Analytics** to locate **Data Factory**: [Products available by region](https://azure.microsoft.com/global-infrastructure/services/). The data stores (Azure Storage, Azure SQL Database, etc.) and computes (HDInsight, etc.) used by data factory can be in other regions.
-Here is the sample response:
+Here is the sample response content:
```json+ { "name":"<dataFactoryName>", "identity":{
Run the following commands to create a linked service named **AzureStorageLinked
Replace &lt;accountName&gt; and &lt;accountKey&gt; with name and key of your Azure storage account before executing the commands. ```powershell
-$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/linkedservices/AzureStorageLinkedService?api-version=${apiVersion}"
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/linkedservices/AzureStorageLinkedService?api-version=${apiVersion}"
+ $body = @" { "name":"AzureStorageLinkedService",
$body = @"
} "@ $response = Invoke-RestMethod -Method PUT -Uri $request -Header $authHeader -Body $body
-$response | ConvertTo-Json
+$response.content
``` Here is the sample output:
You define a dataset that represents the data to copy from a source to a sink. I
**Create InputDataset** ```powershell
-$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/datasets/InputDataset?api-version=${apiVersion}"
+
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/datasets/InputDataset?api-version=${apiVersion}"
+ $body = @" { "name":"InputDataset",
$body = @"
} } "@
-$response = Invoke-RestMethod -Method PUT -Uri $request -Header $authHeader -Body $body
-$response | ConvertTo-Json
+
+$response = Invoke-AzRestMethod -Path ${path} -Method PUT -Payload $body
+$response
+ ``` Here is the sample output:
Here is the sample output:
**Create OutputDataset** ```powershell
-$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/datasets/OutputDataset?api-version=${apiVersion}"
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/datasets/OutputDataset?api-version=${apiVersion}"
+ $body = @" { "name":"OutputDataset",
$body = @"
} } "@
-$response = Invoke-RestMethod -Method PUT -Uri $request -Header $authHeader -Body $body
-$response | ConvertTo-Json
+
+$response = Invoke-AzRestMethod -Path ${path} -Method PUT -Payload $body
+$response.content
``` Here is the sample output:
Here is the sample output:
"etag":"07013257-0000-0100-0000-5d6e18920000" } ```
-## Create pipeline
+## Create a pipeline
In this example, this pipeline contains one Copy activity. The Copy activity refers to the "InputDataset" and the "OutputDataset" created in the previous step as input and output. ```powershell
-$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartPipeline?api-version=${apiVersion}"
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartPipeline?api-version=${apiVersion}"
+ $body = @" { "name": "Adfv2QuickStartPipeline",
$body = @"
} } "@
-$response = Invoke-RestMethod -Method PUT -Uri $request -Header $authHeader -Body $body
-$response | ConvertTo-Json
+$response = Invoke-AzRestMethod -Path ${path} -Method PUT -Payload $body
+$response.content
``` Here is the sample output:
Here is the sample output:
In this step, you trigger a pipeline run. The pipeline run ID returned in the response body is used in later monitoring API. ```powershell
-$request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartPipeline/createRun?api-version=${apiVersion}"
-$response = Invoke-RestMethod -Method POST -Uri $request -Header $authHeader -Body $body
-$response | ConvertTo-Json
-$runId = $response.runId
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartPipeline/createRun?api-version=${apiVersion}"
+
+$response = Invoke-AzRestMethod -Path ${path} -Method POST
+$response.content
``` Here is the sample output:
Here is the sample output:
} ```
-## Monitor pipeline
+You can also get the runId by using the following command:
+```powershell
-1. Run the following script to continuously check the pipeline run status until it finishes copying the data.
+($response.content | ConvertFrom-Json).runId
+
+```
+
+## Parameterize your pipeline
+
+You can create a pipeline with parameters. In the following example, we create an input dataset and an output dataset that take input and output file names as parameters supplied to the pipeline.
- ```powershell
- $request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelineruns/${runId}?api-version=${apiVersion}"
- while ($True) {
- $response = Invoke-RestMethod -Method GET -Uri $request -Header $authHeader
- Write-Host "Pipeline run status: " $response.Status -foregroundcolor "Yellow"
- if ( ($response.Status -eq "InProgress") -or ($response.Status -eq "Queued") ) {
- Start-Sleep -Seconds 15
+## Create parameterized input dataset
+
+Define a parameter called strInputFileName, and use it as the file name for the dataset.
+
+```powershell
+
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/datasets/ParamInputDataset?api-version=${apiVersion}"
+
+$body = @"
+{
+ "name": "ParamInputDataset",
+ "properties": {
+ "linkedServiceName": {
+ "referenceName": "AzureStorageLinkedService",
+ "type": "LinkedServiceReference"
+ },
+ "parameters": {
+ "strInputFileName": {
+ "type": "string"
+ }
+ },
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "fileName": {
+ "value": "@dataset().strInputFileName",
+ "type": "Expression"
+ },
+ "folderPath": "input",
+ "container": "adftutorial"
+ }
}
- else {
- $response | ConvertTo-Json
- break
+ },
+ "type": "Microsoft.DataFactory/factories/datasets"
+}
+"@
+
+$response = Invoke-AzRestMethod -Path ${path} -Method PUT -Payload $body
+$response.content
+
+```
+
+Here is the sample output:
+
+```json
+{
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/datasets/ParamInputDataset",
+ "name": "ParamInputDataset",
+ "type": "Microsoft.DataFactory/factories/datasets",
+ "properties": {
+ "linkedServiceName": {
+ "referenceName": "AzureStorageLinkedService",
+ "type": "LinkedServiceReference"
+ },
+ "parameters": {
+ "strInputFileName": {
+ "type": "string"
+ }
+ },
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "fileName": {
+ "value": "@dataset().strInputFileName",
+ "type": "Expression"
+ },
+ "folderPath": "input",
+ "container": "adftutorial"
+ }
}
- }
- ```
+ },
+ "etag": "00000000-0000-0000-0000-000000000000"
+}
- Here is the sample output:
+```
- ```json
- {
- "runId":"04a2bb9a-71ea-4c31-b46e-75276b61bafc",
- "debugRunId":null,
- "runGroupId":"04a2bb9a-71ea-4c31-b46e-75276b61bafc",
- "pipelineName":"Adfv2QuickStartPipeline",
- "parameters":{
+## Create parameterized output dataset
++
+Define a parameter called strOutPutFileName, and use it as the file name for the dataset.
+
+```powershell
++
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/datasets/ParamOutputDataset?api-version=${apiVersion}"
+$body = @"
+{
+ "name": "ParamOutputDataset",
+ "properties": {
+ "linkedServiceName": {
+ "referenceName": "AzureStorageLinkedService",
+ "type": "LinkedServiceReference"
},
- "invokedBy":{
- "id":"2bb3938176ee43439752475aa12b2251",
- "name":"Manual",
- "invokedByType":"Manual"
+ "parameters": {
+ "strOutPutFileName": {
+ "type": "string"
+ }
},
- "runStart":"2019-09-03T07:22:47.0075159Z",
- "runEnd":"2019-09-03T07:22:57.8862692Z",
- "durationInMs":10878,
- "status":"Succeeded",
- "message":"",
- "lastUpdated":"2019-09-03T07:22:57.8862692Z",
- "annotations":[
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "fileName": {
+ "value": "@dataset().strOutPutFileName",
+ "type": "Expression"
+ },
+ "folderPath": "output",
+ "container": "adftutorial"
+ }
+ }
+ },
+ "type": "Microsoft.DataFactory/factories/datasets"
+}
- ],
- "runDimension":{
+"@
+
+$response = Invoke-AzRestMethod -Path ${path} -Method PUT -Payload $body
+$response.content
+```
+
+Here is the sample output:
+
+```json
+{
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/datasets/ParamOutputDataset",
+ "name": "ParamOutputDataset",
+ "type": "Microsoft.DataFactory/factories/datasets",
+ "properties": {
+ "linkedServiceName": {
+ "referenceName": "AzureStorageLinkedService",
+ "type": "LinkedServiceReference"
+ },
+ "parameters": {
+ "strOutPutFileName": {
+ "type": "string"
+ }
},
- "isLatest":true
+ "annotations": [],
+ "type": "Binary",
+ "typeProperties": {
+ "location": {
+ "type": "AzureBlobStorageLocation",
+ "fileName": {
+ "value": "@dataset().strOutPutFileName",
+ "type": "Expression"
+ },
+ "folderPath": "output",
+ "container": "adftutorial"
+ }
+ }
+ },
+ "etag": "00000000-0000-0000-0000-000000000000"
+}
+```
+
+## Create parameterized pipeline
+
+Define a pipeline with two pipeline-level parameters: strParamInputFileName and strParamOutputFileName. Then link these two parameters to the strInputFileName and strOutPutFileName parameters of the datasets.
+
+```powershell
+
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartParamPipeline?api-version=${apiVersion}"
+
+$body = @"
+{
+ "name": "Adfv2QuickStartParamPipeline",
+ "properties": {
+ "activities": [
+ {
+ "name": "CopyFromBlobToBlob",
+ "type": "Copy",
+ "dependsOn": [],
+ "policy": {
+ "timeout": "7.00:00:00",
+ "retry": 0,
+ "retryIntervalInSeconds": 30,
+ "secureOutput": false,
+ "secureInput": false
+ },
+ "userProperties": [],
+ "typeProperties": {
+ "source": {
+ "type": "BinarySource",
+ "storeSettings": {
+ "type": "AzureBlobStorageReadSettings",
+ "recursive": true
+ }
+ },
+ "sink": {
+ "type": "BinarySink",
+ "storeSettings": {
+ "type": "AzureBlobStorageWriteSettings"
+ }
+ },
+ "enableStaging": false
+ },
+ "inputs": [
+ {
+ "referenceName": "ParamInputDataset",
+ "type": "DatasetReference",
+ "parameters": {
+ "strInputFileName": {
+ "value": "@pipeline().parameters.strParamInputFileName",
+ "type": "Expression"
+ }
+ }
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "ParamOutputDataset",
+ "type": "DatasetReference",
+ "parameters": {
+ "strOutPutFileName": {
+ "value": "@pipeline().parameters.strParamOutputFileName",
+ "type": "Expression"
+ }
+ }
+ }
+ ]
+ }
+ ],
+
+ "parameters": {
+ "strParamInputFileName": {
+ "type": "String"
+ },
+ "strParamOutputFileName": {
+ "type": "String"
+ }
+ }
}
+}
+"@
+
+$response = Invoke-AzRestMethod -Path ${path} -Method PUT -Payload $body
+$response.content
++
+```
+
+Here is the sample output:
+
+```json
+
+{
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelines/Adfv2QuickStartParamPipeline",
+ "name": "Adfv2QuickStartParamPipeline",
+ "type": "Microsoft.DataFactory/factories/pipelines",
+ "properties": {
+ "activities": [
+ {
+ "name": "CopyFromBlobToBlob",
+ "type": "Copy",
+ "dependsOn": [],
+ "policy": {
+ "timeout": "7.00:00:00",
+ "retry": 0,
+ "retryIntervalInSeconds": 30,
+ "secureOutput": false,
+ "secureInput": false
+ },
+ "userProperties": [],
+ "typeProperties": {
+ "source": {
+ "type": "BinarySource",
+ "storeSettings": {
+ "type": "AzureBlobStorageReadSettings",
+ "recursive": true
+ }
+ },
+ "sink": {
+ "type": "BinarySink",
+ "storeSettings": {
+ "type": "AzureBlobStorageWriteSettings"
+ }
+ },
+ "enableStaging": false
+ },
+ "inputs": [
+ {
+ "referenceName": "ParamInputDataset",
+ "type": "DatasetReference",
+ "parameters": {
+ "strInputFileName": {
+ "value": "@pipeline().parameters.strParamInputFileName",
+ "type": "Expression"
+ }
+ }
+ }
+ ],
+ "outputs": [
+ {
+ "referenceName": "ParamOutputDataset",
+ "type": "DatasetReference",
+ "parameters": {
+ "strOutPutFileName": {
+ "value": "@pipeline().parameters.strParamOutputFileName",
+ "type": "Expression"
+ }
+ }
+ }
+ ]
+ }
+ ],
+ "parameters": {
+ "strParamInputFileName": {
+ "type": "String"
+ },
+ "strParamOutputFileName": {
+ "type": "String"
+ }
+ }
+ },
+ "etag": "5e01918d-0000-0100-0000-60d569a90000"
+}
+
+```
+
+## Create pipeline run with parameters
+
+You can now specify values for the parameters when you create the pipeline run.
+
+```powershell
+
+$path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelines/Adfv2QuickStartParamPipeline/createRun?api-version=${apiVersion}"
+
+$body = @"
+{
+ "strParamInputFileName": "emp2.txt",
+ "strParamOutputFileName": "aloha.txt"
+}
+"@
+
+$response = Invoke-AzRestMethod -Path ${path} -Method POST -Payload $body
+$response.content
+$runId = ($response.content | ConvertFrom-Json).runId
+
+```
+Here is the sample output:
+
+```json
+{"runId":"ffc9c2a8-d86a-46d5-9208-28b3551007d8"}
+```
++
+## Monitor pipeline
+
+1. Run the following script to continuously check the pipeline run status until it finishes copying the data.
+
+ ```powershell
+ $path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelineruns/${runId}?api-version=${apiVersion}"
+
+
+ while ($True) {
+
+ $response = Invoke-AzRestMethod -Path ${path} -Method GET
+ $response = $response.content | ConvertFrom-Json
+
+ Write-Host "Pipeline run status: " $response.Status -foregroundcolor "Yellow"
+
+ if ( ($response.Status -eq "InProgress") -or ($response.Status -eq "Queued") -or ($response.Status -eq "In Progress") ) {
+ Start-Sleep -Seconds 10
+ }
+ else {
+ $response | ConvertTo-Json
+ break
+ }
+ }
+ ```
+
+ Here is the sample output:
+
+ ```json
+ {
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelineruns/ffc9c2a8-d86a-46d5-9208-28b3551007d8",
+ "runId": "ffc9c2a8-d86a-46d5-9208-28b3551007d8",
+ "debugRunId": null,
+ "runGroupId": "ffc9c2a8-d86a-46d5-9208-28b3551007d8",
+ "pipelineName": "Adfv2QuickStartParamPipeline",
+ "parameters": {
+ "strParamInputFileName": "emp2.txt",
+ "strParamOutputFileName": "aloha.txt"
+ },
+ "invokedBy": {
+ "id": "9c0275ed99994c18932317a325276544",
+ "name": "Manual",
+ "invokedByType": "Manual"
+ },
+ "runStart": "2021-06-25T05:34:06.8424413Z",
+ "runEnd": "2021-06-25T05:34:13.2936585Z",
+ "durationInMs": 6451,
+ "status": "Succeeded",
+ "message": "",
+ "lastUpdated": "2021-06-25T05:34:13.2936585Z",
+ "annotations": [],
+ "runDimension": {},
+ "isLatest": true
+ }
``` 2. Run the following script to retrieve copy activity run details, for example, size of the data read/written. ```powershell
- $request = "https://management.azure.com/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelineruns/${runId}/queryActivityruns?api-version=${apiVersion}&startTime="+(Get-Date).ToString('yyyy-MM-dd')+"&endTime="+(Get-Date).AddDays(1).ToString('yyyy-MM-dd')+"&pipelineName=Adfv2QuickStartPipeline"
- $response = Invoke-RestMethod -Method POST -Uri $request -Header $authHeader
- $response | ConvertTo-Json
+ $path = "/subscriptions/${subscriptionId}/resourceGroups/${resourceGroupName}/providers/Microsoft.DataFactory/factories/${factoryName}/pipelineruns/${runId}/queryActivityruns?api-version=${apiVersion}"
+
+
+ while ($True) {
+
+ $response = Invoke-AzRestMethod -Path ${path} -Method POST
+ $responseContent = $response.content | ConvertFrom-Json
+ $responseContentValue = $responseContent.value
+
+ Write-Host "Activity run status: " $responseContentValue.Status -foregroundcolor "Yellow"
+
+ if ( ($responseContentValue.Status -eq "InProgress") -or ($responseContentValue.Status -eq "Queued") -or ($responseContentValue.Status -eq "In Progress") ) {
+ Start-Sleep -Seconds 10
+ }
+ else {
+ $responseContentValue | ConvertTo-Json
+ break
+ }
+ }
``` Here is the sample output: ```json
- {
- "value":[
- {
- "activityRunEnd":"2019-09-03T07:22:56.6498704Z",
- "activityName":"CopyFromBlobToBlob",
- "activityRunStart":"2019-09-03T07:22:49.0719311Z",
- "activityType":"Copy",
- "durationInMs":7577,
- "retryAttempt":null,
- "error":"@{errorCode=; message=; failureType=; target=CopyFromBlobToBlob}",
- "activityRunId":"32951886-814a-4d6b-b82b-505936e227cc",
- "iterationHash":"",
- "input":"@{source=; sink=; enableStaging=False}",
- "linkedServiceName":"",
- "output":"@{dataRead=20; dataWritten=20; filesRead=1; filesWritten=1; sourcePeakConnections=1; sinkPeakConnections=1; copyDuration=4; throughput=0.01; errors=System.Object[]; effectiveIntegrationRuntime=DefaultIntegrationRuntime (Central US); usedDataIntegrationUnits=4; usedParallelCopies=1; executionDetails=System.Object[]}",
- "userProperties":"",
- "pipelineName":"Adfv2QuickStartPipeline",
- "pipelineRunId":"04a2bb9a-71ea-4c31-b46e-75276b61bafc",
- "status":"Succeeded",
- "recoveryStatus":"None",
- "integrationRuntimeNames":"defaultintegrationruntime",
- "executionDetails":"@{integrationRuntime=System.Object[]}"
+ {
+ "activityRunEnd": "2021-06-25T05:34:11.9536764Z",
+ "activityName": "CopyFromBlobToBlob",
+ "activityRunStart": "2021-06-25T05:34:07.5161151Z",
+ "activityType": "Copy",
+ "durationInMs": 4437,
+ "retryAttempt": null,
+ "error": {
+ "errorCode": "",
+ "message": "",
+ "failureType": "",
+ "target": "CopyFromBlobToBlob",
+ "details": ""
+ },
+ "activityRunId": "40bab243-9bbf-4538-9336-b797a2f98e2b",
+ "iterationHash": "",
+ "input": {
+ "source": {
+ "type": "BinarySource",
+ "storeSettings": "@{type=AzureBlobStorageReadSettings; recursive=True}"
+ },
+ "sink": {
+ "type": "BinarySink",
+ "storeSettings": "@{type=AzureBlobStorageWriteSettings}"
+ },
+ "enableStaging": false
+ },
+ "linkedServiceName": "",
+ "output": {
+ "dataRead": 134,
+ "dataWritten": 134,
+ "filesRead": 1,
+ "filesWritten": 1,
+ "sourcePeakConnections": 1,
+ "sinkPeakConnections": 1,
+ "copyDuration": 3,
+ "throughput": 0.044,
+ "errors": [],
+ "effectiveIntegrationRuntime": "DefaultIntegrationRuntime (East US)",
+ "usedDataIntegrationUnits": 4,
+ "billingReference": {
+ "activityType": "DataMovement",
+ "billableDuration": ""
+ },
+ "usedParallelCopies": 1,
+ "executionDetails": [
+ "@{source=; sink=; status=Succeeded; start=06/25/2021 05:34:07; duration=3; usedDataIntegrationUnits=4; usedParallelCopies=1; profile=; detailedDurations=}"
+ ],
+ "dataConsistencyVerification": {
+ "VerificationResult": "NotVerified"
+ },
+ "durationInQueue": {
+ "integrationRuntimeQueue": 0
}
- ]
- }
+ },
+ "userProperties": {},
+ "pipelineName": "Adfv2QuickStartParamPipeline",
+ "pipelineRunId": "ffc9c2a8-d86a-46d5-9208-28b3551007d8",
+ "status": "Succeeded",
+ "recoveryStatus": "None",
+ "integrationRuntimeNames": [
+ "defaultintegrationruntime"
+ ],
+ "executionDetails": {
+ "integrationRuntime": [
+ "@{name=DefaultIntegrationRuntime; type=Managed; location=East US; nodes=}"
+ ]
+ },
+ "id": "/subscriptions/<subscriptionId>/resourceGroups/<resourceGroupName>/providers/Microsoft.DataFactory/factories/<factoryName>/pipelineruns/ffc9c2a8-d86a-46d5-9208-28b3551007d8/activityruns/40bab243-9bbf-4538-9336-b797a2f98e2b"
+ }
``` ## Verify the output
databox Data Box Troubleshoot Share Access https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-troubleshoot-share-access.md
+
+ Title: Troubleshoot share connection failure during data copy to Azure Data Box | Microsoft Docs
+description: Describes how to identify network issues preventing SMB share connections during data copy to an Azure Data Box.
++++++ Last updated : 08/23/2021+++
+# Troubleshoot share connection failure during data copy to Azure Data Box
+
+This article describes what to do when you can't connect to an SMB share on your Azure Data Box device because of a network issue.
+
+The most common reasons for being unable to connect to a share on your device are:
+
+- [a domain issue](#check-for-a-domain-issue)
+- [account is locked out of the share](#account-locked-out-of-share)
+- [a group policy is preventing a connection](#check-for-a-blocking-group-policy)
+- [a permissions issue](#check-for-permissions-issues)
+
+## Check for a domain issue
+
+To find out whether a domain issue is preventing a share connection:
+
+- Try to connect to the device, and use one of the following formats to enter your user name:
+
+ - `<device IP address>\<user name>`
+ - `\<user name>`
+
+If you can connect to the device, a domain issue isn't preventing your share connection.
+
+## Account locked out of share
+
+After five failed attempts to connect to a share with an incorrect password, the share will be locked, and you won't be able to connect for 15 minutes.
+
+The failed connection attempts may include background processes, such as retries, which you may not be aware of.
+
+> [!NOTE]
+> If you have an older device with Data Box version 4.0 or earlier, the account is locked for 30 minutes after 3 failed login attempts.
+
+**Error description.** You'll see one of the following errors, depending on how you're accessing the share:
+
+- If you're trying to connect from your host computer via SMB, you'll see this error: "The referenced account is currently locked out and may not be logged on to."
+
+ The following example shows the output from one such connection attempt.
+
+ ```
+ C:\Users\Databoxuser>net use \\10.100.100.10\mydbresources_BlockBlob /u:10.100.100.10\mydbresources
+ Enter the password for '10.100.100.10\mydbresources' to connect to '10.100.100.10':
+ System error 1909 has occurred.
+
+ The referenced account is currently locked out and may not be logged on to.
+ ```
+
+- If you're using the data copy service, you'll get the following notification in the local web UI of your device:
+
+ ![Screenshot of the Notifications pane in the local Web UI for a Data Box. A notification for a locked share account is highlighted.](media/data-box-troubleshoot-share-access/share-lock-01.png)
++
+**Suggested resolution.** To connect to an SMB share after a share account lockout, do these steps:
+
+1. Verify the SMB credentials for the share. In the local web UI of your device, go to **Connect and copy**, and select **SMB** for the share. You'll see the following dialog box.
+
+ ![Screenshot of Access Share And Copy Data screen for an SMB share on a Data Box. Copy icons for the account, username, and password are highlighted.](media/data-box-troubleshoot-share-access/get-share-credentials-01.png)
+
+1. After the lockout period ends (15 minutes, or 30 minutes on devices running Data Box version 4.0 or earlier), the lock will clear. You can now connect to the share.
+
+ - To connect to the share from your host computer via SMB, run the following command. For a procedure, see [Copy data to Data Box via SMB](data-box-deploy-copy-data.md#connect-to-data-box).
+
+ `net use \\<IP address of the device>\<share name> /u:<IP address of the device>\<user name for the share>`
+
+ - To connect to a share using the data copy service, check for a notification that indicates the user account has been unlocked, as shown below. On the **Copy data** pane, you can now [copy data to the Data Box](data-box-deploy-copy-data-via-copy-service.md#copy-data-to-data-box).
+
+ ![Screenshot of Copy Data pane in Data Box local Web UI.Notification that the share user account was unlocked and the Data Copy option are highlighted.](media/data-box-troubleshoot-share-access/share-lock-02.png)
++
+## Check for a blocking group policy
+
+Check whether a group policy on your client/host computer is preventing you from connecting to the share. If possible, move your client/host computer to an organizational unit (OU) that doesn't have any Group Policy objects (GPOs) applied.
+
+To ensure that no group policies are preventing your access to shares on the Data Box:
+
+* Ensure that your client/host computer is in its own OU for Active Directory.
+
+* Make sure that no GPOs are applied to your client/host computer. You can block inheritance to ensure that the client/host computer (child node) doesn't automatically inherit any GPOs from the parent. For more information, see [block inheritance](/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc731076(v=ws.11)).
+
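+To see which GPOs currently apply to the client/host computer, the following is a minimal sketch using the built-in `gpresult` tool from a PowerShell session (the report path is a placeholder):
+
+```powershell
+# Summarize the GPOs applied to this computer and the logged-on user.
+gpresult /r
+
+# Or write a full HTML report to review later (path is a placeholder).
+gpresult /h "C:\temp\gpo-report.html"
+```
+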
+## Check for permissions issues
+
+If there's no domain issue, and no group policies are blocking your access to the share, check for permissions issues on your device by reviewing audit logs and security event logs.
+
+### Review security event logs
+
+Review Windows security event logs on the device for errors that indicate an authentication failure.
+
+You can review the `Smbserver.Security` event logs in the `etw` folder or view security errors in Event Viewer.
+
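+If you prefer to query the log with PowerShell instead of Event Viewer, the following is a rough sketch (run it in an elevated session on the computer where you're reviewing the logs; the match strings simply look for the errors shown below, so adjust them and `-MaxEvents` as needed):
+
+```powershell
+# Pull recent Security log entries and keep those that mention the SMB/LAN Manager errors described below.
+Get-WinEvent -LogName Security -MaxEvents 200 |
+    Where-Object { $_.Message -match "SMB Session Authentication Failure" -or $_.Message -match "LmCompatibilityLevel" } |
+    Format-List TimeCreated, Id, Message
+```
+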
+To review Windows Security event logs in Event Viewer, do these steps:
+
+1. To open the Windows Event Viewer, on the **Start screen**, type **Event Viewer**, and press Enter.
+
+1. In the Event Viewer navigation pane, expand **Windows Logs**, and select the **Security** folder.
+
+ ![Screenshot of the Windows Event Viewer with Security events displayed. The Windows folder and Security subfolder are highlighted.](media/data-box-troubleshoot-share-access/event-viewer-01.png)
+
+1. Look for one of the following errors:
+
+ Error 1:
+
+ ```xml
+ SMB Session Authentication Failure
+ Client Name: \\<ClientIP>
+ Client Address: <ClientIP:Port>
+ User Name:
+ Session ID: 0x100000000021
+ Status: The attempted logon is invalid. This is either due to a bad username or authentication information. (0xC000006D)
+ SPN: session setup failed before the SPN could be queried
+ SPN Validation Policy: SPN optional / no validation
+ ```
+
+ Error 2:
+ ```xml
+ LmCompatibilityLevel value is different from the default.
+ Configured LM Compatibility Level: 5
+ Default LM Compatibility Level: 3
+ ```
+
+ Either error indicates that you need to change the LAN Manager authentication level on your device.
+
+### Change LAN Manager authentication level
+
+To change the LAN Manager authentication level on your device, you can either [use Local Security Policy](#use-local-security-policy) or [update the registry directly](#update-the-registry).
+
+#### Use Local Security Policy
+
+To change the LAN Manager authentication level by using Local Security Policy, do these steps:
+
+1. To open Local Security Policy, on the **Start** screen, type `secpol.msc`, and then press Enter.
+
+1. Go to **Local Policies** > **Security Options**, and open **Network Security: LAN Manager authentication level**.
+
+ ![Screenshot showing the Security Options in the Local Security Policy editor. The "Network Security: LAN Manager authentication level" policy is highlighted.](media/data-box-troubleshoot-share-access/security-policy-01.png)
+
+1. Change the setting to **Send NTLMv2 response only. Refuse LM & NTLM**.
+
+ ![Screenshot showing "Network Security: LAN Manager authentication level" policy in the Local Security Policy editor. The "Send NTLMv2 response only. Refuse LM & NTLM" option is highlighted.](media/data-box-troubleshoot-share-access/security-policy-02.png)
+
+#### Update the registry
+
+If you can't change the LAN Manager authentication level in Local Security Policy, update the registry directly.
+
+To update the registry directly, do these steps:
+
+1. To open Registry Editor (regedt32.exe), on the **Start** screen, type `regedt32`, and then press Enter.
+
+1. Navigate to `HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa`.
+
+ ![Screenshot showing the Registry Editor with the LSA folder highlighted.](media/data-box-troubleshoot-share-access/security-policy-03.png)
+
+1. In the LSA folder, open the `LmCompatibilityLevel` registry value, and change its value data to 5.
+
+    ![Screenshot of the dialog box used to change the LmCompatibilityLevel value in the registry. The Value Data field is highlighted.](media/data-box-troubleshoot-share-access/security-policy-04.png)
+
+1. Restart your computer so that the registry changes take effect.
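
If you prefer to script this change instead of using the Registry Editor UI, an equivalent command is shown below. Run it from an elevated command prompt; this is a general Windows sketch, so verify it in your environment before relying on it. A restart is still required afterward.

```
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa" /v LmCompatibilityLevel /t REG_DWORD /d 5 /f
```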
+
+## Next steps
+
+- [Copy data via SMB](data-box-deploy-copy-data.md).
+- [Troubleshoot data copy issues in Data Box](data-box-troubleshoot.md).
+- [Contact Microsoft support](data-box-disk-contact-microsoft-support.md).
databox Data Box Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/databox/data-box-troubleshoot.md
Title: Troubleshoot issues on your Azure Data Box, Azure Data Box Heavy
-description: Describes how to troubleshoot issues seen in Azure Data Box and Azure Data Box Heavy when copying data to these devices.
+ Title: Troubleshoot issues during data copies to your Azure Data Box, Azure Data Box Heavy
+description: Describes how to troubleshoot issues when copying data to Azure Data Box and Azure Data Box Heavy devices.
Previously updated : 07/14/2021 Last updated : 08/11/2021
-# Troubleshoot issues related to Azure Data Box and Azure Data Box Heavy
+# Troubleshoot data copy issues on Azure Data Box and Azure Data Box Heavy
-This article details information on how to troubleshoot issues you may see when using the Azure Data Box or Azure Data Box Heavy for import orders. The article includes the list of possible errors seen when data is copied to the Data Box or when data is uploaded from Data Box for an import order.
+This article describes how to troubleshoot issues when performing data copies or data uploads for an Azure Data Box or Azure Data Box Heavy import order. The article includes the list of possible errors seen when data is copied to the Data Box or uploaded from Data Box.
-The information in this article does not apply to export orders created for Data Box.
+For help troubleshooting issues with accessing the shares on your device, see [Troubleshoot share connection failure during data copy](data-box-troubleshoot-share-access.md).
++
+> [!NOTE]
+> The information in this article applies to import orders only.
## Error classes
ddos-protection Diagnostic Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/ddos-protection/diagnostic-logging.md
If you want to automatically enable diagnostic logging on all public IPs within
- **Stream to an event hub**: Allows a log receiver to pick up logs using an Azure Event Hub. Event hubs enable integration with Splunk or other SIEM systems. To learn more about this option, see [Stream resource logs to an event hub](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-azure-event-hubs).
- **Send to Log Analytics**: Writes logs to the Azure Monitor service. To learn more about this option, see [Collect logs for use in Azure Monitor logs](../azure-monitor/essentials/resource-logs.md?toc=%2fazure%2fvirtual-network%2ftoc.json#send-to-log-analytics-workspace).
+### Query DDoS Protection logs in a Log Analytics workspace
+
+#### DDoSProtectionNotifications logs
+
+1. Under the **Log Analytics workspaces** blade, select your Log Analytics workspace.
+
+1. Under **General**, select **Logs**.
+
+1. In the query editor, enter the following Kusto query, change the time range to **Custom**, and set the range to the last three months. Then select **Run**.
+
+ ```kusto
+ AzureDiagnostics
+ | where Category == "DDoSProtectionNotifications"
+ ```
+
+#### DDoSMitigationFlowLogs
+
+1. Change the query to the following, keep the same time range, and select **Run**.
+
+ ```kusto
+ AzureDiagnostics
+ | where Category == "DDoSMitigationFlowLogs"
+ ```
+
+#### DDoSMitigationReports
+
+1. Change the query to the following, keep the same time range, and select **Run**.
+
+ ```kusto
+ AzureDiagnostics
+ | where Category == "DDoSMitigationReports"
+ ```
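
To review activity across all three categories at once, a combined query along the following lines can be useful. This is a sketch that reuses the category names from the queries above; adjust the time grain to your needs:

```kusto
AzureDiagnostics
| where Category in ("DDoSProtectionNotifications", "DDoSMitigationFlowLogs", "DDoSMitigationReports")
| summarize Events = count() by Category, bin(TimeGenerated, 1d)
| order by TimeGenerated asc
```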
### Log schemas

The following table lists the field names and descriptions:
defender-for-iot How To Configure With Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/defender-for-iot/device-builders/how-to-configure-with-sentinel.md
After connecting a **Subscription**, the hub data is available in Azure Sentinel
In this document, you learned how to connect Defender for IoT to Azure Sentinel. To learn more about threat detection and security data access, see the following articles: -- Learn how to use Azure Sentinel to [Quickstart: Get started with Azure Sentinel](/azure/defender-for-iot/device-builders/articles/sentinel/get-visibility.md).-- Learn how to [Access your IoT security data](how-to-security-data-access.md)
+- Learn how to use Azure Sentinel to [Quickstart: Get started with Azure Sentinel](/azure/sentinel/get-visibility).
+- Learn how to [Access your IoT security data](how-to-security-data-access.md)
digital-twins Concepts Data Ingress Egress https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-data-ingress-egress.md
# Data ingress and egress for Azure Digital Twins
-Azure Digital Twins is typically used together with other services to create flexible, connected solutions that use your data in a variety of ways.
+Azure Digital Twins is typically used together with other services to create flexible, connected solutions that use your data in different kinds of ways.
Using [event routes](concepts-route-events.md), Azure Digital Twins can receive data from upstream services such as [IoT Hub](../iot-hub/about-iot-hub.md) or [Logic Apps](../logic-apps/logic-apps-overview.md), which are used to deliver telemetry and notifications.
Azure Digital Twins can also route data to downstream services, such as [Azure M
## Data ingress
-Azure Digital Twins can be driven with data and events from any serviceΓÇö[IoT Hub](../iot-hub/about-iot-hub.md), [Logic Apps](../logic-apps/logic-apps-overview.md), your own custom service, and more. This allows you to collect telemetry from physical devices in your environment, and process this data using the Azure Digital Twins graph in the cloud.
+Azure Digital Twins can be driven with data and events from any serviceΓÇö[IoT Hub](../iot-hub/about-iot-hub.md), [Logic Apps](../logic-apps/logic-apps-overview.md), your own custom service, and more. This kind of data flow allows you to collect telemetry from physical devices in your environment, and process this data using the Azure Digital Twins graph in the cloud.
-Instead of having a built-in IoT Hub behind the scenes, Azure Digital Twins allows you to "bring your own" IoT Hub to use with the service. You can use an existing IoT Hub you currently have in production, or deploy a new one to be used for this purpose. This gives you full access to all of the device management capabilities of IoT Hub.
+Instead of having a built-in IoT Hub behind the scenes, Azure Digital Twins allows you to "bring your own" IoT Hub to use with the service. You can use an existing IoT Hub you currently have in production, or deploy a new one to be used for this purpose. This functionality gives you full access to all of the device management capabilities of IoT Hub.
To ingest data from any source into Azure Digital Twins, use an [Azure function](../azure-functions/functions-overview.md). Learn more about this pattern in [Ingest telemetry from IoT Hub](how-to-ingest-iot-hub-data.md), or try it out yourself in the Azure Digital Twins [Connect an end-to-end solution](tutorial-end-to-end.md).
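
For a sense of what that Azure function can look like, here's a minimal sketch of the ingestion pattern, not the official sample code. The app setting name, the twin property (`Temperature`), the message shape, and the assumption that the twin ID matches the device ID are all illustrative choices you'd adapt to your own models:

```csharp
using System;
using System.Threading.Tasks;
using Azure;
using Azure.DigitalTwins.Core;
using Azure.Identity;
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;
using Newtonsoft.Json.Linq;

public static class IngestTelemetryFunction
{
    // URL of the Azure Digital Twins instance, read from app settings (assumed setting name).
    private static readonly string adtInstanceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");

    [FunctionName("IngestTelemetry")]
    public static async Task Run([EventGridTrigger] EventGridEvent eventGridEvent, ILogger log)
    {
        var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), new DefaultAzureCredential());

        // Assumes the routed IoT Hub message body is JSON with a "Temperature" field.
        JObject message = JObject.Parse(eventGridEvent.Data.ToString());
        string deviceId = (string)message["systemProperties"]["iothub-connection-device-id"];
        double temperature = (double)message["body"]["Temperature"];

        // Patch the matching twin's Temperature property (assumes twin ID equals device ID).
        var patch = new JsonPatchDocument();
        patch.AppendReplace("/Temperature", temperature);
        await client.UpdateDigitalTwinAsync(deviceId, patch);

        log.LogInformation($"Updated twin '{deviceId}' with temperature {temperature}.");
    }
}
```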
Endpoints are attached to Azure Digital Twins using management APIs or the Azure
There are many other services where you may want to ultimately direct your data, such as [Azure Storage](../storage/common/storage-introduction.md), [Azure Maps](../azure-maps/about-azure-maps.md), [Azure Data Explorer](/azure/data-explorer/data-explorer-overview), or [Time Series Insights](../time-series-insights/overview-what-is-tsi.md). To send your data to services like these, attach the destination service to an endpoint.
-For example, if you are also using Azure Maps and want to correlate location with your Azure Digital Twins [twin graph](concepts-twins-graph.md), you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. Learn more about this in [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md)
+For example, if you're also using Azure Maps and want to correlate location with your Azure Digital Twins [twin graph](concepts-twins-graph.md), you can use Azure Functions with Event Grid to establish communication between all the services in your deployment. For more information on integrating Azure Maps, see [Use Azure Digital Twins to update an Azure Maps indoor map](how-to-integrate-maps.md)
You can also learn how to route data in a similar way to Time Series Insights, in [Integrate with Time Series Insights](how-to-integrate-time-series-insights.md).
digital-twins Concepts Event Notifications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-event-notifications.md
# Event notifications
-Different events in Azure Digital Twins produce **notifications**, which allow the solution backend to be aware when different actions are happening. These are then [routed](concepts-route-events.md) to different locations inside and outside of Azure Digital Twins that can use this information to take action.
+Different events in Azure Digital Twins produce **notifications**, which allow the solution backend to be aware when different actions are happening. These notifications are then [routed](concepts-route-events.md) to different locations inside and outside of Azure Digital Twins that can use this information to take action.
There are several types of notifications that can be generated, and notification messages may look different depending on which type of event generated them. This article gives detail about different types of messages, and what they might look like.
Services have to add a sequence number on all the notifications to indicate thei
Notifications emitted by Azure Digital Twins to Event Grid will be automatically formatted to either the CloudEvents schema or EventGridEvent schema, depending on the schema type defined in the event grid topic.
-Extension attributes on headers will be added as properties on the Event Grid schema inside of the payload.
+Extension attributes on headers will be added as properties on the Event Grid schema in the payload.
### Event notification bodies
-The bodies of notification messages are described here in JSON. Depending on the serialization desired for the message body (such as with JSON, CBOR, Protobuf, etc.), the message body may be serialized differently.
+The bodies of notification messages are described here in JSON. Depending on the wanted serialization type for the message body (such as with JSON, CBOR, Protobuf, and so on), the message body may be serialized differently.
The set of fields that the body contains varies with different notification types.
-The following sections go into more detail about the different types of notifications emitted by IoT Hub and Azure Digital Twins (or other Azure IoT services). You will read about the things that trigger each notification type, and the set of fields included with each type of notification body.
+The following sections go into more detail about the different types of notifications emitted by IoT Hub and Azure Digital Twins (or other Azure IoT services). You'll read about the things that trigger each notification type, and the set of fields included with each type of notification body.
## Digital twin change notifications
The data in the corresponding notification (if synchronously executed by the ser
} ```
-This is the information that will go in the `data` field of the lifecycle notification message.
+This data is the information that will go in the `data` field of the lifecycle notification message.
## Digital twin lifecycle notifications
-All [digital twins](concepts-twins-graph.md) emit notifications, regardless of whether they represent [IoT Hub devices in Azure Digital Twins](how-to-ingest-iot-hub-data.md) or not. This is because of **lifecycle notifications**, which are about the digital twin itself.
+Whether [digital twins](concepts-twins-graph.md) represent [IoT Hub devices in Azure Digital Twins](how-to-ingest-iot-hub-data.md) or not, they will all emit notifications. They do so because of **lifecycle notifications**, which are about the digital twin itself.
Lifecycle notifications are triggered when: * A digital twin is created
Here are the fields in the body of a lifecycle notification.
### Body details
-Here is an example of a lifecycle notification message:
+Here's an example of a lifecycle notification message:
```json {
Here is an example of a lifecycle notification message:
} ```
-Inside the message, the `data` field contains the data of the affected digital twin, represented in JSON format. The schema for this is *Digital Twins Resource 7.1*.
+Inside the message, the `data` field contains the data of the affected digital twin, represented in JSON format. The schema for this `data` field is *Digital Twins Resource 7.1*.
For creation events, the `data` payload reflects the state of the twin after the resource is created, so it should include all system-generated elements just like a `GET` call.
-Here is an example of a the data for an [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) device, with components and no top-level properties. Properties that do not make sense for devices (such as reported properties) should be omitted. This is the information that will go in the `data` field of the lifecycle notification message.
+Here's an example of the data for an [IoT Plug and Play](../iot-develop/overview-iot-plug-and-play.md) device, with components and no top-level properties. Properties that don't make sense for devices (such as reported properties) should be omitted. The following JSON object is the information that will go in the `data` field of the lifecycle notification message:
```json {
Here is an example of a the data for an [IoT Plug and Play](../iot-develop/overv
} ```
-Here is another example of digital twin data. This one is based on a [model](concepts-models.md), and does not support components:
+Here's another example of digital twin data. This one is based on a [model](concepts-models.md), and doesn't support components:
```json {
Here are the fields in the body of a relationship change notification.
Inside the message, the `data` field contains the payload of a relationship, in JSON format. It uses the same format as a `GET` request for a relationship via the [DigitalTwins API](/rest/api/digital-twins/dataplane/twins).
-Here is an example of the data for an update relationship notification. "Updating a relationship" means properties of the relationship have changed, so the data shows the updated property and its new value. This is the information that will go in the `data` field of the digital twin relationship notification message.
+Here's an example of the data for an update relationship notification. "Updating a relationship" means properties of the relationship have changed, so the data shows the updated property and its new value. The following JSON object is the information that will go in the `data` field of the digital twin relationship notification message:
```json {
Here is an example of the data for an update relationship notification. "Updatin
} ```
-Here is an example of the data for a create or delete relationship notification. For `Relationship.Delete`, the body is the same as the `GET` request, and it gets the latest state before deletion.
+Here's an example of the data for a create or delete relationship notification. For `Relationship.Delete`, the body is the same as the `GET` request, and it gets the latest state before deletion.
```json {
Here are the fields in the body of a telemetry message.
The body contains the telemetry measurement along with some contextual information about the device.
-Here is an example telemetry message body:
+Here's an example telemetry message body:
```json {
digital-twins Concepts High Availability Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-high-availability-disaster-recovery.md
This article discusses the HA and DR features offered specifically by the Azure
Azure Digital Twins supports these feature options:
* *Intra-region HA* – Built-in redundancy to deliver on uptime of the service
-* *Cross region DR* ΓÇô Failover to a geo-paired Azure region in the case of an unexpected data center failure
+* *Cross region DR* – Failover to a geo-paired Azure region if there's an unexpected data center failure
You can also see the [Best practices](#best-practices) section for general Azure guidance about designing for HA/DR.

## Intra-region HA
-Azure Digital Twins provides intra-region HA by implementing redundancies within the service. This is reflected in the [service SLA](https://azure.microsoft.com/support/legal/sla/digital-twins) for uptime. **No additional work is required by the developers of an Azure Digital Twins solution to take advantage of these HA features.** Although Azure Digital Twins offers a reasonably high uptime guarantee, transient failures can still be expected, as with any distributed computing platform. Appropriate retry policies should be built in to the components interacting with a cloud application to deal with transient failures.
+Azure Digital Twins provides intra-region HA by implementing redundancies within the service. This functionality is reflected in the [service SLA](https://azure.microsoft.com/support/legal/sla/digital-twins) for uptime. **No additional work is required by the developers of an Azure Digital Twins solution to take advantage of these HA features.** Although Azure Digital Twins offers a reasonably high uptime guarantee, transient failures can still be expected, as with any distributed computing platform. Appropriate retry policies should be built in to the components interacting with a cloud application to deal with transient failures.
## Cross region DR
-There could be some rare situations when a data center experiences extended outages due to power failures or other events in the region. Such events are rare, and during such failures, the intra region HA capability described above may not help. Azure Digital Twins addresses this with Microsoft-initiated failover.
+There could be some rare situations when a data center experiences extended outages because of power failures or other events in the region. Such events are rare, and during such failures, the intra region HA capability described above may not help. Azure Digital Twins addresses this scenario with Microsoft-initiated failover.
**Microsoft-initiated failover** is exercised in rare situations to fail over all the Azure Digital Twins instances from an affected region to the corresponding [geo-paired region](../best-practices-availability-paired-regions.md). This process is a default option (with no way for users to opt out), and requires no intervention from the user. Microsoft reserves the right to make a determination of when this option will be exercised. This mechanism doesn't involve user consent before the user's instance is failed over.

>[!NOTE]
> Some Azure services provide an additional option called **customer-initiated failover**, which enables customers to initiate a failover just for their instance, such as to run a DR drill. This mechanism is currently **not supported** by Azure Digital Twins.
-If it's important for you to keep all data within certain geographical areas, please check the location of the [geo-paired region](../best-practices-availability-paired-regions.md#azure-regional-pairs) for the region where you're creating your instance, to ensure that it meets your data residency requirements.
+If it's important for you to keep all data within certain geographical areas, check the location of the [geo-paired region](../best-practices-availability-paired-regions.md#azure-regional-pairs) for the region where you're creating your instance, to ensure that it meets your data residency requirements.
>[!NOTE]
> Some Azure services provide an option for users to configure a different region for failover, as a way to meet data residency requirements. This capability is currently **not supported** by Azure Digital Twins.
To view Service Health events...
:::image type="content" source="media/concepts-high-availability-disaster-recovery/issue-updates.png" alt-text="Screenshot of the Azure portal showing the 'Health History' page with the 'Issue updates' tab highlighted. The tab displays the status of entries." lightbox="media/concepts-high-availability-disaster-recovery/issue-updates.png":::
-Note that the information displayed in this tool isn't specific to one Azure Digital instance. After using Service Health to understand what's going with the Azure Digital Twins service in a certain region or subscription, you can take monitoring a step further by using the [Resource health tool](troubleshoot-resource-health.md) to drill down into specific instances and see whether they're impacted.
+The information displayed in this tool isn't specific to one Azure Digital instance. After using Service Health to understand what's going with the Azure Digital Twins service in a certain region or subscription, you can take monitoring a step further by using the [Resource health tool](troubleshoot-resource-health.md) to drill down into specific instances and see whether they're affected.
## Best practices
digital-twins Concepts Ontologies Adopt https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-adopt.md
# Adopting an industry ontology
-Because it can be easier to start with an open-source DTDL ontology than from a blank page, Microsoft is partnering with domain experts to publish ontologies. These ontologies represent widely accepted industry conventions and support various customer use cases.
+Because it can be easier to start with an open-source Digital Twins Definition Language (DTDL) ontology than from a blank page, Microsoft is partnering with domain experts to publish ontologies. These ontologies represent widely accepted industry conventions and support various customer use cases.
The result is a set of open-source DTDL-based ontologies, which learn from, build on, or directly use industry standards. The ontologies are designed to meet the needs of downstream developers, with the potential to be widely adopted and extended by the industry.
-At this time, Microsoft has worked with partners to develop ontologies for [smart buildings](#realestatecore-smart-building-ontology), [smart cities](#smart-cities-ontology), and [energy grids](#energy-grid-ontology), which provide common ground for modeling based on standards in these industries to avoid the need for reinvention.
+At this time, Microsoft has worked with partners to develop ontologies for [smart buildings](#realestatecore-smart-building-ontology), [smart cities](#smart-cities-ontology), and [energy grids](#energy-grid-ontology). These ontologies provide common ground for modeling based on standards in these industries to avoid the need for reinvention.
Each ontology is focused on an initial set of models. The ontology authors welcome you to contribute to extend the initial set of use cases and improve the existing models.
Each ontology is focused on an initial set of models. The ontology authors welco
*Get the ontology from the following repository:* [Digital Twins Definition Language-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building).
-Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/), a Swedish consortium of real estate owners, software vendors, and research institutions, to deliver this open-source DTDL ontology for the real estate industry.
+Microsoft has partnered with [RealEstateCore](https://www.realestatecore.io/) to deliver this open-source DTDL ontology for the real estate industry. [RealEstateCore](https://www.realestatecore.io/) is a Swedish consortium of real estate owners, software vendors, and research institutions.
This smart buildings ontology provides common ground for modeling smart buildings, using industry standards (like [BRICK Schema](https://brickschema.org/ontology/) or [W3C Building Topology Ontology](https://w3c-lbd-cg.github.io/bot/index.html)) to avoid reinvention. The ontology also comes with best practices for how to consume and properly extend it. To learn more about the ontology's structure and modeling conventions, how to use it, how to extend it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-building](https://github.com/Azure/opendigitaltwins-building).
-You can also read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and accompanying video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794).
+You can also read more about the partnership with RealEstateCore and goals for this initiative in the following blog post and embedded video: [RealEstateCore, a smart building ontology for digital twins, is now available](https://techcommunity.microsoft.com/t5/internet-of-things/realestatecore-a-smart-building-ontology-for-digital-twins-is/ba-p/1914794).
## Smart cities ontology

*Get the ontology from the following repository:* [Digital Twins Definition Language (DTDL) ontology for Smart Cities](https://github.com/Azure/opendigitaltwins-smartcities).
-Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). In addition to ETSI NGSI-LD, weΓÇÖve also evaluated Saref4City, CityGML, ISO, and others.
+Microsoft has collaborated with [Open Agile Smart Cities (OASC)](https://oascities.org/) and [Sirus](https://sirus.be/) to provide a DTDL-based ontology for smart cities, starting with [ETSI CIM NGSI-LD](https://www.etsi.org/committee/cim). Apart from ETSI NGSI-LD, we've also evaluated Saref4City, CityGML, ISO, and others.
To learn more about the ontology, how to use it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-smartcities](https://github.com/Azure/opendigitaltwins-smartcities).
-You can also read more about the partnerships and approach for smart cities in the following blog post and accompanying video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585).
+You can also read more about the partnerships and approach for smart cities in the following blog post and embedded video: [Smart Cities Ontology for Digital Twins](https://techcommunity.microsoft.com/t5/internet-of-things/smart-cities-ontology-for-digital-twins/ba-p/2166585).
## Energy grid ontology

*Get the ontology from the following repository:* [Digital Twins Definition Language (DTDL) ontology for Energy Grid](https://github.com/Azure/opendigitaltwins-energygrid/).
-This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases (monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance) and enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market.
+This ontology was created to help solution providers accelerate development of digital twin solutions for energy use cases like monitoring grid assets, outage and impact analysis, simulation, and predictive maintenance. Additionally, the ontology can be used to enable the digital transformation and modernization of the energy grid. It's adapted from the [Common Information Model (CIM)](https://cimug.ucaiug.org/), a global standard for energy grid assets management, power system operations modeling, and physical energy commodity market.
To learn more about the ontology, how to use it, and how to contribute, visit the ontology's repository on GitHub: [Azure/opendigitaltwins-energygrid](https://github.com/Azure/opendigitaltwins-energygrid/).
digital-twins Concepts Ontologies Convert https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-convert.md
Most [ontologies](concepts-ontologies.md) are based on semantic web standards such as [OWL](https://www.w3.org/OWL/), [RDF](https://www.w3.org/2001/sw/wiki/RDF), and [RDFS](https://www.w3.org/2001/sw/wiki/RDFS).
-To use a model with Azure Digital Twins, it must be in DTDL format. This articles describes general design guidance in the form of a **conversion pattern** for converting RDF-based models to DTDL so that they can be used with Azure Digital Twins.
+To use a model with Azure Digital Twins, it must be in DTDL format. This article describes general design guidance in the form of a **conversion pattern** for converting RDF-based models to DTDL so that they can be used with Azure Digital Twins.
The article also contains [sample converter code](#converter-samples) for RDF and OWL converters, which can be extended for other schemas in the building industry.

## Conversion pattern
-There are several third-party libraries that can be used when converting RDF-based models to DTDL. Some of these libraries allow you to load your model file into a graph. You can loop through the graph looking for specific RDFS and OWL constructs, and convert these to DTDL.
+There are several third-party libraries that can be used when converting RDF-based models to DTDL. Some of these libraries allow you to load your model file into a graph. You can loop through the graph looking for specific RDFS and OWL constructs, and convert them to DTDL.
The following table is an example of how RDFS and OWL constructs can be mapped to DTDL.
The following C# code snippet shows how an RDF model file is loaded into a graph
### RDF converter application
-There is a sample application available that converts an RDF-based model file to [DTDL (version 2)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) for use by the Azure Digital Twins service. It has been validated for the [Brick](https://brickschema.org/ontology/) schema, and can be extended for other schemas in the building industry (such as [Building Topology Ontology (BOT)](https://w3c-lbd-cg.github.io/bot/), [Semantic Sensor Network](https://www.w3.org/TR/vocab-ssn/), or [buildingSmart Industry Foundation Classes (IFC)](https://technical.buildingsmart.org/standards/ifc/ifc-schema-specifications/)).
+There's a sample application available that converts an RDF-based model file to [DTDL (version 2)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md) for use by the Azure Digital Twins service. It has been validated for the [Brick](https://brickschema.org/ontology/) schema, and can be extended for other schemas in the building industry (such as [Building Topology Ontology (BOT)](https://w3c-lbd-cg.github.io/bot/), [Semantic Sensor Network](https://www.w3.org/TR/vocab-ssn/), or [buildingSmart Industry Foundation Classes (IFC)](https://technical.buildingsmart.org/standards/ifc/ifc-schema-specifications/)).
The sample is a [.NET Core command-line application called RdfToDtdlConverter](/samples/azure-samples/rdftodtdlconverter/digital-twins-model-conversion-samples/).
To download the code to your machine, select the **Browse code** button undernea
:::image type="content" source="media/concepts-ontologies-convert/download-repo-zip.png" alt-text="Screenshot of the RdfToDtdlConverter repo on GitHub. The Code button is selected, producing a dialog box where the Download ZIP button is highlighted." lightbox="media/concepts-ontologies-convert/download-repo-zip.png":::
-You can use this sample to see the conversion patterns in context, and to have as a building block for your own applications performing model conversions according to your own specific needs.
+You can use this sample to see the conversion patterns in context, and to have as a building block for your own applications doing model conversions according to your own specific needs.
### OWL2DTDL converter
digital-twins Concepts Ontologies Extend https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies-extend.md
An industry-standard [ontology](concepts-ontologies.md), such as the [DTDL-based RealEstateCore ontology for smart buildings](https://github.com/Azure/opendigitaltwins-building), is a great way to start building your IoT solution. Industry ontologies provide a rich set of base interfaces that are designed for your domain and engineered to work out of the box in Azure IoT services, such as Azure Digital Twins.
-However, it's possible that your solution may have specific needs that are not covered by the industry ontology. For example, you may want to link your digital twins to 3D models stored in a separate system. In this case, you can extend one of these ontologies to add your own capabilities while retaining all the benefits of the original ontology.
+However, it's possible that your solution may have specific needs that aren't covered by the industry ontology. For example, you may want to link your digital twins to 3D models stored in a separate system. In this case, you can extend one of these ontologies to add your own capabilities while keeping all the benefits of the original ontology.
This article uses the DTDL-based [RealEstateCore](https://www.realestatecore.io/) ontology as the basis for examples of extending ontologies with new DTDL properties. The techniques described here are general, however, and can be applied to any part of a DTDL-based ontology with any DTDL capability (Telemetry, Property, Relationship, Component, or Command).

## RealEstateCore space hierarchy
-In the DTDL-based RealEstateCore ontology, the Space hierarchy is used to define various kinds of spaces: Rooms, Buildings, Zone, etc. The hierarchy extends from each of these models to define various kinds of Rooms, Buildings, and Zones.
+In the DTDL-based RealEstateCore ontology, the Space hierarchy is used to define various kinds of spaces: Rooms, Buildings, Zone, and so on. The hierarchy extends from each of these models to define various kinds of Rooms, Buildings, and Zones.
A portion of the hierarchy looks like the diagram below.
For more information about the RealEstateCore ontology, see [Adopting industry-s
## Extending the RealEstateCore space hierarchy
-Sometimes your solution has specific needs that are not covered by the industry ontology. In this case, extending the hierarchy enables you to continue to use the industry ontology while customizing it for your needs.
+Sometimes your solution has specific needs that aren't covered by the industry ontology. In this case, extending the hierarchy enables you to continue to use the industry ontology while customizing it for your needs.
In this article, we discuss two different cases where extending the ontology's hierarchy is useful:
* Adding new interfaces for concepts not in the industry ontology.
-* Adding additional properties (or relationships, components, telemetry, or commands) to existing interfaces.
+* Adding extra properties (or relationships, components, telemetry, or commands) to existing interfaces.
### Add new interfaces for new concepts
-In this case, you want to add interfaces for concepts needed for your solution but that are not present in the industry ontology. For example, if your solution has other types of rooms that aren't represented in the DTDL-based RealEstateCore ontology, then you can add them by extending directly from the RealEstateCore interfaces.
+In this case, you want to add interfaces for concepts needed for your solution but that aren't present in the industry ontology. For example, if your solution has other types of rooms that aren't represented in the DTDL-based RealEstateCore ontology, then you can add them by extending directly from the RealEstateCore interfaces.
-The example below presents a solution that needs to represent "focus rooms," which are not present in the RealEstateCore ontology. A focus room is a small space designed for people to focus on a task for a couple hours at a time.
+The example below presents a solution that needs to represent "focus rooms," which aren't present in the RealEstateCore ontology. A focus room is a small space designed for people to focus on a task for a couple hours at a time.
To extend the industry ontology with this new concept, create a new interface that [extends from](concepts-models.md#model-inheritance) the interfaces in the industry ontology.
After adding the focus room interface, the extended hierarchy shows the new room
:::image type="content" source="media/concepts-ontologies-extend/real-estate-core-extended-1.png" alt-text="Diagram illustrating part of the RealEstateCore space hierarchy, including the new addition.":::
-### Add additional capabilities to existing interfaces
+### Add extra capabilities to existing interfaces
In this case, you want to add more properties (or relationships, components, telemetry, or commands) to interfaces that are in the industry ontology.
In this section, you'll see two examples:
Both examples can be implemented with new properties: a `drawingId` property that associates the 3D drawing with the digital twin and an "online" property that indicates whether the conference room is online or not.
-Typically, you don't want to modify the industry ontology directly because you want to be able to incorporate updates to it in your solution in the future (which would overwrite your additions). Instead, these kinds of additions can be made in your own interface hierarchy that extends from the DTDL-based RealEstateCore ontology. Each interface you create uses multiple interface inheritance to extend its parent RealEstateCore interface and its parent interface from your extended interface hierarchy. This approach enables you to make use of the industry ontology and your additions together.
+Typically, you don't want to modify the industry ontology directly because you want to be able to incorporate updates to it in your solution in the future (which would overwrite your additions). Instead, these kinds of additions can be made in your own interface hierarchy that extends from the DTDL-based RealEstateCore ontology. Each interface you create uses multiple interface inheritances to extend its parent RealEstateCore interface and its parent interface from your extended interface hierarchy. This approach enables you to make use of the industry ontology and your additions together.
To extend the industry ontology, you create your own interfaces that extend from the interfaces in the industry ontology and add the new capabilities to your extended interfaces. For each interface that you want to extend, you create a new interface. The extended interfaces are written in DTDL (see the DTDL for Extended Interfaces section later in this document).
-After extending the portion of the hierarchy shown above, the extended hierarchy looks like the diagram below. Here the extended Space interface adds the `drawingId` property that will contain an ID that associates the digital twin with the 3D drawing. Additionally, the ConferenceRoom interface adds an "online" property that will contain the online status of the conference room. Through inheritance, the ConferenceRoom interface contains all capabilities from the RealEstateCore ConferenceRoom interface, as well as all capabilities from the extended Space interface.
+After extending the portion of the hierarchy shown above, the extended hierarchy looks like the diagram below. Here the extended Space interface adds the `drawingId` property that will contain an ID that associates the digital twin with the 3D drawing. Additionally, the ConferenceRoom interface adds an "online" property that will contain the online status of the conference room. Through inheritance, the ConferenceRoom interface contains all capabilities from the RealEstateCore ConferenceRoom interface and all capabilities from the extended Space interface.
:::image type="content" source="media/concepts-ontologies-extend/real-estate-core-extended-2.png" alt-text="Diagram illustrating the extended RealEstateCore space hierarchy, with more new additions as described.":::
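
To make the pattern concrete, an extended Space interface might look something like the following sketch. Both DTMIs and the property details are placeholders; use the identifiers from the ontology itself and the full DTDL shown later in the original article:

```json
{
  "@id": "dtmi:com:example:Space;1",
  "@type": "Interface",
  "@context": "dtmi:dtdl:context;2",
  "displayName": "Space",
  "extends": "dtmi:digitaltwins:rec_3_3:core:Space;1",
  "contents": [
    {
      "@type": "Property",
      "name": "drawingId",
      "schema": "string"
    }
  ]
}
```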
When querying for digital twins using the model ID (the `IS_OF_MODEL` operator),
## Contributing back to the original ontology
-In some cases, you will extend the industry ontology in a way that is broadly useful to most users of the ontology. In this case, you should consider contributing your extensions back to the original ontology. Each ontology has a different process for contributing, so check the ontology's GitHub repository for contribution details.
+In some cases, you'll extend the industry ontology in a way that is broadly useful to most users of the ontology. In this case, you should consider contributing your extensions back to the original ontology. Each ontology has a different process for contributing, so check the ontology's GitHub repository for contribution details.
## DTDL for new interfaces
digital-twins Concepts Ontologies https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-ontologies.md
The vocabulary of an Azure Digital Twins solution is defined using [models](conc
Sometimes, when your solution is tied to a particular industry, it can be easier and more effective to start with a set of models for that industry that already exist, instead of authoring your own model set from scratch. These pre-existing model sets are called **ontologies**.
-In general, an ontology is a set of models for a given domainΓÇölike a building structure, IoT system, smart city, the energy grid, web content, etc. Ontologies are often used as schemas for twin graphs, as they can enable:
-* Harmonization of software components, documentation, query libraries, etc.
+In general, an ontology is a set of models for a given domainΓÇölike a building structure, IoT system, smart city, the energy grid, web content, and so on. Ontologies are often used as schemas for twin graphs, as they can enable:
+* Harmonization of software components, documentation, query libraries, and so on.
* Reduced investment in conceptual modeling and system development
* Easier data interoperability on a semantic level
* Best practice reuse, rather than starting from scratch or "reinventing the wheel"
-This article explains why and how to use ontologies for your Azure Digital Twins models, as well as what ontologies and tools for them are available today.
+This article explains why to use ontologies for your Azure Digital Twins models and how to do so. It also explains what ontologies and tools for them are available today.
## Using ontologies for Azure Digital Twins

Ontologies provide a great starting point for digital twin solutions. They encompass a set of domain-specific models and relationships between entities for designing, creating, and parsing a digital twin graph. Ontologies enable solution developers to begin a digital twins solution from a proven starting point, and focus on solving business problems. The ontologies provided by Microsoft are also designed to be easily extensible, so that you can customize them for your solution.
-In addition, using these ontologies in your solutions can set them up for more seamless integration between different partners and vendors, because ontologies can provide a common vocabulary across solutions.
+Also, using these ontologies in your solutions can set them up for more seamless integration between different partners and vendors, because ontologies can provide a common vocabulary across solutions.
Because models in Azure Digital Twins are represented in [Digital Twins Definition Language (DTDL)](https://github.com/Azure/opendigitaltwins-dtdl/blob/master/DTDL/v2/dtdlv2.md), ontologies for use with Azure Digital Twins are also written in DTDL.
There are three possible strategies for integrating industry-standard ontologies
| Strategy | Description | Resources |
| --- | --- | --- |
-| **Adopt** | You can start your solution with an open-source DTDL ontology that has been built on widely adopted industry standards. You can either use these model sets out-of-the-box, or extend them with your own additions for a customized solution. | [Adopting&nbsp;industry&nbsp;standard ontologies](concepts-ontologies-adopt.md)<br><br>[Extending&nbsp;ontologies](concepts-ontologies-extend.md) |
+| **Adopt** | You can start your solution with an open-source DTDL ontology that has been built on widely adopted industry standards. You can either use these model-sets out-of-the-box, or extend them with your own additions for a customized solution. | [Adopting&nbsp;industry&nbsp;standard ontologies](concepts-ontologies-adopt.md)<br><br>[Extending&nbsp;ontologies](concepts-ontologies-extend.md) |
| **Convert** | If you already have existing models represented in another standard format, you can convert them to DTDL to use them with Azure Digital Twins. | [Converting&nbsp;ontologies](concepts-ontologies-convert.md)<br><br>[Extending&nbsp;ontologies](concepts-ontologies-extend.md) | | **Author** | You can always develop your own custom DTDL models from scratch, using any applicable industry standards as inspiration. | [DTDL models](concepts-models.md) |
There are three possible strategies for integrating industry-standard ontologies
No matter which strategy you choose for integrating an ontology into Azure Digital Twins, you can follow the complete path below to guide you through creating and uploading your ontology as DTDL models.

1. Start by reviewing and understanding [DTDL modeling in Azure Digital Twins](concepts-models.md).
-1. Proceed with your chosen ontology integration strategy from above: [Adopt](concepts-ontologies-adopt.md), [Convert](concepts-ontologies-convert.md), or [Author](concepts-models.md) your models based on your ontology.
+1. Continue with your chosen ontology integration strategy from above: [Adopt](concepts-ontologies-adopt.md), [Convert](concepts-ontologies-convert.md), or [Author](concepts-models.md) your models based on your ontology.
1. If necessary, [extend](concepts-ontologies-extend.md) your ontology to customize it to your needs.
-1. [Validate](how-to-parse-models.md) your models to verify they are working DTDL documents.
+1. [Validate](how-to-parse-models.md) your models to verify they're working DTDL documents.
1. Upload your finished models to Azure Digital Twins, using the [APIs](how-to-manage-model.md#upload-models) or a sample like the [Azure Digital Twins model uploader](https://github.com/Azure/opendigitaltwins-tools/tree/master/ADTTools#uploadmodels).
-After this, you should be able to use your models in your Azure Digital Twins instance.
+After reading this series of articles, you should be able to use your models in your Azure Digital Twins instance.
>[!TIP]
> You can visualize the models in your ontology using the [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) or [Azure Digital Twins Model Visualizer](https://github.com/Azure/opendigitaltwins-building-tools/tree/master/AdtModelVisualizer).
digital-twins Concepts Route Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/concepts-route-events.md
# Route events within and outside of Azure Digital Twins
-Azure Digital twins uses **event routes** to send data to consumers outside the service.
+Azure Digital Twins uses **event routes** to send data to consumers outside the service.
There are two major cases for sending Azure Digital Twins data:
-* Sending data from one twin in the Azure Digital Twins graph to another. For instance, when a property on one digital twin changes, you may want to notify and update another digital twin accordingly.
-* Sending data to downstream data services for additional storage or processing (also known as *data egress*). For instance,
+* Sending data from one twin in the Azure Digital Twins graph to another. For instance, when a property on one digital twin changes, you may want to notify and update another digital twin based on the updated data.
+* Sending data to downstream data services for more storage or processing (also known as *data egress*). For instance,
- A hospital may want to send Azure Digital Twins event data to [Time Series Insights (TSI)](../time-series-insights/overview-what-is-tsi.md), to record time series data of handwashing-related events for bulk analytics.
- - A business that is already using [Azure Maps](../azure-maps/about-azure-maps.md) may want to use Azure Digital Twins to enhance their solution. They can quickly enable an Azure Map after setting up Azure Digital Twins, bring Azure Map entities into Azure Digital Twins as [digital twins](concepts-twins-graph.md) in the twin graph, or run powerful queries leveraging their Azure Maps and Azure Digital Twins data together.
+ - A business that is already using [Azure Maps](../azure-maps/about-azure-maps.md) may want to use Azure Digital Twins to enhance their solution. They can quickly enable an Azure Map after setting up Azure Digital Twins, bring Azure Map entities into Azure Digital Twins as [digital twins](concepts-twins-graph.md) in the twin graph, or run powerful queries using their Azure Maps and Azure Digital Twins data together.
Event routes are used for both of these scenarios.
Typical downstream targets for event routes are resources like TSI, Azure Maps,
### Event routes for internal digital twin events
-Event routes are also used to handle events within the twin graph and send data from digital twin to digital twin. This is done by connecting event routes through Event Grid to compute resources, such as [Azure Functions](../azure-functions/functions-overview.md). These functions then define how twins should receive and respond to events.
+Event routes are also used to handle events within the twin graph and send data from digital twin to digital twin. This sort of event handling is done by connecting event routes through Event Grid to compute resources, such as [Azure Functions](../azure-functions/functions-overview.md). These functions then define how twins should receive and respond to events.
-When a compute resource wants to modify the twin graph based on an event that it received via event route, it is helpful for it to know which twin it wants to modify ahead of time.
+When a compute resource wants to modify the twin graph based on an event that it received via event route, it's helpful for it to know which twin it wants to modify ahead of time.
-Alternatively, the event message also contains the ID of the source twin that sent the message, so the compute resource can use queries or traverse relationships to find a target twin for the desired operation.
+The event message also contains the ID of the source twin that sent the message, so the compute resource can use queries or traverse relationships to find a target twin for the desired operation.
The compute resource also needs to establish security and access permissions independently.
The endpoint APIs that are available in control plane are:
To [create an event route](how-to-manage-routes.md#create-an-event-route), you can use the Azure Digital Twins REST APIs, CLI commands, or the Azure portal.
-Here is an example of creating an event route within a client application, using the `CreateOrReplaceEventRouteAsync` [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true) call:
+Here's an example of creating an event route within a client application, using the `CreateOrReplaceEventRouteAsync` [.NET (C#) SDK](/dotnet/api/overview/azure/digitaltwins/client?view=azure-dotnet&preserve-view=true) call:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/eventRoute_operations.cs" id="CreateEventRoute":::

1. First, a `DigitalTwinsEventRoute` object is created, and the constructor takes the name of an endpoint. This `endpointName` field identifies an endpoint such as an Event Hub, Event Grid, or Service Bus. These endpoints must be created in your subscription and attached to Azure Digital Twins using control plane APIs before making this registration call.
-2. The event route object also has a [Filter](how-to-manage-routes.md#filter-events) field, which can be used to restrict the types of events that follow this route. A filter of `true` enables the route with no additional filtering (a filter of `false` disables the route).
+2. The event route object also has a [Filter](how-to-manage-routes.md#filter-events) field, which can be used to restrict the types of events that follow this route. A filter of `true` enables the route with no extra filtering (a filter of `false` disables the route).
3. This event route object is then passed to `CreateOrReplaceEventRouteAsync`, along with a name for the route.
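
Condensed, those steps look something like the following sketch. The endpoint and route names are placeholders, and `client` is assumed to be an already-authenticated `DigitalTwinsClient`:

```csharp
// "myEndpoint" must already exist and be attached to the instance (placeholder names).
var route = new DigitalTwinsEventRoute("myEndpoint", "true"); // filter "true" = route all events
await client.CreateOrReplaceEventRouteAsync("myEventRoute", route);
```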
digital-twins How To Integrate Azure Signalr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-integrate-azure-signalr.md
Here are the prerequisites you should complete before proceeding:
* You'll need [Node.js](https://nodejs.org/) installed on your machine.
-You should also go ahead and sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
+Be sure to sign in to the [Azure portal](https://portal.azure.com/) with your Azure account, as you'll need to use it in this guide.
## Solution architecture
-You'll be attaching Azure SignalR Service to Azure Digital Twins through the path below. Sections A, B, and C in the diagram are taken from the architecture diagram of the [end-to-end tutorial prerequisite](tutorial-end-to-end.md). In this how-to article, you will build section D on the existing architecture, which includes two new Azure functions that communicate with SignalR and client apps.
+You'll be attaching Azure SignalR Service to Azure Digital Twins through the path below. Sections A, B, and C in the diagram are taken from the architecture diagram of the [end-to-end tutorial prerequisite](tutorial-end-to-end.md). In this how-to article, you'll build section D on the existing architecture, which includes two new Azure functions that communicate with SignalR and client apps.
:::image type="content" source="media/how-to-integrate-azure-signalr/signalr-integration-topology.png" alt-text="Diagram of Azure services in an end-to-end scenario, which depicts data flowing in and out of Azure Digital Twins." lightbox="media/how-to-integrate-azure-signalr/signalr-integration-topology.png"::: ## Download the sample applications
-First, download the required sample apps. You will need both of the following:
+First, download the required sample apps. You'll need both of the following samples:
* [Azure Digital Twins end-to-end samples](/samples/azure-samples/digital-twins-samples/digital-twins-samples/): This sample contains an *AdtSampleApp* that holds two Azure functions for moving data around an Azure Digital Twins instance (you can learn about this scenario in more detail in [Connect an end-to-end solution](tutorial-end-to-end.md)). It also contains a *DeviceSimulator* sample application that simulates an IoT device, generating a new temperature value every second.
- - If you haven't already downloaded the sample as part of the tutorial in [Prerequisites](#prerequisites), [navigate to the sample](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) and select the *Browse code* button underneath the title. This will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the *Code* button and *Download ZIP*.
+ - If you haven't already downloaded the sample as part of the tutorial in [Prerequisites](#prerequisites), [navigate to the sample](/samples/azure-samples/digital-twins-samples/digital-twins-samples/) and select the *Browse code* button underneath the title. Doing so will take you to the GitHub repo for the samples, which you can download as a .zip by selecting the *Code* button and *Download ZIP*.
:::image type="content" source="media/includes/download-repo-zip.png" alt-text="Screenshot of the digital-twins-samples repo on GitHub and the steps for downloading it as a zip." lightbox="media/includes/download-repo-zip.png":::
- This will download a copy of the sample repo to your machine, as **digital-twins-samples-master.zip**. Unzip the folder.
-* [SignalR integration web app sample](/samples/azure-samples/digitaltwins-signalr-webapp-sample/digital-twins-samples/): This is a sample React web app that will consume Azure Digital Twins telemetry data from an Azure SignalR Service.
+ This button will download a copy of the sample repo to your machine, as **digital-twins-samples-master.zip**. Unzip the folder.
+* [SignalR integration web app sample](/samples/azure-samples/digitaltwins-signalr-webapp-sample/digital-twins-samples/): This sample React web app will consume Azure Digital Twins telemetry data from an Azure SignalR Service.
- Navigate to the sample link and use the same download process to download a copy of the sample to your machine, as _**digitaltwins-signalr-webapp-sample-main.zip**_. Unzip the folder. [!INCLUDE [Create instance](../azure-signalr/includes/signalr-quickstart-create-instance.md)]
Leave the browser window open to the Azure portal, as you'll use it again in the
## Publish and configure the Azure Functions app
-In this section, you will set up two Azure functions:
+In this section, you'll set up two Azure functions:
* **negotiate** - An HTTP trigger function. It uses the *SignalRConnectionInfo* input binding to generate and return valid connection information. * **broadcast** - An [Event Grid](../event-grid/overview.md) trigger function. It receives Azure Digital Twins telemetry data through Event Grid, and uses the output binding of the *SignalR* instance you created in the previous step to broadcast the message to all connected client applications. Start Visual Studio (or another code editor of your choice), and open the code solution in the *digital-twins-samples-master > ADTSampleApp* folder. Then follow these steps to create the functions (a rough sketch of both functions appears after the steps):
-1. In the *SampleFunctionsApp* project, create a new C# class called **SignalRFunctions.cs**. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
+1. In the *SampleFunctionsApp* project, create a new C# class called **SignalRFunctions.cs**. For instructions on how to create a new class, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
1. Replace the contents of the class file with the following code:
Start Visual Studio (or another code editor of your choice), and open the code s
dotnet add package Microsoft.Azure.WebJobs.Extensions.SignalRService --version 1.2.0 ```
- This should resolve any dependency issues in the class.
+ Running this command should resolve any dependency issues in the class.
+1. Publish your function to Azure. You can publish it to the same app service/function app that you used in the end-to-end tutorial [prerequisite](#prerequisites), or create a new one, but you may want to use the same one to minimize duplication. For instructions on how to publish a function using Visual Studio, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
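The class contents come from the sample download, but as a hedged sketch of the two functions described above, the shape might resemble the following. The hub name, namespaces, and package versions shown are assumptions, not the sample's exact code.

```csharp
using System.Threading.Tasks;
using Azure.Messaging.EventGrid;
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class SignalRFunctionsSketch
{
    // Returns SignalR connection info to clients that call this HTTP endpoint.
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "dttelemetry")] SignalRConnectionInfo connectionInfo)
    {
        return connectionInfo;
    }

    // Forwards each Event Grid event's payload to all connected SignalR clients.
    [FunctionName("broadcast")]
    public static Task Broadcast(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        [SignalR(HubName = "dttelemetry")] IAsyncCollector<SignalRMessage> signalRMessages)
    {
        return signalRMessages.AddAsync(new SignalRMessage
        {
            Target = "newMessage",
            Arguments = new object[] { eventGridEvent.Data.ToString() }
        });
    }
}
```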
Next, configure the function to communicate with your Azure SignalR instance. Yo
## Connect the function to Event Grid
-Next, subscribe the *broadcast* Azure function to the **event grid topic** you created during the [tutorial prerequisite](how-to-integrate-azure-signalr.md#prerequisites). This will allow telemetry data to flow from the thermostat67 twin through the event grid topic and to the function. From here, the function can broadcast the data to all the clients.
+Next, subscribe the *broadcast* Azure function to the **event grid topic** you created during the [tutorial prerequisite](how-to-integrate-azure-signalr.md#prerequisites). This action will allow telemetry data to flow from the thermostat67 twin through the event grid topic and to the function. From here, the function can broadcast the data to all the clients.
-To do this, you'll create an **Event subscription** from your event grid topic to your *broadcast* Azure function as an endpoint.
+To broadcast the data, you'll create an **Event subscription** from your event grid topic to your *broadcast* Azure function as an endpoint.
In the [Azure portal](https://portal.azure.com/), navigate to your event grid topic by searching for its name in the top search bar. Select *+ Event Subscription*. :::image type="content" source="media/how-to-integrate-azure-signalr/event-subscription-1b.png" alt-text="Screenshot of how to create an event subscription in the Azure portal.":::
-On the *Create Event Subscription* page, fill in the fields as follows (fields filled by default are not mentioned):
+On the *Create Event Subscription* page, fill in the fields as follows (fields filled by default aren't mentioned):
* *EVENT SUBSCRIPTION DETAILS* > **Name**: Give a name to your event subscription. * *ENDPOINT DETAILS* > **Endpoint Type**: Select *Azure Function* from the menu options.
-* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link. This will open a *Select Azure Function* window:
- - Fill in your **Subscription**, **Resource group**, **Function app** and **Function** (*broadcast*). Some of these may auto-populate after selecting the subscription.
+* *ENDPOINT DETAILS* > **Endpoint**: Select the *Select an endpoint* link, which will open a *Select Azure Function* window:
+ - Fill in your **Subscription**, **Resource group**, **Function app**, and **Function** (*broadcast*). Some of these fields may auto-populate after selecting the subscription.
- Select **Confirm Selection**. :::image type="content" source="media/how-to-integrate-azure-signalr/create-event-subscription.png" alt-text="Screenshot of the form for creating an event subscription in the Azure portal.":::
At this point, you should see two event subscriptions in the *Event Grid Topic*
## Configure and run the web app
-In this section, you will see the result in action. First, configure the **sample client web app** to connect to the Azure SignalR flow you've set up. Next, you'll start up the **simulated device sample app** that sends telemetry data through your Azure Digital Twins instance. After that, you will view the sample web app to see the simulated device data updating the sample web app in real time.
+In this section, you'll see the result in action. First, configure the **sample client web app** to connect to the Azure SignalR flow you've set up. Next, you'll start up the **simulated device sample app** that sends telemetry data through your Azure Digital Twins instance. After that, you'll view the sample web app to see the simulated device data updating the sample web app in real time.
### Configure the sample client web app
Next, you'll configure the sample client web app. Start by gathering the **HTTP
:::image type="content" source="media/how-to-integrate-azure-signalr/functions-negotiate.png" alt-text="Screenshot of the Azure portal function apps, with 'Functions' highlighted in the menu and 'negotiate' highlighted in the list of functions.":::
-1. Select *Get function URL* and copy the value **up through _/api_ (don't include the last _/negotiate?_)**. You'll use this in the next step.
+1. Select *Get function URL* and copy the value **up through _/api_ (don't include the last _/negotiate?_)**. You'll use this value in the next step.
:::image type="content" source="media/how-to-integrate-azure-signalr/get-function-url.png" alt-text="Screenshot of the Azure portal showing the 'negotiate' function with the 'Get function URL' button and the function URL highlighted.":::
Now, all you have to do is start the simulator project, located in *digital-twin
:::image type="content" source="media/how-to-integrate-azure-signalr/start-button-simulator.png" alt-text="Screenshot of the Visual Studio start button with the DeviceSimulator project open.":::
-A console window will open and display simulated temperature telemetry messages. These are being sent through your Azure Digital Twins instance, where they are then picked up by the Azure functions and SignalR.
+A console window will open and display simulated temperature telemetry messages. These messages are being sent through your Azure Digital Twins instance, where they're then picked up by the Azure functions and SignalR.
You don't need to do anything else in this console, but leave it running while you complete the next step. ### See the results
-To see the results in action, start the **SignalR integration web app sample**. You can do this from any console window at the *digitaltwins-signalr-webapp-sample-main\src* location, by running this command:
+To see the results in action, start the **SignalR integration web app sample**. You can do so from any console window at the *digitaltwins-signalr-webapp-sample-main\src* location, by running this command:
```cmd npm start ```
-This will open a browser window running the sample app, which displays a visual temperature gauge. Once the app is running, you should start seeing the temperature telemetry values from the device simulator that propagate through Azure Digital Twins being reflected by the web app in real time.
+Running this command will open a browser window with the sample app, which displays a visual temperature gauge. Once the app is running, you should see the temperature telemetry values from the device simulator propagate through Azure Digital Twins and appear in the web app in real time.
:::image type="content" source="media/how-to-integrate-azure-signalr/signalr-webapp-output.png" alt-text="Screenshot of the sample client web app, showing a visual temperature gauge. The temperature reflected is 67.52.":::
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-manage-model.md
Even if a model meets the requirements to delete it immediately, you may want to
5. Wait for another few minutes to make sure the changes have percolated through 6. Delete the model
-To delete a model, you can use the [DeleteModel]/dotnet/api/azure.digitaltwins.core.digitaltwinsclient.deletemodel?view=azure-dotnet&preserve-view=true) SDK call:
+To delete a model, you can use the [DeleteModel](/dotnet/api/azure.digitaltwins.core.digitaltwinsclient.deletemodel?view=azure-dotnet&preserve-view=true) SDK call:
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="DeleteModel":::
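The referenced snippet isn't rendered inline in this digest; purely as a hedged illustration, a delete call might look like this sketch (the instance URL and model ID are placeholders):

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Connect to the instance (host name is a placeholder).
var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

// Delete a single model by its DTMI (placeholder value shown).
await client.DeleteModelAsync("dtmi:example:Thermostat;1");
```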
digital-twins How To Provision Using Device Provisioning Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-provision-using-device-provisioning-service.md
For more information about the _provision_ and _retire_ stages, and to better un
## Prerequisites
-Before you can set up the provisioning, you'll need to set up the following:
+Before you can set up the provisioning, you'll need to set up the following resources:
* an **Azure Digital Twins instance**. Follow the instructions in [Set up an instance and authentication](how-to-set-up-instance-portal.md) to create an Azure Digital Twins instance. Gather the instance's **_host name_** in the Azure portal ([instructions](how-to-set-up-instance-portal.md#verify-success-and-collect-important-values)). * an **IoT hub**. For instructions, see the "Create an IoT Hub" section of [the IoT Hub quickstart](../iot-hub/quickstart-send-telemetry-cli.md). * an [Azure function](../azure-functions/functions-overview.md) that updates digital twin information based on IoT Hub data. Follow the instructions in [Ingest IoT hub data](how-to-ingest-iot-hub-data.md) to create this Azure function. Gather the function **_name_** to use it in this article.
-This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the **Browse code** button underneath the title. This will take you to the GitHub repo for the sample, which you can download as a .zip file by selecting the **Code** button and **Download ZIP**.
+This sample also uses a **device simulator** that includes provisioning using the Device Provisioning Service. The device simulator is located here: [Azure Digital Twins and IoT Hub Integration Sample](/samples/azure-samples/digital-twins-iothub-integration/adt-iothub-provision-sample/). Get the sample project on your machine by navigating to the sample link and selecting the **Browse code** button underneath the title. This button will take you to the GitHub repo for the sample, which you can download as a .zip file by selecting the **Code** button and **Download ZIP**.
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/download-repo-zip.png" alt-text="Screenshot of the digital-twins-iothub-integration repo on GitHub, highlighting the steps to download it as a zip." lightbox="media/how-to-provision-using-device-provisioning-service/download-repo-zip.png":::
This article is divided into two sections, each focused on a portion of this ful
## Auto-provision device using Device Provisioning Service
-In this section, you'll be attaching Device Provisioning Service to Azure Digital Twins to auto-provision devices through the path below. This is an excerpt from the full architecture shown [earlier](#solution-architecture).
+In this section, you'll be attaching Device Provisioning Service to Azure Digital Twins to auto-provision devices through the path below. This diagram is an excerpt from the full architecture shown [earlier](#solution-architecture).
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/provision.png" alt-text="Diagram of Provision flow, an excerpt of the solution architecture diagram following data from a thermostat into Azure Digital Twins." lightbox="media/how-to-provision-using-device-provisioning-service/provision.png":::
-Here is a description of the process flow:
+Here's a description of the process flow:
1. Device contacts the DPS endpoint, passing identifying information to prove its identity. 2. DPS validates device identity by validating the registration ID and key against the enrollment list, and calls an [Azure function](../azure-functions/functions-overview.md) to do the allocation. 3. The Azure function creates a new [twin](concepts-twins-graph.md) in Azure Digital Twins for the device. The twin will have the same name as the device's **registration ID**.
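As a hedged sketch of step 3 in this flow (not the sample's DpsAdtAllocationFunc itself), creating a twin named after the device's registration ID might look like the code below. The ADT_SERVICE_URL setting name is an assumption, and the model ID is the thermostat model used later in this article.

```csharp
using System;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;
using Azure.Identity;

public static class TwinProvisioningSketch
{
    public static async Task CreateTwinForDeviceAsync(string registrationId)
    {
        // ADT_SERVICE_URL is an assumed app setting holding the instance URL.
        var adtInstanceUrl = Environment.GetEnvironmentVariable("ADT_SERVICE_URL");
        var client = new DigitalTwinsClient(new Uri(adtInstanceUrl), new DefaultAzureCredential());

        // The twin gets the same ID as the device's registration ID (step 3 above).
        var twin = new BasicDigitalTwin
        {
            Id = registrationId,
            Metadata = { ModelId = "dtmi:contosocom:DigitalTwins:Thermostat;1" },
            Contents = { { "Temperature", 0.0 } }
        };
        await client.CreateOrReplaceDigitalTwinAsync(registrationId, twin);
    }
}
```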
Inside your function app project that you created in the [Prerequisites section]
Start by opening the function app project in Visual Studio on your machine and follow the steps below.
-1. First, create a new function of type *HTTP-trigger* in the function app project in Visual Studio. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
+1. First, create a new function of type *HTTP-trigger* in the function app project in Visual Studio. For instructions on how to create this function, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
2. Add a new NuGet package to the project: [Microsoft.Azure.Devices.Provisioning.Service](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/). You might need to add more packages to your project as well, if the packages used in the code aren't part of the project already.
Start by opening the function app project in Visual Studio on your machine and f
:::code language="csharp" source="~/digital-twins-docs-samples-dps/functions/DpsAdtAllocationFunc.cs":::
-4. Publish the project with the *DpsAdtAllocationFunc.cs* function to a function app in Azure. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+4. Publish the project with the *DpsAdtAllocationFunc.cs* function to a function app in Azure. For instructions on how to publish the project, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
> [!IMPORTANT] > When creating the function app for the first time in the [Prerequisites section](#prerequisites), you may have already assigned an access role for the function and configured the application settings for it to access your Azure Digital Twins instance. These need to be done once for the entire function app, so verify they've been completed in your app before continuing. You can find instructions in the [Configure published app](how-to-authenticate-client.md#configure-published-app) section of the *Write app authentication code* article. ### Create Device Provisioning enrollment
-Next, you'll need to create an enrollment in Device Provisioning Service using a **custom allocation function**. Follow the instructions to do this in the [Create the enrollment](../iot-dps/how-to-use-custom-allocation-policies.md#create-the-enrollment) section of the custom allocation policies article in the Device Provisioning Service documentation.
+Next, you'll need to create an enrollment in Device Provisioning Service using a **custom allocation function**. To create an enrollment, follow the instructions in the [Create the enrollment](../iot-dps/how-to-use-custom-allocation-policies.md#create-the-enrollment) section of the custom allocation policies article in the Device Provisioning Service documentation.
-While going through that flow, make sure you select the following options to link the enrollment to the function you just created.
+While going through that flow, make sure you select the following options to link the enrollment to the function you created.
* **Select how you want to assign devices to hubs**: Custom (Use Azure Function). * **Select the IoT hubs this group can be assigned to:** Choose your IoT hub name or select the *Link a new IoT hub* button, and choose your IoT hub from the dropdown. Next, choose the *Select a new function* button to link your function app to the enrollment group. Then, fill in the following values:
-* **Subscription**: Your Azure subscription is auto-populated. Make sure it is the right subscription.
+* **Subscription**: Your Azure subscription is auto-populated. Make sure it's the right subscription.
* **Function App**: Choose your function app name. * **Function**: Choose DpsAdtAllocationFunc.
The device simulator is a thermostat-type device that uses the model with this I
[!INCLUDE [digital-twins-thermostat-model-upload.md](../../includes/digital-twins-thermostat-model-upload.md)]
-For more information about models, refer to [Manage models](how-to-manage-model.md#upload-models).
+For more information about models, see [Manage models](how-to-manage-model.md#upload-models).
#### Configure and run the simulator
Next, in your device simulator directory, copy the .env.template file to a new f
* PROVISIONING_REGISTRATION_ID: You can choose a registration ID for your device. * ADT_MODEL_ID: `dtmi:contosocom:DigitalTwins:Thermostat;1`
-* PROVISIONING_SYMMETRIC_KEY: This is the primary key for the enrollment you set up earlier. To get this value again, navigate to your device provisioning service in the Azure portal, select *Manage enrollments*, then select the enrollment group that you created earlier and copy the *Primary Key*.
+* PROVISIONING_SYMMETRIC_KEY: This environment variable is the primary key for the enrollment you set up earlier. To get this value again, navigate to your device provisioning service in the Azure portal, select *Manage enrollments*, then select the enrollment group that you created earlier and copy the *Primary Key*.
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/sas-primary-key.png" alt-text="Screenshot of the Azure portal view of the device provisioning service manage enrollments page highlighting the SAS primary key value." lightbox="media/how-to-provision-using-device-provisioning-service/sas-primary-key.png":::
You should see the device being registered and connected to IoT Hub, and then st
### Validate
-As a result of the flow you've set up in this article, the device will be automatically registered in Azure Digital Twins. Use the following [Azure Digital Twins CLI](/cli/azure/dt/twin?view=azure-cli-latest&preserve-view=true#az_dt_twin_show) command to find the twin of the device in the Azure Digital Twins instance you created.
+The flow you've set up in this article will result in the device automatically being registered in Azure Digital Twins. Use the following [Azure Digital Twins CLI](/cli/azure/dt/twin?view=azure-cli-latest&preserve-view=true#az_dt_twin_show) command to find the twin of the device in the Azure Digital Twins instance you created.
```azurecli-interactive az dt twin show --dt-name <Digital-Twins-instance-name> --twin-id "<Device-Registration-ID>"
You should see the twin of the device being found in the Azure Digital Twins ins
## Auto-retire device using IoT Hub lifecycle events
-In this section, you will be attaching IoT Hub lifecycle events to Azure Digital Twins to auto-retire devices through the path below. This is an excerpt from the full architecture shown [earlier](#solution-architecture).
+In this section, you'll be attaching IoT Hub lifecycle events to Azure Digital Twins to auto-retire devices through the path below. This diagram is an excerpt from the full architecture shown [earlier](#solution-architecture).
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/retire.png" alt-text="Diagram of the Retire device flow, an excerpt of the solution architecture diagram, following data from a device deletion into Azure Digital Twins." lightbox="media/how-to-provision-using-device-provisioning-service/retire.png":::
-Here is a description of the process flow:
+Here's a description of the process flow:
1. An external or manual process triggers the deletion of a device in IoT Hub. 2. IoT Hub deletes the device and generates a [device lifecycle](../iot-hub/iot-hub-device-management-overview.md#device-lifecycle) event that will be routed to an [event hub](../event-hubs/event-hubs-about.md). 3. An Azure function deletes the twin of the device in Azure Digital Twins.
The screenshot below illustrates the creation of the event hub.
#### Create SAS policy for your event hub Next, you'll need to create a [shared access signature (SAS) policy](../event-hubs/authorize-access-shared-access-signature.md) to configure the event hub with your function app.
-To do this,
-1. Navigate to the event hub you just created in the Azure portal and select **Shared access policies** in the menu options on the left.
+To create the SAS policy:
+1. Navigate to the event hub you created in the Azure portal and select **Shared access policies** in the menu options on the left.
2. Select **Add**. In the *Add SAS Policy* window that opens, enter a policy name of your choice and select the *Listen* checkbox. 3. Select **Create**.
To do this,
#### Configure event hub with function app
-Next, configure the Azure function app that you set up in the [Prerequisites section](#prerequisites) to work with your new event hub. You'll do this by setting an environment variable inside the function app with the event hub's connection string.
+Next, configure the Azure function app that you set up in the [Prerequisites section](#prerequisites) to work with your new event hub. You'll configure the function by setting an environment variable inside the function app with the event hub's connection string.
-1. Open the policy that you just created and copy the **Connection string-primary key** value.
+1. Open the policy that you created and copy the **Connection string-primary key** value.
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/event-hub-sas-policy-connection-string.png" alt-text="Screenshot of the Azure portal showing how to copy the connection string-primary key." lightbox="media/how-to-provision-using-device-provisioning-service/event-hub-sas-policy-connection-string.png":::
For more about lifecycle events, see [IoT Hub Non-telemetry events](../iot-hub/i
Start by opening the function app project in Visual Studio on your machine and follow the steps below.
-1. First, create a new function of type *Event Hub Trigger* in the function app project in Visual Studio. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
+1. First, create a new function of type *Event Hub Trigger* in the function app project in Visual Studio. For instructions on how to create this function, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#add-a-function-to-your-project).
2. Add a new NuGet package to the project: [Microsoft.Azure.Devices.Provisioning.Service](https://www.nuget.org/packages/Microsoft.Azure.Devices.Provisioning.Service/). You might need to add more packages to your project as well, if the packages used in the code aren't part of the project already.
Start by opening the function app project in Visual Studio on your machine and f
:::code language="csharp" source="~/digital-twins-docs-samples-dps/functions/DeleteDeviceInTwinFunc.cs":::
-4. Publish the project with the *DeleteDeviceInTwinFunc.cs* function to a function app in Azure. For instructions on how to do this, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
+4. Publish the project with the *DeleteDeviceInTwinFunc.cs* function to a function app in Azure. For instructions on how to publish the project, see [Develop Azure Functions using Visual Studio](../azure-functions/functions-develop-vs.md#publish-to-azure).
> [!IMPORTANT] > When creating the function app for the first time in the [Prerequisites section](#prerequisites), you may have already assigned an access role for the function and configured the application settings for it to access your Azure Digital Twins instance. These need to be done once for the entire function app, so verify they've been completed in your app before continuing. You can find instructions in the [Configure published app](how-to-authenticate-client.md#configure-published-app) section of the *Write app authentication code* article. ### Create an IoT Hub route for lifecycle events
-Now you'll set up an IoT Hub route, to route device lifecycle events. In this case, you will specifically listen to device delete events, identified by `if (opType == "deleteDeviceIdentity")`. This will trigger the delete of the digital twin item, finalizing the retirement of a device and its digital twin.
+Now you'll set up an IoT Hub route to route device lifecycle events. In this case, you'll specifically listen to device delete events, identified by `if (opType == "deleteDeviceIdentity")`. This event will trigger the deletion of the digital twin, completing the retirement process of a device and its digital twin.
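Purely as a hedged sketch (not the sample's DeleteDeviceInTwinFunc), such a function might look roughly like the following. The event hub name, connection-string setting name, ADT_SERVICE_URL setting, and the assumption that the lifecycle payload carries a deviceId field are all illustrative.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.DigitalTwins.Core;
using Azure.Identity;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DeleteDeviceTwinSketch
{
    [FunctionName("DeleteDeviceInTwinSketch")]
    public static async Task Run(
        // Event hub name and connection setting name are placeholders.
        [EventHubTrigger("lifecycleevents", Connection = "EVENTHUB_CONNECTIONSTRING")] string message,
        ILogger log)
    {
        // Assumes the lifecycle event body carries the device's ID; the full sample
        // also checks that opType is "deleteDeviceIdentity" before acting.
        using var doc = JsonDocument.Parse(message);
        string deviceId = doc.RootElement.GetProperty("deviceId").GetString();

        var client = new DigitalTwinsClient(
            new Uri(Environment.GetEnvironmentVariable("ADT_SERVICE_URL")),
            new DefaultAzureCredential());

        await client.DeleteDigitalTwinAsync(deviceId);
        log.LogInformation($"Deleted twin for device {deviceId}.");
    }
}
```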
First, you'll need to create an event hub endpoint in your IoT hub. Then, you'll add a route in IoT hub to send lifecycle events to this event hub endpoint. Follow these steps to create an event hub endpoint:
Next, you'll add a route that connects to the endpoint you created in the above
:::image type="content" source="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png" alt-text="Screenshot of the Azure portal window showing how to add a route to send lifecycle events." lightbox="media/how-to-provision-using-device-provisioning-service/lifecycle-route.png":::
-Once you have gone through this flow, everything is set to retire devices end-to-end.
+Once you've gone through this flow, everything is set to retire devices end-to-end.
### Validate To trigger the process of retirement, you need to manually delete the device from IoT Hub.
-You can do this with an [Azure CLI command](/cli/azure/iot/hub/module-identity#az_iot_hub_module_identity_delete) or in the Azure portal.
+You can manually delete the device from IoT Hub with an [Azure CLI command](/cli/azure/iot/hub/module-identity#az_iot_hub_module_identity_delete) or in the Azure portal.
Follow the steps below to delete the device in the Azure portal: 1. Navigate to your IoT hub, and choose **IoT devices** in the menu options on the left.
-2. You'll see a device with the device registration ID you chose in the [first half of this article](#auto-provision-device-using-device-provisioning-service). Alternatively, you can choose any other device to delete, as long as it has a twin in Azure Digital Twins so you can verify that the twin is automatically deleted after the device is deleted.
+2. You'll see a device with the device registration ID you chose in the [first half of this article](#auto-provision-device-using-device-provisioning-service). You can also choose any other device to delete, as long as it has a twin in Azure Digital Twins so you can verify that the twin is automatically deleted after the device is deleted.
3. Select the device and choose **Delete**. :::image type="content" source="media/how-to-provision-using-device-provisioning-service/delete-device-twin.png" alt-text="Screenshot of the Azure portal showing how to delete device twin from the IoT devices." lightbox="media/how-to-provision-using-device-provisioning-service/delete-device-twin.png":::
You should see that the twin of the device cannot be found in the Azure Digital
If you no longer need the resources created in this article, follow these steps to delete them.
-Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resources in a resource group with the [az group delete](/cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_delete) command. This removes the resource group; the Azure Digital Twins instance; the IoT hub and the hub device registration; the event grid topic and associated subscriptions; the event hubs namespace and both Azure Functions apps, including associated resources like storage.
+Using the Azure Cloud Shell or local Azure CLI, you can delete all Azure resources in a resource group with the [az group delete](/cli/azure/group?view=azure-cli-latest&preserve-view=true#az_group_delete) command. This command removes the resource group; the Azure Digital Twins instance; the IoT hub and the hub device registration; the event grid topic and associated subscriptions; the event hubs namespace and both Azure Functions apps, including associated resources like storage.
> [!IMPORTANT] > Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
For more information about using HTTP requests with Azure functions, see:
* [Azure Http request trigger for Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md)
-You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following:
+You can write custom logic to automatically provide this information using the model and graph data already stored in Azure Digital Twins. To read more about managing, upgrading, and retrieving information from the twins graph, see the following how-to guides:
* [Manage a digital twin](how-to-manage-twin.md) * [Query the twin graph](how-to-query-graph.md)
digital-twins How To Use Tags https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/how-to-use-tags.md
description: See how to implement tags on digital twins Previously updated : 7/22/2020 Last updated : 8/19/2021
digital-twins Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/overview.md
description: Overview of what can be done with Azure Digital Twins. Previously updated : 3/12/2020 Last updated : 8/19/2021
You can view a list of **common IoT terms** and their uses across the Azure IoT
* Dive into working with Azure Digital Twins in [Get started with Azure Digital Twins Explorer](quickstart-azure-digital-twins-explorer.md).
-* Or, start reading about Azure Digital Twins concepts with [Custom models](concepts-models.md).
+* Or, start reading about Azure Digital Twins concepts with [DTDL models](concepts-models.md).
digital-twins Troubleshoot Error 403 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-error-403.md
Previously updated : 7/20/2020 Last updated : 8/20/2021 # Service request failed. Status: 403 (Forbidden)
This error may occur on many types of service requests that require authenticati
### Cause #1
-Most often, this error indicates that your Azure role-based access control (Azure RBAC) permissions for the service are not set up correctly. Many actions for an Azure Digital Twins instance require you to have the *Azure Digital Twins Data Owner* role **on the instance you are trying to manage**.
+Most often, this error indicates that your Azure role-based access control (Azure RBAC) permissions for the service aren't set up correctly. Many actions for an Azure Digital Twins instance require you to have the Azure Digital Twins Data Owner role **on the instance you are trying to manage**.
### Cause #2
-If you are using a client app to communicate with Azure Digital Twins that is authenticating with an [app registration](./how-to-create-app-registration-portal.md), this error may happen because your app registration does not have permissions set up for the Azure Digital Twins service.
+If you're using a client app to communicate with Azure Digital Twins that's authenticating with an [app registration](./how-to-create-app-registration-portal.md), this error may happen because your app registration doesn't have permissions set up for the Azure Digital Twins service.
The app registration must have access permissions configured for the Azure Digital Twins APIs. Then, when your client app authenticates against the app registration, it will be granted the permissions that the app registration has configured.
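For orientation, a hedged sketch of a client app authenticating through an app registration is shown below. The tenant ID, client ID, client secret, and instance host name are placeholders, not values from this article.

```csharp
using System;
using Azure.DigitalTwins.Core;
using Azure.Identity;

// Authenticate against the app registration using a client secret (placeholders shown).
var credential = new ClientSecretCredential(
    "<tenant-id>", "<app-registration-client-id>", "<client-secret>");

// The resulting client receives whatever permissions the app registration has configured.
var client = new DigitalTwinsClient(new Uri("https://<your-instance-hostname>"), credential);
```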
The app registration must have access permissions configured for the Azure Digit
### Solution #1
-The first solution is to verify that your Azure user has the _**Azure Digital Twins Data Owner**_ role on the instance you are trying to manage. If you do not have this role, set it up.
+The first solution is to verify that your Azure user has the Azure Digital Twins Data Owner role on the instance you're trying to manage. If you don't have this role, set it up.
-Note that this role is different from...
-* the former name for this role during preview, *Azure Digital Twins Owner (Preview)* (the role is the same, but the name has changed)
-* the *Owner* role on the entire Azure subscription. *Azure Digital Twins Data Owner* is a role within Azure Digital Twins and is scoped to this individual Azure Digital Twins instance.
-* the *Owner* role in Azure Digital Twins. These are two distinct Azure Digital Twins management roles, and *Azure Digital Twins Data Owner* is the role that should be used for management.
+This role is different from...
+* the former name for this role during preview, Azure Digital Twins Owner (Preview). In this case, the role is the same, but the name has changed.
+* the Owner role on the entire Azure subscription. Azure Digital Twins Data Owner is a role within Azure Digital Twins and is scoped to this individual Azure Digital Twins instance.
+* the Owner role in Azure Digital Twins. These are two distinct Azure Digital Twins management roles, and Azure Digital Twins Data Owner is the role that should be used for management.
#### Check current setup
Note that this role is different from...
#### Fix issues
-If you do not have this role assignment, someone with an Owner role in your **Azure subscription** should run the following command to give your Azure user the *Azure Digital Twins Data Owner* role on the **Azure Digital Twins instance**.
+If you don't have this role assignment, someone with an Owner role in your **Azure subscription** should run the following command to give your Azure user the Azure Digital Twins Data Owner role on the **Azure Digital Twins instance**.
If you're an Owner on the subscription, you can run this command yourself. If you're not, contact an Owner to run this command on your behalf.
If you're an Owner on the subscription, you can run this command yourself. If yo
az dt role-assignment create --dt-name <your-Azure-Digital-Twins-instance> --assignee "<your-Azure-AD-email>" --role "Azure Digital Twins Data Owner" ```
-For more details about this role requirement and the assignment process, see the [Set up your user's access permissions section](how-to-set-up-instance-CLI.md#set-up-user-access-permissions) of *How-to: Set up an instance and authentication (CLI or portal)*.
+For more information about this role requirement and the assignment process, see the [Set up your user's access permissions section](how-to-set-up-instance-CLI.md#set-up-user-access-permissions) of *How-to: Set up an instance and authentication (CLI or portal)*.
-If you have this role assignment already *and* you're using an Azure AD app registration to authenticate a client app, you can continue to the next solution if this solution did not resolve the 403 issue.
+If you have this role assignment already **and** you're using an Azure AD app registration to authenticate a client app, you can continue to the next solution if this solution didn't resolve the 403 issue.
### Solution #2
-If you're using an Azure AD app registration to authenticate a client app, the second possible solution is to verify that the app registration has permissions configured for the Azure Digital Twins service. If these are not configured, set them up.
+If you're using an Azure AD app registration to authenticate a client app, the second possible solution is to verify that the app registration has permissions configured for the Azure Digital Twins service. If these permissions aren't configured, set them up.
#### Check current setup
-To check whether the permissions have been configured correctly, navigate to the [Azure AD app registration overview page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal. You can get to this page yourself by searching for *App registrations* in the portal search bar.
+To check whether the permissions have been configured correctly, navigate to the [Azure AD app registration overview page](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal. You can get to this page yourself by searching for **App registrations** in the portal search bar.
-Switch to the *All applications* tab to see all the app registrations that have been created in your subscription.
+Switch to the **All applications** tab to see all the app registrations that have been created in your subscription.
-You should see the app registration you just created in the list. Select it to open up its details.
+You should see the app registration you created in the list. Select it to open up its details.
:::image type="content" source="media/troubleshoot-error-403/app-registrations.png" alt-text="Screenshot of the app registrations page in the Azure portal.":::
-First, verify that the Azure Digital Twins permissions settings were properly set on the registration. To do this, select *Manifest* from the menu bar to view the app registration's manifest code. Scroll to the bottom of the code window and look for these fields under `requiredResourceAccess`. The values should match those in the screenshot below:
+First, verify that the Azure Digital Twins permissions settings were properly set on the registration: Select **Manifest** from the menu bar to view the app registration's manifest code. Scroll to the bottom of the code window and look for these fields under `requiredResourceAccess`. The values should match the ones in the screenshot below:
:::image type="content" source="media/troubleshoot-error-403/verify-manifest.png" alt-text="Screenshot of the manifest for the Azure AD app registration in the Azure portal.":::
-Next, select *API permissions* from the menu bar to verify that this app registration contains Read/Write permissions for Azure Digital Twins. You should see an entry like this:
+Next, select **API permissions** from the menu bar to verify that this app registration contains Read/Write permissions for Azure Digital Twins. You should see an entry like this:
:::image type="content" source="media/troubleshoot-error-403/verify-api-permissions.png" alt-text="Screenshot of the API permissions for the Azure AD app registration in the Azure portal, showing 'Read/Write Access' for Azure Digital Twins.":::
digital-twins Troubleshoot Metrics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/digital-twins/troubleshoot-metrics.md
# Troubleshooting Azure Digital Twins: Metrics
-The metrics described in this article give you information about the state of Azure Digital Twins resources in your Azure subscription. Azure Digital Twins metrics help you assess the overall health of Azure Digital Twins service and the resources connected to it. These user-facing statistics help you see what is going on with your Azure Digital Twins and help perform root-cause analysis on issues without needing to contact Azure support.
+The metrics described in this article give you information about the state of Azure Digital Twins resources in your Azure subscription. Azure Digital Twins metrics help you assess the overall health of the Azure Digital Twins service and the resources connected to it. These user-facing statistics help you see what's going on with your Azure Digital Twins instance and help you analyze the root causes of issues without needing to contact Azure support.
Metrics are enabled by default. You can view Azure Digital Twins metrics from the [Azure portal](https://portal.azure.com).
The following tables describe the metrics tracked by each Azure Digital Twins in
You can configure these metrics to track when you're approaching a [published service limit](reference-service-limits.md#functional-limits) for some aspect of your solution.
-To set this up, use the [alerts](troubleshoot-alerts.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
+To set up tracking, use the [alerts](troubleshoot-alerts.md) feature in Azure Monitor. You can define thresholds for these metrics so that you receive an alert when a metric reaches a certain percentage of its published limit.
| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions | | | | | | | |
-| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you are approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
-| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you are approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
+| TwinCount | Twin Count (Preview) | Count | Total | Total number of twins in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of twins allowed per instance. | None |
+| ModelCount | Model Count (Preview) | Count | Total | Total number of models in the Azure Digital Twins instance. Use this metric to determine if you're approaching the [service limit](reference-service-limits.md#functional-limits) for max number of models allowed per instance. | None |
#### API request metrics
Metrics having to do with API requests:
| | | | | | | | ApiRequests | API Requests | Count | Total | The number of API Requests made for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text | | ApiRequestsFailureRate | API Requests Failure Rate | Percent | Average | The percentage of API requests that the service receives for your instance that give an internal error (500) response code for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol, <br>Status Code, <br>Status Code Class, <br>Status Text
-| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
+| ApiRequestsLatency | API Requests Latency | Milliseconds | Average | The response time for API requests. This value refers to the time from when the request is received by Azure Digital Twins until the service sends a success/fail result for Digital Twins read, write, delete, and query operations. | Authentication, <br>Operation, <br>Protocol |
#### Billing metrics
Metrics having to do with billing:
| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions | | | | | | | | | BillingApiOperations | Billing API Operations | Count | Total | Billing metric for the count of all API requests made against the Azure Digital Twins service. | Meter ID |
-| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this will be counted as additional messages in 1 KB increments (so a message between 1 and 2 KB will be counted as 2 messages, between 2 and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responsesΓÇöso a call that returns 1.5KB in the response body, for example, will be billed as 2 operations. | Meter ID |
-| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There is also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
+| BillingMessagesProcessed | Billing Messages Processed | Count | Total | Billing metric for the number of messages sent out from Azure Digital Twins to external endpoints.<br><br>To be considered a single message for billing purposes, a payload must be no larger than 1 KB. Payloads larger than this limit will be counted as additional messages in 1 KB increments (so a message between 1 KB and 2 KB will be counted as 2 messages, between 2 KB and 3 KB will be 3 messages, and so on).<br>This restriction also applies to responses, so a call that returns 1.5 KB in the response body, for example, will be billed as 2 operations. | Meter ID |
+| BillingQueryUnits | Billing Query Units | Count | Total | The number of Query Units, an internally computed measure of service resource usage, consumed to execute queries. There's also a helper API available for measuring Query Units: [QueryChargeHelper Class](/dotnet/api/azure.digitaltwins.core.querychargehelper?view=azure-dotnet&preserve-view=true) | Meter ID |
-For more details on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
+For more information on the way Azure Digital Twins is billed, see [Azure Digital Twins pricing](https://azure.microsoft.com/pricing/details/digital-twins/).
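To illustrate the QueryChargeHelper mentioned in the table above, a hedged sketch of reading the query-unit charge per page of query results might look like this (the instance host name is a placeholder):

```csharp
using System;
using Azure;
using Azure.DigitalTwins.Core;
using Azure.Identity;

var client = new DigitalTwinsClient(
    new Uri("https://<your-instance-hostname>"), new DefaultAzureCredential());

// Run a query and inspect the Query Units consumed by each page of results.
AsyncPageable<BasicDigitalTwin> results =
    client.QueryAsync<BasicDigitalTwin>("SELECT * FROM digitaltwins");

await foreach (Page<BasicDigitalTwin> page in results.AsPages())
{
    if (page.TryGetQueryCharge(out float charge))
    {
        Console.WriteLine($"Query charge for this page: {charge}");
    }
}
```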
#### Ingress metrics
Metrics having to do with data ingress:
| | | | | | | | IngressEvents | Ingress Events | Count | Total | The number of incoming telemetry events into Azure Digital Twins. | Result | | IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result |
-| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it is ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
+| IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
#### Routing metrics
Metrics having to do with routing:
| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions | | | | | | | | | MessagesRouted | Messages Routed | Count | Total | The number of messages routed to an endpoint Azure service such as Event Hub, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they are routed from Azure Digital Twins to an endpoint Azure service such as Event Hub, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
-| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it is posted to the endpoint Azure service such as Event Hub, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingFailureRate | Routing Failure Rate | Percent | Average | The percentage of events that result in an error as they're routed from Azure Digital Twins to an endpoint Azure service such as Event Hub, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
+| RoutingLatency | Routing Latency | Milliseconds | Average | Time elapsed between an event getting routed from Azure Digital Twins to when it's posted to the endpoint Azure service such as Event Hub, Service Bus, or Event Grid. | Endpoint Type, <br>Result |
## Dimensions
event-grid Event Handlers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/event-handlers.md
# Event handlers destinations in Event Grid on Kubernetes An event handler is any system that exposes an endpoint and is the destination for events sent by Event Grid. An event handler receiving an event acts upon it and uses the event payload to execute some logic, which might lead to the occurrence of new events.
-The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create), [management SDK](../sdk-overview.md#management-sdks), or using direct HTTPs calls using the [2020-10-15-preview API](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions/createorupdate) version.
+The way to configure Event Grid to send events to a destination is through the creation of an event subscription. It can be done through [Azure CLI](/cli/azure/eventgrid/event-subscription#az_eventgrid_event_subscription_create), [management SDK](../sdk-overview.md#management-sdks), or direct HTTPS calls using the [2020-10-15-preview API](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update) version.
In general, Event Grid on Kubernetes can send events to any destination via **Webhooks**. Webhooks are HTTP(s) endpoints exposed by a service or workload to which Event Grid has access. The webhook can be a workload hosted in the same cluster, in the same network space, on the cloud, on-prem or anywhere that Event Grid can reach.
In addition to Webhooks, Event Grid on Kubernetes can send events to the followi
## Feature parity
-Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST api version 2020-10-15-preview](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
+Event Grid on Kubernetes offers a good level of feature parity with Azure Event Grid's support for event subscriptions. The following list enumerates the main differences in event subscription functionality. Apart from those differences, you can use Azure Event Grid's [REST API version 2020-10-15-preview](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions) as a reference when managing event subscriptions on Event Grid on Kubernetes.
-1. Use [REST api version 2020-10-15-preview](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions).
+1. Use [REST API version 2020-10-15-preview](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions).
2. [Azure Event Grid trigger for Azure Functions](../../azure-functions/functions-bindings-event-grid-trigger.md?tabs=csharp%2Cconsole) isn't supported. You can use a WebHook destination type to deliver events to Azure Functions. 3. There's no [dead letter location](../manage-event-delivery.md#set-dead-letter-location) support. That means that you cannot use ``properties.deadLetterDestination`` in your event subscription payload. 4. Azure Relay's Hybrid Connections as a destination isn't supported yet.
-5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions/createorupdate#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
-6. Labels ([properties.labels](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions/createorupdate#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they are not available.
-7. [Delivery with resource identity](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions/createorupdate#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions/createorupdate#eventsubscriptionidentity) aren't supported.
+5. Only CloudEvents schema is supported. The supported schema value is "[CloudEventSchemaV1_0](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#eventdeliveryschema)". Cloud Events schema is extensible and based on open standards.
+6. Labels ([properties.labels](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#request-body)) aren't applicable to Event Grid on Kubernetes. Hence, they are not available.
+7. [Delivery with resource identity](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#deliverywithresourceidentity) isn't supported. So, all properties for [Event Subscription Identity](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update#eventsubscriptionidentity) aren't supported.
8. [Destination endpoint validation](../webhook-event-delivery.md#endpoint-validation-with-event-grid-events) isn't supported yet.
## Event filtering in event subscriptions
To publish to a Storage Queue, set the `endpointType` to `storageQueue` and pro
## Next steps
* Add [filter configuration](filter-events.md) to your event subscription to select the events to be delivered.
-* To learn about schemas supported by Event Grid on Azure Arc for Kubernetes, see [Event Grid on Kubernetes - Event schemas](event-schemas.md).
+* To learn about schemas supported by Event Grid on Azure Arc for Kubernetes, see [Event Grid on Kubernetes - Event schemas](event-schemas.md).
event-grid Features https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/features.md
# Event Grid on Kubernetes with Azure Arc features
-Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [rest API](/rest/api/eventgrid/version2020-10-15-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
+Event Grid on Kubernetes offers a rich set of features that help you integrate your Kubernetes workloads and realize hybrid architectures. It shares the same [REST API](/rest/api/eventgrid/version2021-06-01-preview/topics) (starting with version 2020-10-15-preview), [Event Grid CLI](/cli/azure/eventgrid), Azure portal experience, [management SDKs](../sdk-overview.md#management-sdks), and [data plane SDKs](../sdk-overview.md#data-plane-sdks) with Azure Event Grid, the other edition of the same service. When you're ready to publish events, you can use the [data plane SDK examples provided in different languages](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) that work for both editions of Event Grid.
Although Event Grid on Kubernetes and Azure Event Grid share many features and the goal is to provide the same user experience, there are some differences given the unique requirements they seek to meet and the stage they are at in their software lifecycle. For example, the only type of topic available in Event Grid on Kubernetes is Event Grid topics, which are sometimes also referred to as custom topics. Other types of topics (see below) are either not applicable or support for them is not yet available. The main differences between the two editions of Event Grid are presented in the table below.
| Feature | Event Grid on Kubernetes | Azure Event Grid |
|:--|:-:|:-:|
-| [Event Grid Topics](/rest/api/eventgrid/version2020-10-15-preview/topics) | ✔ | ✔ |
+| [Event Grid Topics](/rest/api/eventgrid/version2021-06-01-preview/topics) | ✔ | ✔ |
| [CNCF Cloud Events schema](https://github.com/cloudevents/spec/blob/master/spec.md) | ✔ | ✔ |
| Event Grid and custom schemas | ✘* | ✔ |
| Reliable delivery | ✔ | ✔ |
| Azure Relay's Hybrid Connections as a destination | ✘ | ✔ |
| [Advanced filtering](filter-events.md) | ✔*** | ✔ |
| [Webhook AuthN/AuthZ with AAD](../secure-webhook-delivery.md) | ✘ | ✔ |
-| [Event delivery with resource identity](/rest/api/eventgrid/version2020-10-15-preview/eventsubscriptions/createorupdate#deliverywithresourceidentity) | ✘ | ✔ |
+| [Event delivery with resource identity](/rest/api/eventgrid/version2021-06-01-preview/event-subscriptions/create-or-update) | ✘ | ✔ |
| Same set of data plane SDKs | ✔ | ✔ |
| Same set of management SDKs | ✔ | ✔ |
| Same Event Grid CLI | ✔ | ✔ |
\*** Event Grid on Kubernetes supports advanced filtering of events based on values in event data as Event Grid on Azure does, but there are a few features and operators that Event Grid on Kubernetes doesn't support. For more information, see [Advanced filtering](filter-events.md#filter-by-values-in-event-data).
## Next steps
-To learn more about Event Grid on Kubernetes, see [Event Grid on Kubernetes with Azure Arc (Preview) - overview](overview.md).
+To learn more about Event Grid on Kubernetes, see [Event Grid on Kubernetes with Azure Arc (Preview) - overview](overview.md).
event-grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/event-grid/kubernetes/overview.md
Event Grid on Kubernetes supports various event-driven integration scenarios. Ho
"As an owner of a system deployed to a Kubernetes cluster, I want to communicate my system's state changes by publishing events and configuring routing of those events so that event handlers, under my control or otherwise, can process my system's events in a way they see fit."
-**Feature** that helps you realize above requirement: [Event Grid Topics](/rest/api/eventgrid/version2020-10-15-preview/topics).
+**Feature** that helps you realize the above requirement: [Event Grid Topics](/rest/api/eventgrid/version2021-06-01-preview/topics).
### Event Grid on Kubernetes at a glance
From the user perspective, Event Grid on Kubernetes is composed of the following resources in blue:
With Event Grid on Kubernetes, you can forward events to Azure for further proce
Event handler destinations can be any HTTPS or HTTP endpoint to which Event Grid can reach through the network, public or private, and has access (not protected with some authentication mechanism). You define event delivery destinations when you create an event subscription. For more information, see [event handlers](event-handlers.md).
## Features
-Event Grid on Kubernetes supports [Event Grid Topics](/rest/api/eventgrid/version2020-10-15-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
+Event Grid on Kubernetes supports [Event Grid Topics](/rest/api/eventgrid/version2021-06-01-preview/topics), which is a feature also offered by [Azure Event Grid](../custom-topics.md). Event Grid topics help you realize the [primary integration use case](#use-case) where your requirements call for integrating your system with another workload that you own or otherwise is made accessible to your system.
Some of the capabilities you get with Azure Event Grid on Kubernetes are:
Here are more resources that you can use:
* [Data plane SDKs](../sdk-overview.md#data-plane-sdks).
* [Publish events examples using the Data plane SDKs](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/).
* [Event Grid CLI](/cli/azure/eventgrid).
-* [Management SDKs](../sdk-overview.md#management-sdks).
+* [Management SDKs](../sdk-overview.md#management-sdks).
hdinsight Hdinsight Apps Install Applications https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hdinsight/hdinsight-apps-install-applications.md
The following list shows the published applications:
|[H2O SparklingWater for HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/h2o-ai.h2o-sparklingwater) |Spark |H2O Sparkling Water supports the following distributed algorithms: GLM, Naïve Bayes, Distributed Random Forest, Gradient Boosting Machine, Deep Neural Networks, Deep learning, K-means, PCA, Generalized Low Rank Models, Anomaly Detection, Autoencoders. |
|[Striim for Real-Time Data Integration to HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/striim.striimbyol) |Hadoop,HBase,Storm,Spark,Kafka |Striim (pronounced "stream") is an end-to-end streaming data integration + intelligence platform, enabling continuous ingestion, processing, and analytics of disparate data streams. |
|[Jumbune Enterprise-Accelerating BigData Analytics](https://azuremarketplace.microsoft.com/marketplace/apps/impetus-infotech-india-pvt-ltd.impetus_jumbune) |Hadoop, Spark |At a high level, Jumbune assists enterprises by, 1. Accelerating Tez, MapReduce & Spark engine based Hive, Java, Scala workload performance. 2. Proactive Hadoop Cluster Monitoring, 3. Establishing Data Quality management on distributed file system. |
-|[Kyligence Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/kyligence.kyligence) |Hadoop,HBase,Spark |Powered by Apache Kylin, Kyligence Enterprise Enables BI on Big Data. As an enterprise OLAP engine on Hadoop, Kyligence Enterprise empowers business analyst to architect BI on Hadoop with industry-standard data warehouse and BI methodology. |
+|[Kyligence Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/kyligence.kyligence-cloud-saas) |Hadoop,HBase,Spark |Powered by Apache Kylin, Kyligence Enterprise enables BI on big data. As an enterprise OLAP engine on Hadoop, Kyligence Enterprise empowers business analysts to architect BI on Hadoop with industry-standard data warehouse and BI methodology. |
|[Starburst Presto for Azure HDInsight](https://azuremarketplace.microsoft.com/marketplace/apps/starburstdatainc1582306810515.starburst-enterprise-presto?tab=Overview) |Hadoop |Presto is a fast and scalable distributed SQL query engine. Architected for the separation of storage and compute, Presto is perfect for querying data in Azure Data Lake Storage, Azure Blob Storage, SQL and NoSQL databases, and other data sources. |
|[StreamSets Data Collector for HDInsight Cloud](https://azuremarketplace.microsoft.com/marketplace/apps/streamsets.streamsets-data-collector-hdinsight) |Hadoop,HBase,Spark,Kafka |StreamSets Data Collector is a lightweight, powerful engine that streams data in real time. Use Data Collector to route and process data in your data streams. It comes with a 30 day trial license. |
|[Trifacta Wrangler Enterprise](https://azuremarketplace.microsoft.com/marketplace/apps/trifacta.trifacta-db?tab=Overview) |Hadoop, Spark,HBase |Trifacta Wrangler Enterprise for HDInsight supports enterprise-wide data wrangling for any scale of data. The cost of running Trifacta on Azure is a combination of Trifacta subscription costs plus the Azure infrastructure costs for the virtual machines. |
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/fhir-features-supported.md
Previous versions also currently supported include: `3.0.2`
## REST API
-| API | Supported - PaaS | Supported - OSS (SQL) | Supported - OSS (Cosmos DB) | Comment |
-|--|--|--|--|--|
-| read | Yes | Yes | Yes | |
-| vread | Yes | Yes | Yes | |
-| update | Yes | Yes | Yes | |
-| update with optimistic locking | Yes | Yes | Yes | |
-| update (conditional) | Yes | Yes | Yes | |
-| patch | No | No | No | |
-| delete | Yes | Yes | Yes | See Note below. |
-| delete (conditional) | Yes | Yes | Yes | |
-| history | Yes | Yes | Yes | |
-| create | Yes | Yes | Yes | Support both POST/PUT |
-| create (conditional) | Yes | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
-| search | Partial | Partial | Partial | See [Overview of FHIR Search](overview-of-search.md). |
-| chained search | Partial | Yes | Partial | See Note 2 below. |
-| reverse chained search | Partial | Yes | Partial | See Note 2 below. |
-| capabilities | Yes | Yes | Yes | |
-| batch | Yes | Yes | Yes | |
-| transaction | No | Yes | No | |
-| paging | Partial | Partial | Partial | `self` and `next` are supported |
-| intermediaries | No | No | No | |
-
-> [!Note]
-> Delete defined by the FHIR spec requires that after deleting, subsequent non-version specific reads of a resource returns a 410 HTTP status code and the resource is no longer found through searching. The Azure API for FHIR also enables you to fully delete (including all history) the resource. To fully delete the resource, you can pass a parameter settings `hardDelete` to true (`DELETE {server}/{resource}/{id}?hardDelete=true`). If you do not pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
->
-> In the Azure API for FHIR and the open-source FHIR server backed by Cosmos, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 100 results, an error will be thrown. By default, chained search is behind a feature flag. To use the chained searching on Cosmos DB, use the header `x-ms-enable-chained-search: true`. For more details, see [PR 1695](https://github.com/microsoft/fhir-server/pull/1695).
+| API | Azure API for FHIR | FHIR service in Healthcare APIs | Comment |
+|--|--|--|--|
+| read | Yes | Yes | |
+| vread | Yes | Yes | |
+| update | Yes | Yes | |
+| update with optimistic locking | Yes | Yes |
+| update (conditional) | Yes | Yes |
+| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
+| patch (conditional) | No | No |
+| delete | Yes | Yes | See details in the delete section below |
+| delete (conditional) | Yes | Yes | See details in the delete section below |
+| history | Yes | Yes |
+| create | Yes | Yes | Support both POST/PUT |
+| create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
+| search | Partial | Partial | See [Overview of FHIR Search](overview-of-search.md). |
+| chained search | Yes | Yes | See Note below. |
+| reverse chained search | Yes | Yes | See Note below. |
+| batch | Yes | Yes |
+| transaction | No | Yes |
+| paging | Partial | Partial | `self` and `next` are supported |
+| intermediaries | No | No |
+
+> [!Note]
+> In the Azure API for FHIR and the open-source FHIR server backed by Cosmos, the chained search and reverse chained search is an MVP implementation. To accomplish chained search on Cosmos DB, the implementation walks down the search expression and issues sub-queries to resolve the matched resources. This is done for each level of the expression. If any query returns more than 1000 results, an error will be thrown.
+
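To make the note concrete, a chained search follows a reference from one resource type into another within a single query. The request below is an illustrative sketch rather than an example from this article (the search parameter and value are hypothetical); it asks for patients whose referenced general practitioner has a matching name:

```http
GET {{FHIR URL}}/Patient?general-practitioner:Practitioner.name=Sarah
```

On the Cosmos DB-backed servers, a query like this is resolved by first finding the matching `Practitioner` resources and then issuing sub-queries for the `Patient` resources that reference them, which is why each level of the expression is subject to the result limit described in the note.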
+### Delete and conditional delete
+
+Delete defined by the FHIR spec requires that after deleting, subsequent non-version specific reads of a resource return a 410 HTTP status code and the resource is no longer found through searching. The Azure API for FHIR and the FHIR service also enable you to fully delete (including all history) the resource. To fully delete the resource, you can set the parameter `hardDelete` to true (`DELETE {server}/{resource}/{id}?hardDelete=true`). If you do not pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+
+In addition to delete, the Azure API for FHIR and the FHIR service support conditional delete, which allows you to pass search criteria to delete a resource. By default, conditional delete deletes one item at a time. You can also specify the `_count` parameter to delete up to 100 items at a time. Below are some examples of using conditional delete.
+
+To delete a single item using conditional delete, you must specify search criteria that return a single item.
+```http
+DELETE https://{{hostname}}/Patient?identifier=1032704
+```
+
+You can do the same search but include `hardDelete=true` to also delete all history.
+```http
+DELETE https://{{hostname}}/Patient?identifier=1032704&hardDelete=true
+```
+
+If you want to delete multiple resources, you can include `_count=100`, which will delete up to 100 resources that match the search criteria.
+```http
+DELETE https://{{hostname}}/Patient?identifier=1032704&_count=100
+```
## Extended Operations
The following supported operations extend the RESTful API.
-| Search parameter type | Supported - PaaS | Supported - OSS (SQL) | Supported - OSS (Cosmos DB) | Comment |
-||--|--|--||
-| $export (whole system) | Yes | Yes | Yes | |
-| Patient/$export | Yes | Yes | Yes | |
-| Group/$export | Yes | Yes | Yes | |
-| $convert-data | Yes | Yes | Yes | |
-| $validate | Yes | Yes | Yes | |
-| $member-match | Yes | Yes | Yes | |
-| $patient-everything | Yes | Yes | Yes | |
+| Search parameter type | Azure API for FHIR | FHIR service in Healthcare APIs | Comment |
+|--|--|--|--|
+| $export (whole system) | Yes | Yes | |
+| Patient/$export | Yes | Yes | |
+| Group/$export | Yes | Yes | |
+| $convert-data | Yes | Yes | |
+| $validate | Yes | Yes | |
+| $member-match | Yes | Yes | |
+| $patient-everything | Yes | Yes | |
+| $purge-history | Yes | Yes | |
## Persistence
Currently, the allowed actions for a given role are applied *globally* on the AP
## Service limits
-* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have this increased. The maximum available is 1,000,000.
+* [**Request Units (RUs)**](../../cosmos-db/concepts-limits.md) - You can configure up to 10,000 RUs in the portal for Azure API for FHIR. You will need a minimum of 400 RUs or 40 RUs/GB, whichever is larger. If you need more than 10,000 RUs, you can put in a support ticket to have the RUs increased. The maximum available is 1,000,000.
* **Bundle size** - Each bundle is limited to 500 items.
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/azure-api-for-fhir/how-to-run-a-reindex.md
Previously updated : 4/23/2021 Last updated : 8/23/2021 # Running a reindex job in Azure API for FHIR
-There are scenarios where you may have search or sort parameters in the Azure API for FHIR that haven't yet been indexed. This is particularly relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that have not yet been indexed in your database.
+There are scenarios where you may have search or sort parameters in the Azure API for FHIR that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that have not yet been indexed in your database.
> [!Warning]
> It's important that you read this entire article before getting started. A reindex job can be very performance intensive. This article includes options for how to throttle and control the reindex job.
Content-Location: https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b8
## How to check the status of a reindex job
-Once you've started a reindex job, you can check the status of the job using the following:
+Once you've started a reindex job, you can check the status of the job using the following call:
`GET {{FHIR URL}}/_operations/reindex/{{reindexJobId}}`
The following information is shown in the reindex job result:
* **progress**: Reindex job percent complete. Equals resourcesSuccessfullyReindexed/totalResourcesToReindex x 100.
-* **status**: This will state if the reindex job is queued, running, complete, failed, or canceled.
+* **status**: States if the reindex job is queued, running, complete, failed, or canceled.
-* **resources**: This lists all the resource types impacted by the reindex job.
+* **resources**: Lists all the resource types impacted by the reindex job.
## Delete a reindex job
A reindex job can be quite performance intensive. We've implemented some throt
> [!NOTE]
> It is not uncommon on large datasets for a reindex job to run for days. For a database with 30,000,000 resources, we noticed that it took 4-5 days at 100K RUs to reindex the entire database.
-Below is a table outlining the available parameters, defaults, and recommended ranges. You can use these parameters to either speed up the process (use more compute) or slow down the process (use less compute). For example, you could run the reindex job on a low traffic time and increase your compute to get it done quicker. Instead, you could use the settings to ensure a very low usage of compute and have it run for days in the background.
+Below is a table outlining the available parameters, their defaults, and the ranges you can set. You can use these parameters to either speed up the process (use more compute) or slow down the process (use less compute). For example, you could run the reindex job at a low-traffic time and increase your compute to get it done quicker. Alternatively, you could use the settings to ensure a low usage of compute and have it run for days in the background.
-| **Parameter** | **Description** | **Default** | **Recommended Range** |
+| **Parameter** | **Description** | **Default** | **Available Range** |
| --- | --- | --- | --- |
-| QueryDelayIntervalInMilliseconds | This is the delay between each batch of resources being kicked off during the reindex job. | 500 MS (.5 seconds) | 50 to 5000: 50 will speed up the reindex job and 5000 will slow it down from the default. |
-| MaximumResourcesPerQuery | This is the maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-500 |
-| MaximumConcurrency | This is the number of batches done at a time. | 1 | 1-5 |
-| targetDataStoreUsagePercentage | This allows you to specify what percent of your data store to use for the reindex job. For example, you could specify 50% and that would ensure that at most the reindex job would use 50% of available RUs on Cosmos DB. | No present, which means that up to 100% can be used. | 1-100 |
+| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speed up the job while a higher number will slow it down. | 500 ms (0.5 seconds) | 50-500,000 |
+| MaximumResourcesPerQuery | The maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-5000 |
+| MaximumConcurrency | The number of batches done at a time. | 1 | 1-10 |
+| targetDataStoreUsagePercentage | Allows you to specify what percent of your data store to use for the reindex job. For example, you could specify 50% and that would ensure that at most the reindex job would use 50% of available RUs on Cosmos DB. | Not present, which means that up to 100% can be used. | 0-100 |
If you want to use any of the parameters above, you can pass them into the Parameters resource when you start the reindex job.
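For illustration, the sketch below shows one way that Parameters resource might be shaped when starting the job. The `$reindex` endpoint, the parameter casing, and the use of `valueInteger` are assumptions to verify against the article's section on starting a reindex job; the parameter names come from the table above.

```http
POST {{FHIR URL}}/$reindex
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "maximumConcurrency", "valueInteger": 3 },
    { "name": "targetDataStoreUsagePercentage", "valueInteger": 50 }
  ]
}
```

With these example values, the job would run three batches at a time while keeping the reindex work to at most about half of the available RUs, trading some elapsed time for less impact on live traffic.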
In this article, you've learned how to start a reindex job. To learn how to de
>[!div class="nextstepaction"] >[Defining custom search parameters](how-to-do-custom-search.md)-
-
-
healthcare-apis Fhir Features Supported https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/fhir-features-supported.md
Previous versions also currently supported include: `3.0.2`
| update | Yes | Yes | |
| update with optimistic locking | Yes | Yes |
| update (conditional) | Yes | Yes |
-| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only |
+| patch | Yes | Yes | Support for [JSON Patch](https://www.hl7.org/fhir/http.html#patch) only. We have included a workaround to use JSON Patch in a bundle in [this PR](https://github.com/microsoft/fhir-server/pull/2143).|
| patch (conditional) | No | No |
-| delete | Yes | Yes | See Note below.|
-| delete (conditional) | Yes | Yes |
+| delete | Yes | Yes | See details in the delete section below |
+| delete (conditional) | Yes | Yes | See details in the delete section below |
| history | Yes | Yes |
| create | Yes | Yes | Support both POST/PUT |
| create (conditional) | Yes | Yes | Issue [#1382](https://github.com/microsoft/fhir-server/issues/1382) |
| paging | Partial | Partial | `self` and `next` are supported |
| intermediaries | No | No |
-> [!Note]
-> Delete defined by the FHIR spec requires that after deleting, subsequent non-version specific reads of a resource returns a 410 HTTP status code and the resource is no longer found through searching. The FHIR service in the Azure Healthcare APIs and the Azure API for FHIR also enable you to fully delete (including all history) the resource. To fully delete the resource, you can pass a parameter settings `hardDelete` to true (`DELETE {server}/{resource}/{id}?hardDelete=true`). If you do not pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+### Delete and conditional delete
+
+Delete defined by the FHIR spec requires that after deleting, subsequent non-version specific reads of a resource return a 410 HTTP status code and the resource is no longer found through searching. The Azure API for FHIR and the FHIR service also enable you to fully delete (including all history) the resource. To fully delete the resource, you can set the parameter `hardDelete` to true (`DELETE {server}/{resource}/{id}?hardDelete=true`). If you do not pass this parameter or set `hardDelete` to false, the historic versions of the resource will still be available.
+
+In addition to delete, the Azure API for FHIR and the FHIR service support conditional delete, which allows you to pass search criteria to delete a resource. By default, conditional delete deletes one item at a time. You can also specify the `_count` parameter to delete up to 100 items at a time. Below are some examples of using conditional delete.
+
+To delete a single item using conditional delete, you must specify search criteria that return a single item.
+```http
+DELETE https://{{hostname}}/Patient?identifier=1032704
+```
+
+You can do the same search but include `hardDelete=true` to also delete all history.
+```http
+DELETE https://{{hostname}}/Patient?identifier=1032704&hardDelete=true
+```
+
+If you want to delete multiple resources, you can include `_count=100`, which will delete up to 100 resources that match the search criteria.
+```http
+DELETE https://{{hostname}}/Patient?identifier=1032704&_count=100
+```
## Extended Operations
The FHIR service uses [Azure Active Directory](https://azure.microsoft.com/servi
* **Bundle size** - Each bundle is limited to 500 items.
-* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR services. This can be in one or many workspaces.
+* **Subscription Limit** - By default, each subscription is limited to a maximum of 10 FHIR services. These 10 FHIR services can be spread across one or many workspaces.
## Next steps
healthcare-apis How To Run A Reindex https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/healthcare-apis/fhir/how-to-run-a-reindex.md
Previously updated : 08/03/2021 Last updated : 08/23/2021 # Running a reindex job
> [!IMPORTANT]
> Azure Healthcare APIs is currently in PREVIEW. The [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-There are scenarios where you may have search or sort parameters in the FHIR service in the Azure Healthcare APIs (hear by called the FHIR service) that haven't yet been indexed. This is particularly relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that have not yet been indexed in your database.
+There are scenarios where you may have search or sort parameters in the FHIR service in the Azure Healthcare APIs (hereafter called the FHIR service) that haven't yet been indexed. This scenario is relevant when you define your own search parameters. Until the search parameter is indexed, it can't be used in search. This article covers an overview of how to run a reindex job to index any search or sort parameters that have not yet been indexed in your database.
> [!Warning]
> It's important that you read this entire article before getting started. A reindex job can be very performance intensive. This article includes options for how to throttle and control the reindex job.
Content-Location: https://{{FHIR URL}}/_operations/reindex/560c7c61-2c70-4c54-b8
## How to check the status of a reindex job
-Once you've started a reindex job, you can check the status of the job using the following:
+Once you've started a reindex job, you can check the status of the job using the following call:
`GET {{FHIR URL}}/_operations/reindex/{{reindexJobId}}`
The following information is shown in the reindex job result:
* **progress**: Reindex job percent complete. Equals resourcesSuccessfullyReindexed/totalResourcesToReindex x 100.
-* **status**: This will state if the reindex job is queued, running, complete, failed, or canceled.
+* **status**: States if the reindex job is queued, running, complete, failed, or canceled.
-* **resources**: This lists all the resource types impacted by the reindex job.
+* **resources**: Lists all the resource types impacted by the reindex job.
## Delete a reindex job
A reindex job can be quite performance intensive. We've implemented some throt
Below is a table outlining the available parameters, their defaults, and the ranges you can set. You can use these parameters to either speed up the process (use more compute) or slow down the process (use less compute).
-| **Parameter** | **Description** | **Default** | **Recommended Range** |
+| **Parameter** | **Description** | **Default** | **Available Range** |
| --- | --- | --- | --- |
-| QueryDelayIntervalInMilliseconds | This is the delay between each batch of resources being kicked off during the reindex job. | 500 MS (.5 seconds) | 50 to 5000: 50 will speed up the reindex job and 5000 will slow it down from the default. |
-| MaximumResourcesPerQuery | This is the maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-500 |
-| MaximumConcurrency | This is the number of batches done at a time. | 1 | 1-5 |
+| QueryDelayIntervalInMilliseconds | The delay between each batch of resources being kicked off during the reindex job. A smaller number will speed up the job while a higher number will slow it down. | 500 ms (0.5 seconds) | 50 to 500,000 |
+| MaximumResourcesPerQuery | The maximum number of resources included in the batch of resources to be reindexed. | 100 | 1-5000 |
+| MaximumConcurrency | The number of batches done at a time. | 1 | 1-10 |
If you want to use any of the parameters above, you can pass them into the Parameters resource when you start the reindex job.
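As in the Azure API for FHIR article earlier in this digest, a hedged sketch of that Parameters resource follows; the `$reindex` endpoint and the `valueInteger` types are assumptions to confirm against the article's section on starting a job, and the parameter names come from the table above.

```http
POST {{FHIR URL}}/$reindex
Content-Type: application/fhir+json

{
  "resourceType": "Parameters",
  "parameter": [
    { "name": "queryDelayIntervalInMilliseconds", "valueInteger": 1000 },
    { "name": "maximumConcurrency", "valueInteger": 2 }
  ]
}
```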
In this article, you've learned how to start a reindex job. To learn how to de
>[!div class="nextstepaction"] >[Defining custom search parameters](how-to-do-custom-search.md)-
-
-
hpc-cache Hpc Cache Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/hpc-cache-support-ticket.md
description: How to open a help request for Azure HPC Cache
Previously updated : 11/13/2019 Last updated : 08/19/2021
-# Open a support ticket for Azure HPC Cache
+# Contact support for help with Azure HPC Cache
-Use the Azure portal to open a support ticket. Navigate to your cache instance, then click the **New support request** link that appears at the bottom of the sidebar.
+The best way to get help with Azure HPC Cache is to use the Azure portal to open a support ticket.
+
+Navigate to your cache instance, then click the **New support request** link that appears at the bottom of the sidebar.
To open a ticket when you do not have an active cache, use the main Help + support page from the Azure portal. Open the portal menu from the control at the top left of the screen, then scroll to the bottom and click **Help + support**.
hpc-cache Troubleshoot Nas https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/hpc-cache/troubleshoot-nas.md
For systems that use ACLs, Azure HPC Cache needs to track additional user-specif
## Next steps
-If you have a problem that was not addressed in this article, [open a support ticket](hpc-cache-support-ticket.md) to get expert help.
+If you have a problem that was not addressed in this article, [contact support](hpc-cache-support-ticket.md) to get expert help.
iot-central Howto Manage Dashboards https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-central/core/howto-manage-dashboards.md
Title: Create and manage Azure IoT Central dashboards | Microsoft Docs
-description: Learn how to create and manage application and personal dashboards.
+description: Learn how to create and manage application and personal dashboards in Azure IoT Central.
Last updated 10/17/2019
# Create and manage dashboards
-The default *application dashboard* is the page that loads when you first navigate to your application. As an administrator, you can create up to 10 application dashboards that are visible to all application users. Only administrators can create, edit, and delete application level dashboards.
+The default *application dashboard* is the page that loads when you first go to your application. As an administrator, you can create up to 10 application dashboards that are visible to all application users. Only administrators can create, edit, and delete application-level dashboards.
All users can create their own *personal dashboards*. Users can switch between application dashboards and personal dashboards.
-## Create dashboard
+## Create a dashboard
-The following screenshot shows the dashboard in an application created from the **Custom Application** template. You can replace the default application dashboard with a personal dashboard, or if you're an administrator, another application level dashboard. To do so, select **+ New dashboard** at the top left of the page:
+The following screenshot shows the dashboard in an application created from the Custom Application template. You can replace the default application dashboard with a personal dashboard. If you're an administrator, you can also replace it with another application-level dashboard. To do so, select **New dashboard** in the upper-left corner of the page:
-Select **+ New dashboard** to open the dashboard editor. In the editor, give your dashboard a name and chose items from the library. The library contains the tiles and dashboard primitives you can use to customize the dashboard:
+Select **New dashboard** to open the dashboard editor. In the editor, give your dashboard a name and choose items from the library. The library contains the tiles and dashboard primitives you can use to customize the dashboard:
:::image type="content" source="media/howto-manage-dashboards/dashboard-library.png" alt-text="Screenshot that shows the dashboard library.":::
-If you're an administrator, you can create a personal dashboard or an application dashboard. All application users can see the application dashboards the administrator creates. All users can create personal dashboards, that only they can see.
+If you're an administrator, you can create a personal dashboard or an application dashboard. All application users can see the application dashboards the administrator creates. All users can create personal dashboards that only they can see.
Enter a title and select the type of dashboard you want to create. [Add tiles](#add-tiles) to customize your dashboard.
> [!TIP]
-> You need at least one device template in your application to be able to add tiles that show device information.
+> You need to have at least one device template in your application to be able to add tiles that show device information.
## Manage dashboards
-You can have several personal dashboards and switch between them or choose from one of the application dashboards:
+You can have several personal dashboards and switch among them or choose from one of the application dashboards:
:::image type="content" source="media/howto-manage-dashboards/switch-dashboards.png" alt-text="Screenshot that shows how to switch between dashboards.":::
-You can edit your personal dashboards and delete any dashboards you no longer need. If you're an **admin**, you can edit or delete application level dashboards as well.
+You can edit your personal dashboards and delete dashboards you don't need. If you're an admin, you can edit and delete application-level dashboards as well.
## Add tiles
-The following screenshot shows the dashboard in an application created from the **Custom application** template. To customize the current dashboard, select **Edit**. To add a personal or application dashboard, select **New**:
+The following screenshot shows the dashboard in an application created from the Custom Application template. To customize the current dashboard, select **Edit**. To add a personal or application dashboard, select **New dashboard**:
-After you select **Edit** or **New**, the dashboard is in *edit* mode. You can use the tools in the **Edit dashboard** panel to add tiles to the dashboard, and customize and remove tiles on the dashboard itself. For example, to add a **Line Chart** tile to track telemetry values over time reported by one or more devices:
+After you select **Edit** or **New dashboard**, the dashboard is in *edit* mode. You can use the tools in the **Edit dashboard** panel to add tiles to the dashboard. You can customize and remove tiles on the dashboard itself. For example, to add a line chart tile to track telemetry values reported by one or more devices over time:
-1. Select **Start with a Visual**, then choose **Line chart**, and then select **Add tile** or just drag and drop it on to the canvas.
+1. Select **Start with a Visual**, **Line chart**, and then **Add tile**, or just drag the tile onto the canvas.
-1. To configure the tile, select its gear icon. Enter a **Title** and select a **Device Group** and then choose your devices in the **Devices** dropdown to show on the tile.
+1. To configure the tile, select its **gear** button. Enter a **Title** and select a **Device Group**. In the **Devices** list, select the devices to show on the tile.
+ :::image type="content" source="media/howto-manage-dashboards/device-details.png" alt-text="Screenshot that shows adding a tile to a dashboard.":::
-When you've selected all the values to show on the tile, click **Update**
+1. After you select all the devices to show on the tile, select **Update**.
-When you've finished adding and customizing tiles on the dashboard, select **Save** to save the changes to the dashboard, which takes you out of edit mode.
+1. After you finish adding and customizing tiles on the dashboard, select **Save**. Doing so takes you out of edit mode.
## Customize tiles
-To edit a tile, you must be in edit mode. The available customization options depend on the [tile type](#tile-types):
+To edit a tile, you need to be in edit mode. The different [tile types](#tile-types) have different options for customization:
-* The ruler icon on a tile lets you change the visualization. Visualizations include line chart, bar chart, pie chart, last known value (LKV), key performance indicator (KPI), heatmap, and map.
+* The **ruler** button on a tile lets you change the visualization. Visualizations include line chart, bar chart, pie chart, last known value (LKV), key performance indicator (KPI), heat map, and map.
-* The square icon lets you resize the tile.
+* The **square** button lets you resize the tile.
-* The gear icon lets you configure the visualization. For example, for a line chart you can choose to show the legend and axes, and choose the time range to plot.
+* The **gear** button lets you configure the visualization. For example, for a line chart you can choose to show the legend and axes and choose the time range to plot.
## Tile types
-The following table describes the different types of tile you can add to a dashboard:
+This table describes the types of tiles you can add to a dashboard:
| Tile | Description |
| - | -- |
-| Markdown | Markdown tiles are clickable tiles that display a heading and description text formatted using markdown. The URL can be a relative link to another page in the application, or an absolute link to an external site.|
-| Image | Image tiles display a custom image and can be clickable. The URL can be a relative link to another page in the application, or an absolute link to an external site.|
-| Label | Label tiles display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard such descriptions, contact details, or help.|
+| Markdown | Markdown tiles are clickable tiles that display a heading and description text formatted in Markdown. The URL can be a relative link to another page in the application or an absolute link to an external site.|
+| Image | Image tiles display a custom image and can be clickable. The URL can be a relative link to another page in the application or an absolute link to an external site.|
+| Label | Label tiles display custom text on a dashboard. You can choose the size of the text. Use a label tile to add relevant information to the dashboard, like descriptions, contact details, or Help.|
| Count | Count tiles display the number of devices in a device group.|
-| Map(telemetry) | Map tiles display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display sampled route of where a device has been on the past week.|
-| Map(property) | Map tiles display the location of one or more devices on a map.|
-| KPI | KPI tiles display aggregate telemetry values for one or more devices over a time period. For example, you can use it to show the maximum temperature and pressure reached for one or more devices during the last hour.|
-| Line chart | Line chart tiles plot one or more aggregate telemetry values for one or more devices for a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices for the last hour.|
-| Bar chart | Bar chart tiles plot one or more aggregate telemetry values for one or more devices for a time period. For example, you can display a bar chart to show the average temperature and pressure of one or more devices over the last hour.|
-| Pie chart | Pie chart tiles display one or more aggregate telemetry values for one or more devices for a time period.|
-| Heat map | Heat map tiles display information about one or more devices, represented as colors.|
-| Last Known Value | Last known value tiles display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices. |
-| Event History | Event History tiles display the events for a device over a time period. For example, you can use it to show all the valve open and close events for one or more devices during the last hour.|
-| Property | Property tiles display the current value for properties and cloud properties of one or more devices. For example, you can use this tile to display device properties such as the manufacturer or firmware version for a device. |
-| State Chart | State chart plot changes for one or more devices over a set time range. For example, you can use this tile to display device properties such as the temperature changes for a device. |
-| Event Chart | Event chart displays telemetry events for one or more devices over a set time range. For example, you can use this tile to display the properties such as the temperature changes for a device. |
-| State History | State history lists and displays status changes for State telemetry.|
-| External Content | External content tile allows you to load external content from an external source. |
+| Map (telemetry) | Map tiles display the location of one or more devices on a map. You can also display up to 100 points of a device's location history. For example, you can display a sampled route of where a device has been in the past week.|
+| Map (property) | Map tiles display the location of one or more devices on a map.|
+| KPI | KPI tiles display aggregate telemetry values for one or more devices over a time period. For example, you can use them to show the maximum temperature and pressure reached for one or more devices during the past hour.|
+| Line chart | Line chart tiles plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a line chart to plot the average temperature and pressure of one or more devices during the past hour.|
+| Bar chart | Bar chart tiles plot one or more aggregate telemetry values for one or more devices over a time period. For example, you can display a bar chart to show the average temperature and pressure of one or more devices during the past hour.|
+| Pie chart | Pie chart tiles display one or more aggregate telemetry values for one or more devices over a time period.|
+| Heat map | Heat map tiles display information, represented in colors, about one or more devices.|
+| Last known value | Last known value tiles display the latest telemetry values for one or more devices. For example, you can use this tile to display the most recent temperature, pressure, and humidity values for one or more devices. |
+| Event history | Event history tiles display the events for a device over a time period. For example, you can use them to show all the valve open and valve close events for one or more devices during the past hour.|
+| Property | Property tiles display the current values for properties and cloud properties for one or more devices. For example, you can use this tile to display device properties like the manufacturer or firmware version. |
+| State chart | State chart tiles plot changes for one or more devices over a time period. For example, you can use this tile to display properties like the temperature changes for a device. |
+| Event chart | Event chart tiles display telemetry events for one or more devices over a time period. For example, you can use this tile to display properties like the temperature changes for a device. |
+| State history | State history tiles list and display status changes for state telemetry.|
+| External content | External content tiles allow you to load content from an external source. |
Currently, you can add up to 10 devices to tiles that support multiple devices.
-### Customizing visualizations
+### Customize visualizations
-By default, line charts show data over a range of time. The selected time range is split into 50 equal-sized buckets and the device data is then aggregated per bucket to give 50 data points over the selected time range. If you wish to view raw data, you can change your selection to view the last 100 values. To change the time range or to select raw data visualization, use the Display Range dropdown in the **Configure chart** panel.
+By default, line charts show data over a range of time. The selected time range is split into 50 equally sized partitions. The device data is then aggregated per partition to give 50 data points over the selected time range. If you want to view raw data, you can change your selection to view the last 100 values. To change the time range or to select raw data visualization, use the **Display range** dropdown list in the **Configure chart** panel:
-For tiles that display aggregate values, select the gear icon next to the telemetry type in the **Configure chart** panel to choose the aggregation. You can choose from average, sum, maximum, minimum, and count.
+For tiles that display aggregate values, select the **gear** button next to the telemetry type in the **Configure chart** panel to choose the aggregation. You can choose average, sum, maximum, minimum, or count.
-For line charts, bar charts, and pie charts, you can customize the color of the different telemetry values. Select the palette icon next to the telemetry you want to customize:
+For line charts, bar charts, and pie charts, you can customize the colors of the various telemetry values. Select the **palette** button next to the telemetry you want to customize:
-For tiles that show string properties or telemetry values, you can choose how to display the text. For example, if the device stores a URL in a string property, you can display it as a clickable link. If the URL references an image, you can render the image in a last known value or property tile. To change how a string displays, in the tile configuration select the gear icon next to the telemetry type or property:
+For tiles that show string properties or telemetry values, you can choose how to display the text. For example, if the device stores a URL in a string property, you can display it as a clickable link. If the URL references an image, you can render the image in a last known value or property tile. To change how a string displays, select the **gear** button next to the telemetry type or property in the tile configuration.
-For numeric KPI, LKV, and property tiles, you can use conditional formatting to customize the color of the tile based on its current value. To add conditional formatting, select **Configure** on the tile and then select the **Conditional formatting** icon next to the value to customize:
+For numeric KPI, LKV, and property tiles, you can use conditional formatting to customize the color of the tile based on its value. To add conditional formatting, select **Configure** on the tile and then select the **Conditional formatting** button next to the value you want to customize:
-Add your conditional formatting rules:
+Next, add your conditional formatting rules:
-The following screenshot shows the effect of the conditional formatting rule:
+The following screenshot shows the effect of those conditional formatting rules:
### Tile formatting
-This feature, available in KPI, LKV, and Property tiles, lets users adjust font size, choose decimal precision, abbreviate numeric values (for example format 1,700 as 1.7K), or wrap string values in their tiles.
+This feature is available on the KPI, LKV, and property tiles. It lets you adjust font size, choose decimal precision, abbreviate numeric values (for example, format 1,700 as 1.7K), or wrap string values on their tiles.
## Next steps
-Now that you've learned how to create and manage personal dashboards, you can [Learn how to manage your application preferences](howto-manage-preferences.md).
+Now that you've learned how to create and manage personal dashboards, you can [learn how to manage your application preferences](howto-manage-preferences.md).
iot-fundamentals Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/iot-fundamentals/security-recommendations.md
This article contains security recommendations for IoT. Implementing these recommendations will help you fulfill your security obligations as described in our shared responsibility model. For more information on what Microsoft does to fulfill service provider responsibilities, read [Shared responsibilities for cloud computing](https://gallery.technet.microsoft.com/Shared-Responsibilities-81d0ff91).
-Some of the recommendations included in this article can be automatically monitored by Azure Security Center. Azure Security Center is the first line of defense in protecting your resources in Azure. It periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities. It then provides you with recommendations on how to address them.
+Some of the recommendations included in this article can be automatically monitored by Azure Defender for IoT. Azure Defender for IoT is the first line of defense in protecting your resources in Azure. It periodically analyzes the security state of your Azure resources to identify potential security vulnerabilities. It then provides you with recommendations on how to address them.
-- For more information on Azure Security Center recommendations, see [Security recommendations in Azure Security Center](../security-center/security-center-recommendations.md).
-- For information on Azure Security Center see the [What is Azure Security Center?](../security-center/security-center-introduction.md)
+- For more information on Azure Defender for IoT recommendations, see [Security recommendations in Azure Defender for IoT](../security-center/security-center-recommendations.md).
+- For information on Azure Defender for IoT, see [What is Azure Defender for IoT?](../security-center/security-center-introduction.md)
## General
-| Recommendation | Comments | Supported by ASC |
-|-|-|--|
-| Stay up-to-date | Use the latest versions of supported platforms, programming languages, protocols, and frameworks. | - |
-| Keep authentication keys safe | Keep the device IDs and their authentication keys physically safe after deployment. This will avoid a malicious device masquerade as a registered device. | - |
-| Use device SDKs when possible | Device SDKs implement a variety of security features, such as, encryption, authentication, and so on, to assist you in developing a robust and secure device application. See [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) for more information. | - |
+| Recommendation | Comments |
+|-|-|
+| Stay up-to-date | Use the latest versions of supported platforms, programming languages, protocols, and frameworks. |
+| Keep authentication keys safe | Keep the device IDs and their authentication keys physically safe after deployment. This will prevent a malicious device from masquerading as a registered device. |
+| Use device SDKs when possible | Device SDKs implement a variety of security features, such as encryption and authentication, to assist you in developing a robust and secure device application. See [Understand and use Azure IoT Hub SDKs](../iot-hub/iot-hub-devguide-sdks.md) for more information. |
## Identity and access management
-| Recommendation | Comments | Supported by ASC |
-|-|-|--|
-| Define access control for the hub | [Understand and define the type of access](iot-security-deployment.md#securing-the-cloud) each component will have in your IoT Hub solution, based on the functionality. The allowed permissions are *Registry Read*, *RegistryReadWrite*, *ServiceConnect*, and *DeviceConnect*. Default [shared access policies in your IoT hub](../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions) can also help define the permissions for each component based on its role. | - |
-| Define access control for backend services | Data ingested by your IoT Hub solution can be consumed by other Azure services such as [Cosmos DB](../cosmos-db/index.yml), [Stream Analytics](../stream-analytics/index.yml), [App Service](../app-service/index.yml), [Logic Apps](../logic-apps/index.yml), and [Blob storage](../storage/blobs/storage-blobs-introduction.md). Make sure to understand and allow appropriate access permissions as documented for these services. | - |
+| Recommendation | Comments |
+|-|-|
+| Define access control for the hub | [Understand and define the type of access](iot-security-deployment.md#securing-the-cloud) each component will have in your IoT Hub solution, based on the functionality. The allowed permissions are *Registry Read*, *RegistryReadWrite*, *ServiceConnect*, and *DeviceConnect*. Default [shared access policies in your IoT hub](../iot-hub/iot-hub-dev-guide-sas.md#access-control-and-permissions) can also help define the permissions for each component based on its role. |
+| Define access control for backend services | Data ingested by your IoT Hub solution can be consumed by other Azure services such as [Cosmos DB](../cosmos-db/index.yml), [Stream Analytics](../stream-analytics/index.yml), [App Service](../app-service/index.yml), [Logic Apps](../logic-apps/index.yml), and [Blob storage](../storage/blobs/storage-blobs-introduction.md). Make sure to understand and allow appropriate access permissions as documented for these services. |
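As an illustrative sketch (not part of the original article), a backend component that only needs to manage the device registry can be handed a connection string from a shared access policy limited to registry permissions, and use it with the `azure-iot-hub` Python package; the policy name, keys, and device ID below are assumptions:

```python
# A sketch, assuming the azure-iot-hub package; the connection string comes from a
# shared access policy that grants only the registry permissions this component needs.
from azure.iot.hub import IoTHubRegistryManager

REGISTRY_CONNECTION_STRING = (
    "HostName=<your-hub>.azure-devices.net;"
    "SharedAccessKeyName=registryReadWrite;SharedAccessKey=<key>"
)

registry_manager = IoTHubRegistryManager(REGISTRY_CONNECTION_STRING)

# Register a device that authenticates with SAS keys; keys here are placeholders.
device = registry_manager.create_device_with_sas(
    "my-device-01",        # device_id
    "<primary-key>",       # primary key
    "<secondary-key>",     # secondary key
    "enabled",             # status
)
print(device.device_id, device.status)
```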
## Data protection
-| Recommendation | Comments | Supported by ASC |
-|-|-|--|
-| Secure device authentication | Ensure secure communication between your devices and your IoT hub, by using either [a unique identity key or security token](iot-security-deployment.md#iot-hub-security-tokens), or [an on-device X.509 certificate](iot-security-deployment.md#x509-certificate-based-device-authentication) for each device. Use the appropriate method to [use security tokens based on the chosen protocol (MQTT, AMQP, or HTTPS)](../iot-hub/iot-hub-dev-guide-sas.md). | - |
-| Secure device communication | IoT Hub secures the connection to the devices using Transport Layer Security (TLS) standard, supporting versions 1.2 and 1.0. Use [TLS 1.2](https://tools.ietf.org/html/rfc5246) to ensure maximum security. | - |
-| Secure service communication | IoT Hub provides endpoints to connect to backend services such as [Azure Storage](../storage/index.yml) or [Event Hubs](../event-hubs/index.yml) using only the TLS protocol, and no endpoint is exposed on an unencrypted channel. Once this data reaches these backend services for storage or analysis, make sure to employ appropriate security and encryption methods for that service, and protect sensitive information at the backend. | - |
+| Recommendation | Comments |
+|-|-|
+| Secure device authentication | Ensure secure communication between your devices and your IoT hub, by using either [a unique identity key or security token](iot-security-deployment.md#iot-hub-security-tokens), or [an on-device X.509 certificate](iot-security-deployment.md#x509-certificate-based-device-authentication) for each device. Use the appropriate method to [use security tokens based on the chosen protocol (MQTT, AMQP, or HTTPS)](../iot-hub/iot-hub-dev-guide-sas.md). |
+| Secure device communication | IoT Hub secures the connection to the devices using the Transport Layer Security (TLS) standard, supporting versions 1.2 and 1.0. Use [TLS 1.2](https://tools.ietf.org/html/rfc5246) to ensure maximum security. |
+| Secure service communication | IoT Hub provides endpoints to connect to backend services such as [Azure Storage](../storage/index.yml) or [Event Hubs](../event-hubs/index.yml) using only the TLS protocol, and no endpoint is exposed on an unencrypted channel. Once this data reaches these backend services for storage or analysis, make sure to employ appropriate security and encryption methods for that service, and protect sensitive information at the backend. |
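To make the "unique identity key or security token" row in the table above concrete, here is a sketch of the documented SAS token pattern (HMAC-SHA256 over the URL-encoded resource URI and expiry). The resource URI and key are dummy values; treat this as illustrative and prefer the device SDKs where possible:

```python
# Sketch of IoT Hub shared access signature (SAS) token generation.
import base64
import hashlib
import hmac
import time
import urllib.parse


def generate_sas_token(resource_uri: str, key: str, policy_name: str = None, ttl_seconds: int = 3600) -> str:
    expiry = int(time.time()) + ttl_seconds
    encoded_uri = urllib.parse.quote_plus(resource_uri)

    # Sign "<url-encoded-resource-uri>\n<expiry>" with the base64-decoded key.
    to_sign = f"{encoded_uri}\n{expiry}".encode("utf-8")
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key), to_sign, hashlib.sha256).digest()
    ).decode("utf-8")

    token = (
        f"SharedAccessSignature sr={encoded_uri}"
        f"&sig={urllib.parse.quote_plus(signature)}&se={expiry}"
    )
    if policy_name:
        token += f"&skn={policy_name}"
    return token


# Dummy base64 key so the sketch runs; a device-scoped token uses no policy name.
dummy_key = base64.b64encode(b"dummy-device-key").decode("utf-8")
print(generate_sas_token("myhub.azure-devices.net/devices/my-device-01", dummy_key))
```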
## Networking
-| Recommendation | Comments | Supported by ASC |
-|-|-|--|
-| Protect access to your devices | Keep hardware ports in your devices to a bare minimum to avoid unwanted access. Additionally, build mechanisms to prevent or detect physical tampering of the device. Read [IoT security best practices](iot-security-best-practices.md) for details. | - |
-| Build secure hardware | Incorporate security features such as encrypted storage, or Trusted Platform Module (TPM), to keep devices and infrastructure more secure. Keep the device operating system and drivers upgraded to latest versions, and if space permits, install antivirus and antimalware capabilities. Read [IoT security architecture](iot-security-architecture.md) to understand how this can help mitigate several security threats. | - |
+| Recommendation | Comments |
+|-|-|
+| Protect access to your devices | Keep hardware ports in your devices to a bare minimum to avoid unwanted access. Additionally, build mechanisms to prevent or detect physical tampering of the device. Read [IoT security best practices](iot-security-best-practices.md) for details. |
+| Build secure hardware | Incorporate security features such as encrypted storage, or Trusted Platform Module (TPM), to keep devices and infrastructure more secure. Keep the device operating system and drivers upgraded to latest versions, and if space permits, install antivirus and antimalware capabilities. Read [IoT security architecture](iot-security-architecture.md) to understand how this can help mitigate several security threats. |
## Monitoring
-| Recommendation | Comments | Supported by ASC |
+| Recommendation | Comments | Supported by Azure Defender |
|-|-|--|
-| Monitor unauthorized access to your devices | Use your device operating system's logging feature to monitor any security breaches or physical tampering of the device or its ports. | - |
-| Monitor your IoT solution from the cloud | Monitor the overall health of your IoT Hub solution using the [metrics in Azure Monitor](../iot-hub/monitor-iot-hub.md). | - |
-| Set up diagnostics | Closely watch your operations by logging events in your solution, and then sending the diagnostic logs to Azure Monitor to get visibility into the performance. Read [Monitor and diagnose problems in your IoT hub](../iot-hub/monitor-iot-hub.md) for more information. | - |
+| Monitor unauthorized access to your devices | Use your device operating system's logging feature to monitor any security breaches or physical tampering of the device or its ports. | Yes |
+| Monitor your IoT solution from the cloud | Monitor the overall health of your IoT Hub solution using the [metrics in Azure Monitor](../iot-hub/monitor-iot-hub.md). | Yes |
+| Set up diagnostics | Closely watch your operations by logging events in your solution, and then sending the diagnostic logs to Azure Monitor to get visibility into the performance. Read [Monitor and diagnose problems in your IoT hub](../iot-hub/monitor-iot-hub.md) for more information. | Yes |
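One way to act on the "metrics in Azure Monitor" row programmatically is sketched below, assuming the `azure-mgmt-monitor` and `azure-identity` packages; the subscription, resource IDs, metric name, and time span are placeholders, and the exact client signature can vary between package versions:

```python
# A sketch: pull an IoT Hub metric from Azure Monitor.
# Subscription, resource group, hub name, and time span below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

subscription_id = "<subscription-id>"
resource_uri = (
    "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
    "/providers/Microsoft.Devices/IotHubs/<hub-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), subscription_id)

metrics = client.metrics.list(
    resource_uri,
    timespan="2021-08-01T00:00:00Z/2021-08-02T00:00:00Z",
    interval="PT1H",
    metricnames="d2c.telemetry.ingress.success",
    aggregation="Total",
)

for metric in metrics.value:
    for series in metric.timeseries:
        for point in series.data:
            print(point.time_stamp, point.total)
```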
## Next steps
load-balancer Egress Only https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/load-balancer/egress-only.md
This configuration provides outbound NAT for an internal load balancer scenario,
*Figure: Egress only load balancer configuration*
-In this how-to article, you'll:
+## Prerequisites
-1. Create a virtual network with a bastion host.
-
-2. Create both internal and public standard load balancers with backend pools.
-
-3. Create a virtual machine with only a private IP and add to the internal load balancer backend pool.
-
-4. Add virtual machine to public load balancer backend pool.
-
-5. Connect to your VM through the bastion host and:
-
- 1. Test outbound connectivity,
-
- 2. Configure an outbound rule on the public load balancer.
-
- 3. Retest outbound connectivity.
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
## Create virtual network and load balancers
In this section, you'll add the virtual machine you created previously to the ba
## Clean up resources
-When no longer needed, delete the resource group, load Balancers, VM, and all related resources.
+When no longer needed, delete the resource group, load balancers, VM, and all related resources.
To do so, select the resource group **myResourceGroupLB** and then select **Delete**.
## Next steps
-In this tutorial, you created an "egress only" configuration with a combination of public and internal load balancers.
+In this article, you created an "egress only" configuration with a combination of public and internal load balancers.
This configuration allows you to load balance incoming internal traffic to your backend pool while still preventing any public inbound connections.
-- Learn about [Azure Load Balancer](load-balancer-overview.md).
-- Learn about [outbound connections in Azure](load-balancer-outbound-connections.md).
-- Load balancer [FAQs](load-balancer-faqs.yml).
-- Learn about [Azure Bastion](../bastion/bastion-overview.md)
+For more information about Azure Load Balancer and Azure Bastion, see [What is Azure Load Balancer?](load-balancer-overview.md) and [What is Azure Bastion?](../bastion/bastion-overview.md).
+
machine-learning Ai Gallery Control Personal Data Dsr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/ai-gallery-control-personal-data-dsr.md
# View and delete in-product user data from Azure AI Gallery
You can view and delete your in-product user data from Azure AI Gallery using the interface or AI Gallery Catalog API. This article tells you how.
machine-learning Algorithm Parameters Optimize https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/algorithm-parameters-optimize.md
Last updated 11/29/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio)
-This topic describes how to choose the right hyperparameter set for an algorithm in Machine Learning Studio (classic). Most machine learning algorithms have parameters to set. When you train a model, you need to provide values for those parameters. The efficacy of the trained model depends on the model parameters that you choose. The process of finding the optimal set of parameters is known as *model selection*.
-
+This topic describes how to choose the right hyperparameter set for an algorithm in Machine Learning Studio (classic). Most machine learning algorithms have parameters to set. When you train a model, you need to provide values for those parameters. The efficacy of the trained model depends on the model parameters that you choose. The process of finding the optimal set of parameters is known as *model selection*.
There are various ways to do model selection. In machine learning, cross-validation is one of the most widely used methods for model selection, and it is the default model selection mechanism in Machine Learning Studio (classic). Because Machine Learning Studio (classic) supports both R and Python, you can always implement their own model selection mechanisms by using either R or Python.
machine-learning Azure Ml Netsharp Reference Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/azure-ml-netsharp-reference-guide.md
Last updated 03/01/2018
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + Net# is a language developed by Microsoft that is used to define complex neural network architectures such as deep neural networks or convolutions of arbitrary dimensions. You can use complex structures to improve learning on data such as image, video, or audio. You can use a Net# architecture specification in all neural network modules in Machine Learning Studio (classic):
machine-learning Consume Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/consume-web-services.md
Last updated 05/29/2020
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Once you deploy a Machine Learning Studio (classic) predictive model as a web service, you can use a REST API to send it data and get predictions. You can send the data in real time or in batch mode.
machine-learning Consuming From Excel https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/consuming-from-excel.md
Last updated 02/01/2018
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + Machine Learning Studio (classic) makes it easy to call web services directly from Excel without the need to write any code. If you are using Excel 2013 (or later) or Excel Online, then we recommend that you use the [Excel add-in](excel-add-in-for-web-services.md). --
## Steps
Publish a web service. [Tutorial 3: Deploy credit risk model](tutorial-part3-credit-risk-deploy.md) explains how to do it. Currently, the Excel workbook feature is only supported for Request/Response services that have a single output (that is, a single scoring label).
machine-learning Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-endpoint.md
Last updated 02/15/2019
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) -
-> [!NOTE]
-> This topic describes techniques applicable to a **Classic** Machine Learning web service.
After a web service is deployed, a default endpoint is created for that service. The default endpoint can be called by using its API key. You can add more endpoints with their own keys from the Web Services portal. Each endpoint in the web service is independently addressed, throttled, and managed. Each endpoint is a unique URL with an authorization key that you can distribute to your customers.
machine-learning Create Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-experiment.md
Title: 'ML Studio (classic): Quickstart: Create a data science experiment - Azure'
+ Title: 'ML Studio (classic): Create a data science experiment - Azure'
description: This machine learning quickstart walks you through an easy data science experiment. We'll predict the price of a car using a regression algorithm.
Last updated 02/06/2019
#Customer intent: As a citizen data scientist, I want to learn how to create a data science experiment so that I can do the same process to answer my own data science questions.
-# Quickstart: Create your first data science experiment in Machine Learning Studio (classic)
+# Create your first data science experiment in Machine Learning Studio (classic)
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) -
-In this quickstart, you create a machine learning experiment in [Machine Learning Studio (classic)](../overview-what-is-machine-learning-studio.md#ml-studio-classic-vs-azure-machine-learning-studio) that predicts the price of a car based on different variables such as make and technical specifications.
+In this article, you create a machine learning experiment in [Machine Learning Studio (classic)](../overview-what-is-machine-learning-studio.md#ml-studio-classic-vs-azure-machine-learning-studio) that predicts the price of a car based on different variables such as make and technical specifications.
If you're brand new to machine learning, the video series [Data Science for Beginners](data-science-for-beginners-the-5-questions-data-science-answers.md) is a great introduction to machine learning using everyday language and concepts.
machine-learning Create Models And Endpoints With Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-models-and-endpoints-with-powershell.md
Last updated 04/04/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + Here's a common machine learning problem: You want to create many models that have the same training workflow and use the same algorithm. But you want them to have different training datasets as input. This article shows you how to do this at scale in Machine Learning Studio (classic) using just a single experiment. For example, let's say you own a global bike rental franchise business. You want to build a regression model to predict the rental demand based on historic data. You have 1,000 rental locations across the world and you've collected a dataset for each location. They include important features such as date, time, weather, and traffic that are specific to each location.
machine-learning Create Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/create-workspace.md
Last updated 12/07/2017
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + To use Machine Learning Studio (classic), you need to have a Machine Learning Studio (classic) workspace. This workspace contains the tools you need to create, manage, and publish experiments. ## Create a Studio (classic) workspace
machine-learning Custom R Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/custom-r-modules.md
Last updated 11/29/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + This topic describes how to author and deploy a custom R module in Machine Learning Studio (classic). It explains what custom R modules are and what files are used to define them. It illustrates how to construct the files that define a module and how to register the module for deployment in a Machine Learning workspace. The elements and attributes used in the definition of the custom module are then described in more detail. How to use auxiliary functionality and files and multiple outputs is also discussed. A **custom module** is a user-defined module that can be uploaded to your workspace and executed as part of a Machine Learning Studio (classic) experiment. A **custom R module** is a custom module that executes a user-defined R function. **R** is a programming language for statistical computing and graphics that is widely used by statisticians and data scientists for implementing algorithms. Currently, R is the only language supported in custom modules, but support for additional languages is scheduled for future releases.
machine-learning Data Science For Beginners Ask A Question You Can Answer With Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-ask-a-question-you-can-answer-with-data.md
Last updated 03/22/2019 # Ask a question you can answer with data++ ## Video 3: Data Science for Beginners series+ Learn how to formulate a data science problem into a question in Data Science for Beginners video 3. This video includes a comparison of questions for classification and regression algorithms. To get the most out of the series, watch them all. [Go to the list of videos](#other-videos-in-this-series)
machine-learning Data Science For Beginners Copy Other Peoples Work To Do Data Science https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-copy-other-peoples-work-to-do-data-science.md
Last updated 03/22/2019 # Copy other people's work to do data science++ ## Video 5: Data Science for Beginners series One of the trade secrets of data science is getting other people to do your work for you. Find a clustering algorithm example in Azure AI Gallery to use for your own machine learning experiment.
machine-learning Data Science For Beginners Is Your Data Ready For Data Science https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-is-your-data-ready-for-data-science.md
Last updated 03/22/2019 # Is your data ready for data science?++ ## Video 2: Data Science for Beginners series Learn how to evaluate your data to make sure it meets basic criteria to be ready for data science.
machine-learning Data Science For Beginners Predict An Answer With A Simple Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-predict-an-answer-with-a-simple-model.md
Last updated 03/22/2019
# Predict an answer with a simple model ## Video 4: Data Science for Beginners series Learn how to create a simple regression model to predict the price of a diamond in Data Science for Beginners video 4. We'll draw a regression model with target data.
machine-learning Data Science For Beginners The 5 Questions Data Science Answers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/data-science-for-beginners-the-5-questions-data-science-answers.md
Last updated 03/22/2019
# Data Science for Beginners video 1: The 5 questions data science answers + Get a quick introduction to data science from *Data Science for Beginners* in five short videos from a top data scientist. These videos are basic but useful, whether you're interested in doing data science or you work with data scientists. This first video is about the kinds of questions that data science can answer. To get the most out of the series, watch them all. [Go to the list of videos](#other-videos-in-this-series)
machine-learning Deploy A Machine Learning Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-a-machine-learning-web-service.md
Last updated 01/06/2017
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Machine Learning Studio (classic) enables you to build and test a predictive analytic solution. Then you can deploy the solution as a web service.
machine-learning Deploy Consume Web Service Guide https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-consume-web-service-guide.md
Last updated 04/19/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + You can use Machine Learning Studio (classic) to deploy machine learning workflows and models as web services. These web services can then be used to call the machine learning models from applications over the Internet to do predictions in real time or in batch mode. Because the web services are RESTful, you can call them from various programming languages and platforms, such as .NET and Java, and from applications, such as Excel. The next sections provide links to walkthroughs, code, and documentation to help get you started.
machine-learning Deploy With Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/deploy-with-resource-manager-template.md
Last updated 02/05/2018
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + Using an Azure Resource Manager deployment template saves you time by giving you a scalable way to deploy interconnected components with a validation and retry mechanism. To set up Machine Learning Studio (classic) Workspaces, for example, you need to first configure an Azure storage account and then deploy your workspace. Imagine doing this manually for hundreds of workspaces. An easier alternative is to use an Azure Resource Manager template to deploy a Studio (classic) Workspace and all its dependencies. This article takes you through this process step-by-step. For a great overview of Azure Resource Manager, see [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md). [!INCLUDE [updated-for-az](../../../includes/updated-for-az.md)]
machine-learning Evaluate Model Performance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/evaluate-model-performance.md
Last updated 03/20/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) In this article, you can learn about the metrics you can use to monitor model performance in Machine Learning Studio (classic). Evaluating the performance of a model is one of the core stages in the data science process. It indicates how successful the scoring (predictions) of a dataset has been by a trained model. Machine Learning Studio (classic) supports model evaluation through two of its main machine learning modules: + [Evaluate Model][evaluate-model]
machine-learning Excel Add In For Web Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/excel-add-in-for-web-services.md
Last updated 02/01/2018
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Excel makes it easy to call web services directly without the need to write any code.
machine-learning Execute Python Scripts https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/execute-python-scripts.md
Last updated 03/12/2019
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Python is a valuable tool in the tool chest of many data scientists. It's used in every stage of typical machine learning workflows including data exploration, feature extraction, model training and validation, and deployment.
machine-learning Export Delete Personal Data Dsr https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/export-delete-personal-data-dsr.md
Last updated 05/25/2018
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) -- You can delete or export in-product data stored by Machine Learning Studio (classic) by using the Azure portal, the Studio (classic) interface, PowerShell, and authenticated REST APIs. This article tells you how.
machine-learning Gallery How To Use Contribute Publish https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/gallery-how-to-use-contribute-publish.md
Last updated 01/11/2019
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) -- **[Azure AI Gallery](https://gallery.azure.ai)** is a community-driven site for discovering and sharing solutions built with Azure AI. The Gallery has a variety of resources that you can use to develop your own analytics solutions.
machine-learning Import Data https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/import-data.md
Last updated 02/01/2019
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) To use your own data in Machine Learning Studio (classic) to develop and train a predictive analytics solution, you can use data from:
machine-learning Interpret Model Results https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/interpret-model-results.md
Last updated 11/29/2017
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + This topic explains how to visualize and interpret prediction results in Machine Learning Studio (classic). After you have trained a model and done predictions on top of it ("scored the model"), you need to understand and interpret the prediction result. There are four major kinds of machine learning models in Machine Learning Studio (classic):
machine-learning Manage Experiment Iterations https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-experiment-iterations.md
Last updated 03/20/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Developing a predictive analysis model is an iterative process - as you modify the various functions and parameters of your experiment, your results converge until you are satisfied that you have a trained, effective model. Key to this process is tracking the various iterations of your experiment parameters and configurations.
machine-learning Manage New Webservice https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-new-webservice.md
Last updated 02/28/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) You can manage your Machine Learning Studio (classic) web services using the Machine Learning Web Services portal.
machine-learning Manage Web Service Endpoints Using Api Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-web-service-endpoints-using-api-management.md
Last updated 11/03/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) ## Overview This guide shows you how to quickly get started using API Management to manage your Machine Learning Studio (classic) web services.
machine-learning Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/manage-workspace.md
Last updated 02/27/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) > [!NOTE] > For information on managing Web services in the Machine Learning Web Services portal, see [Manage a Web service using the Machine Learning Web Services portal](manage-new-webservice.md).
machine-learning Model Progression Experiment To Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/model-progression-experiment-to-web-service.md
Last updated 03/20/2017
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + Machine Learning Studio (classic) provides an interactive canvas that allows you to develop, run, test, and iterate an ***experiment*** representing a predictive analysis model. There are a wide variety of modules available that can: * Input data into your experiment
machine-learning Powershell Module https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/powershell-module.md
Last updated 04/25/2019
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Using PowerShell modules, you can programmatically manage your Studio (classic) resources and assets such as workspaces, datasets, and web services.
machine-learning R Get Started https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/r-get-started.md
Last updated 03/01/2019
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + <!-- Stephen F Elston, Ph.D. --> In this tutorial, you learn how to use Machine Learning Studio (classic) to create, test, and execute R code. In the end, you'll have a complete forecasting solution.
machine-learning Retrain Classic Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/retrain-classic-web-service.md
Last updated 02/14/2019
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Retraining machine learning models is one way to ensure they stay accurate and based on the most relevant data available. This article will show you how to retrain a classic Studio (classic) web service. For a guide on how to retrain a new Studio (classic) web service, [view this how-to article.](retrain-machine-learning-model.md)
machine-learning Retrain Machine Learning Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/retrain-machine-learning-model.md
Title: 'ML Studio (classic): retrain a web service - Azure'
+ Title: 'ML Studio (classic): Retrain a web service - Azure'
description: Learn how to update a web service to use a newly trained machine learning model in Machine Learning Studio (classic).
Last updated 02/14/2019
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Retraining is one way to ensure machine learning models stay accurate and based on the most relevant data available. This article shows how to retrain and deploy a machine learning model as a new web service in Studio (classic). If you're looking to retrain a classic web service, [view this how-to article.](retrain-classic-web-service.md)
machine-learning Sample Experiments https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/sample-experiments.md
Title: 'ML Studio (classic): start experiments from examples - Azure'
+ Title: 'ML Studio (classic): Start experiments from examples - Azure'
description: Learn how to use example machine learning experiments to create new experiments with Azure AI Gallery and Machine Learning Studio (classic).
Last updated 01/05/2018
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) -- Learn how to start with example experiments from [Azure AI Gallery](https://gallery.azure.ai/) instead of creating machine learning experiments from scratch. You can use the examples to build your own machine learning solution.
machine-learning Studio Classic Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/studio-classic-overview.md
Last updated 08/19/2020
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Machine Learning Studio (classic) is a drag-and-drop tool that you can use to build, test, and deploy machine learning models. Studio (classic) publishes models as web services, which can easily be consumed by custom apps or BI tools such as Excel.
machine-learning Support Aml Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/support-aml-studio.md
Last updated 01/18/2019
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) -- This article provides information on how to learn more about Machine Learning Studio (classic) and get support for your issues and questions.
machine-learning Tutorial Part1 Credit Risk https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part1-credit-risk.md
Last updated 02/11/2019
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as a Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part one of a three-part tutorial series**.
machine-learning Tutorial Part2 Credit Risk Train https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part2-credit-risk-train.md
Last updated 02/11/2019
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as a Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part two of a three-part tutorial series**. Suppose you need to predict an individual's credit risk based on the information they gave on a credit application.
machine-learning Tutorial Part3 Credit Risk Deploy https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/tutorial-part3-credit-risk-deploy.md
Last updated 07/27/2020
**APPLIES TO:** ![This is a check mark, which means that this article applies to Machine Learning Studio (classic).](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) + In this tutorial, you take an extended look at the process of developing a predictive analytics solution. You develop a simple model in Machine Learning Studio (classic). You then deploy the model as a Machine Learning web service. This deployed model can make predictions using new data. This tutorial is **part three of a three-part tutorial series**. Suppose you need to predict an individual's credit risk based on the information they gave on a credit application.
machine-learning Use Data From An On Premises Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/use-data-from-an-on-premises-sql-server.md
Last updated 03/13/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Often enterprises that work with on-premises data would like to take advantage of the scale and agility of the cloud for their machine learning workloads. But they don't want to disrupt their current business processes and workflows by moving their on-premises data to the cloud. Machine Learning Studio (classic) now supports reading your data from a SQL Server database and then training and scoring a model with this data. You no longer have to manually copy and sync the data between the cloud and your on-premises server. Instead, the **Import Data** module in Machine Learning Studio (classic) can now read directly from your SQL Server database for your training and scoring jobs.
machine-learning Use Sample Datasets https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/use-sample-datasets.md
Last updated 01/19/2018
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) [top]: #machine-learning-sample-datasets
machine-learning Version Control https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/version-control.md
Last updated 10/27/2016
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) Machine Learning Studio (classic) is a tool for developing machine learning experiments that are operationalized in the Azure cloud platform. It is like the Visual Studio IDE and scalable cloud service merged into a single platform. You can incorporate standard Application Lifecycle Management (ALM) practices from versioning various assets to automated execution and deployment, into Machine Learning Studio (classic). This article discusses some of the options and approaches.
machine-learning Web Service Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-service-error-codes.md
Last updated 11/16/2016
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) The following error codes could be returned by an operation on an Azure Machine Learning Studio (classic) web service.
machine-learning Web Service Parameters https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-service-parameters.md
Last updated 01/12/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) A Machine Learning web service is created by publishing an experiment that contains modules with configurable parameters. In some cases, you may want to change the module behavior while the web service is running. *Web Service Parameters* allow you to do this task.
machine-learning Web Services Logging https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-services-logging.md
Last updated 06/15/2017
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) This document provides information on the logging capability of Machine Learning Studio (classic) web services. Logging provides additional information, beyond just an error number and a message, that can help you troubleshoot your calls to the Machine Learning Studio (classic) APIs.
machine-learning Web Services That Use Import Export Modules https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/classic/web-services-that-use-import-export-modules.md
**APPLIES TO:** ![Applies to.](../../../includes/medi#ml-studio-classic-vs-azure-machine-learning-studio) When you create a predictive experiment, you typically add a web service input and output. When you deploy the experiment, consumers can send and receive data from the web service through the inputs and outputs. For some applications, a consumer's data may be available from a data feed or already reside in an external data source such as Azure Blob storage. In these cases, they do not need to read and write data using web service inputs and outputs. They can, instead, use the Batch Execution Service (BES) to read data from the data source using an Import Data module and write the scoring results to a different data location using an Export Data module.
machine-learning How To Configure Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-configure-cli.md
Title: 'Install, set up, and use the CLI (v2)'
+ Title: 'Install and set up the CLI (v2)'
-description: Learn how to install, set up, and use the Azure CLI extension for Machine Learning.
+description: Learn how to install and set up the Azure CLI extension for Machine Learning.
-# Install, set up, and use the CLI (v2)
+# Install and set up the CLI (v2)
The `ml` extension (preview) to the [Azure CLI](/cli/azure/) is the enhanced interface for Azure Machine Learning. It enables you to train and deploy models from the command line, with features that accelerate scaling data science up and out while tracking the model lifecycle.
machine-learning How To Secure Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-secure-web-service.md
For either AKS deployment with custom certificate or ACI deployment, you must up
> When you use a certificate from Microsoft for AKS deployment, you don't need to manually update the DNS value for the cluster. The value should be set automatically. You can follow the following steps to update the DNS record for your custom domain name:
-1. Get scoring endpoint IP address from scoring endpoint URI, which is usually in the format of *http://104.214.29.152:80/api/v1/service/service-name/score*. In this example, the IP address is 104.214.29.152.
+1. Get scoring endpoint IP address from scoring endpoint URI, which is usually in the format of `http://104.214.29.152:80/api/v1/service/<service-name>/score`. In this example, the IP address is 104.214.29.152.
1. Use the tools from your domain name registrar to update the DNS record for your domain name. The record maps the FQDN (for example, www\.contoso.com) to the IP address. The record must point to the IP address of the scoring endpoint. > [!TIP]
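If it helps, here is a tiny Python sketch (an illustration, not part of the original steps) that pulls the IP address out of a scoring URI so you can paste it into the DNS record; the service name is a placeholder:

```python
# Extract the host IP from a scoring endpoint URI (placeholder service name).
from urllib.parse import urlparse

scoring_uri = "http://104.214.29.152:80/api/v1/service/<service-name>/score"
parsed = urlparse(scoring_uri)

print(parsed.hostname)  # 104.214.29.152 -> value for the DNS A record
print(parsed.port)      # 80
```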
machine-learning How To Use Batch Endpoints Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/how-to-use-batch-endpoints-studio.md
+
+ Title: 'How to use batch endpoints in studio'
+
+description: In this article, learn how to create a batch endpoint in Azure Machine Learning studio. Batch endpoints are used to continuously batch score large data.
+++++++ Last updated : 08/16/2021+++
+# How to use batch endpoints (preview) in Azure Machine Learning studio
+
+In this article, you learn how to use batch endpoints (preview) to do batch scoring in [Azure Machine Learning studio](https://ml.azure.com). For more, see [What are Azure Machine Learning endpoints (preview)?](concept-endpoints.md).
+
+In this article, you learn about:
+
+> [!div class="checklist"]
+> * Create a batch endpoint with a no-code experience for MLflow model
+> * Check batch endpoint details
+> * Start a batch scoring job
+> * Overview of batch endpoint features in Azure Machine Learning studio
++
+## Prerequisites
+
+* An Azure subscription - If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
+
+* The example repository - Clone the [AzureML Example repository](https://github.com/Azure/azureml-examples). This article uses the assets in `/cli/endpoints/batch`.
+
+* A compute target where you can run batch scoring workflows. For more information on creating a compute target, see [Create compute targets in Azure Machine Learning studio](how-to-create-attach-compute-studio.md).
+
+* A registered machine learning model. If you don't have one, a minimal Python SDK sketch for registering a model follows this list.
+
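As a minimal sketch (assuming the Azure Machine Learning Python SDK v1 and a workspace `config.json`; the model path and name are placeholders), you can register a local model file like this:

```python
# Register a local model file so it can be deployed to a batch endpoint.
# The model path and name below are placeholders.
from azureml.core import Workspace, Model

ws = Workspace.from_config()  # reads config.json for the workspace

model = Model.register(
    workspace=ws,
    model_path="model/sklearn_regression_model.pkl",
    model_name="my-batch-model",
    description="Model registered for batch scoring",
)
print(model.name, model.version)
```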
+## Create a batch endpoint
+
+There are two ways to create Batch Endpoints in Azure Machine Learning studio:
+
+* From the **Endpoints** page, select **Batch Endpoints** and then select **+ Create**.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/create-batch-endpoints.png" alt-text="Screenshot of creating a batch endpoint/deployment from Endpoints page":::
+
+OR
+
+* From the **Models** page, select the model you want to deploy and then select **Deploy to batch endpoint (preview)**.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/models-page-deployment.png" alt-text="Screenshot of creating a batch endpoint/deployment from Models page":::
+
+> [!TIP]
+> If you're using an MLflow model, you can use no-code batch endpoint creation. That is, you don't need to prepare a scoring script and environment, both can be auto generated. For more, see [Train and track ML models with MLflow and Azure Machine Learning (preview)](how-to-use-mlflow.md).
+>
+> :::image type="content" source="media/how-to-use-batch-endpoints-studio/mlflow-model-wizard.png" alt-text="Screenshot of deploying an MLflow model":::
+
+Complete all the steps in the wizard to create a batch endpoint and deployment.
++
+## Check batch endpoint details
+
+After a batch endpoint is created, select it from the **Endpoints** page to view the details.
++
+## Start a batch scoring job
+
+A batch scoring workload runs as an offline job. By default, batch scoring stores the scoring outputs in blob storage. You can also configure the output location and overwrite some of the settings to get the best performance.
+
+1. Select **+ Create job**:
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring":::
+
+1. You can update the default deployment while submitting a job from the drop-down:
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job":::
+
+### Overwrite settings
+
+Some settings can be overwritten when you start a batch scoring job. For example, you might overwrite settings to make better use of the compute resource, or to improve performance. To override settings, select __Override deployment settings__ and provide the settings. For more information, see [Use batch endpoints](how-to-use-batch-endpoint.md#overwrite-settings).
++
+### Start a batch scoring job with different input options
+
+You have two options to specify the data inputs in Azure Machine Learning studio:
+
+* Use a **registered dataset** (a registration sketch follows these options):
+
+ > [!NOTE]
+ > During Preview, only FileDataset is supported.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/select-dataset-for-job.png" alt-text="Screenshot of selecting registered dataset as an input option":::
+
+OR
+
+* Use a **datastore**:
+
+   You can specify an Azure Machine Learning registered datastore or, if your data is publicly available, specify the public path.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option":::
+
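For the registered dataset option above, here is a minimal sketch (assuming the `azureml-core` SDK v1; the datastore name and path are illustrative) that registers a FileDataset you can then pick in the job wizard:

```python
# Register a FileDataset to use as batch scoring input.
# The datastore name and path below are placeholders.
from azureml.core import Workspace, Dataset, Datastore

ws = Workspace.from_config()
datastore = Datastore.get(ws, "workspaceblobstore")

input_ds = Dataset.File.from_files(path=(datastore, "batch-input/**"))
input_ds = input_ds.register(
    workspace=ws,
    name="batch-scoring-input",
    description="Files to score with the batch endpoint",
)
```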
+### Configure the output location
+
+By default, the batch scoring results are stored in the default blob store for the workspace. Results are in a folder named after the job name (a system-generated GUID).
+
+To change where the results are stored, provide a blob store and output path when you start a job.
+
+> [!IMPORTANT]
+> You must use a unique output location. If the output file exists, the batch scoring job will fail.
++
+### Summary of all submitted jobs
+
+To see a summary of all the submitted jobs for an endpoint, select the endpoint and then select the **Runs** tab.
+
+## Check batch scoring results
+
+To learn how to view the scoring results, see [Use batch endpoints](how-to-use-batch-endpoint.md#check-batch-scoring-results).
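As a hedged illustration (the container name and folder layout are assumptions based on the default behavior described above), you could also pull the output files with the `azure-storage-blob` package:

```python
# Download batch scoring outputs from the workspace's default blob storage.
# Connection string, container name, and job name are placeholders.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<storage-connection-string>")
container = service.get_container_client("<default-container>")

job_name = "<job-name>"  # results land in a folder named after the job
for blob in container.list_blobs(name_starts_with=job_name):
    content = container.download_blob(blob.name).readall()
    print(blob.name, len(content), "bytes")
```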
+
+## Add a deployment to an existing batch endpoint
+
+In Azure Machine Learning studio, there are two ways to add a deployment to an existing batch endpoint:
+
+* From the **Endpoints** page, select the batch endpoint to add a new deployment to. Select **+ Add deployment**, and complete the wizard to add a new deployment.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option":::
+
+OR
+
+* From the **Models** page, select the model you want to deploy. Then select **Deploy to batch endpoint (preview)** option from the drop-down. In the wizard, on the **Endpoint** screen, select **Existing**. Complete the wizard to add the new deployment.
+
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/add-deployment-models-page.png" alt-text="Screenshot of selecting an existing batch endpoint to add new deployment":::
+
+## Update the default deployment
+
+If an endpoint has multiple deployments, one of the deployments is the *default*. The default deployment receives 100% of the traffic to the endpoint. To change the default deployment, use the following steps:
+
+1. Select the endpoint from the **Endpoints** page.
+1. Select **Update default deployment**. From the **Details** tab, select the deployment you want to set as default and then select **Update**.
+ :::image type="content" source="media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment":::
+
+## Delete batch endpoint and deployments
+
+To delete an **endpoint**, select the endpoint from the **Endpoints** page and then select delete.
+
+> [!WARNING]
+> Deleting an endpoint also deletes all deployments to that endpoint.
+
+To delete a **deployment**, select the endpoint from the **Endpoints** page, select the deployment, and then select delete.
+
+## Next steps
+
+In this article, you learned how to create and call batch endpoints. See these other articles to learn more about Azure Machine Learning:
+
+* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md)
+* [Deploy and score a machine learning model with a managed online endpoint (preview)](how-to-deploy-managed-online-endpoints.md)
machine-learning Migrate Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-execute-r-script.md
+
+ Title: 'ML Studio (classic): Migrate Execute R Script'
+description: Rebuild Studio (classic) Execute R script modules to run on Azure Machine Learning.
+++++++ Last updated : 03/08/2021+++
+# Migrate Execute R Script modules in Studio (classic)
++
+In this article, you learn how to rebuild a Studio (classic) **Execute R Script** module in Azure Machine Learning.
+
+For more information on migrating from Studio (classic), see the [migration overview article](migrate-overview.md).
+
+## Execute R Script
+
+Azure Machine Learning designer now runs on Linux. Studio (classic) runs on Windows. Due to the platform change, you must adjust your **Execute R Script** modules during migration; otherwise, the pipeline will fail.
+
+To migrate an **Execute R Script** module from Studio (classic), you must replace the `maml.mapInputPort` and `maml.mapOutputPort` interfaces with standard functions.
+
+The following table summarizes the changes to the R Script module:
+
+|Feature|Studio (classic)|Azure Machine Learning designer|
+||||
+|Script Interface|`maml.mapInputPort` and `maml.mapOutputPort`|Function interface|
+|Platform|Windows|Linux|
+|Internet Accessible |No|Yes|
+|Memory|14 GB|Dependent on Compute SKU|
+
+### How to update the R script interface
+
+Here are the contents of a sample **Execute R Script** module in Studio (classic):
+```r
+# Map 1-based optional input ports to variables
+dataset1 <- maml.mapInputPort(1) # class: data.frame
+dataset2 <- maml.mapInputPort(2) # class: data.frame
+
+# Contents of optional Zip port are in ./src/
+# source("src/yourfile.R");
+# load("src/yourData.rdata");
+
+# Sample operation
+data.set = rbind(dataset1, dataset2);
+
+
+# You'll see this output in the R Device port.
+# It'll have your stdout, stderr and PNG graphics device(s).
+
+plot(data.set);
+
+# Select data.frame to be sent to the output Dataset port
+maml.mapOutputPort("data.set");
+```
+
+Here are the updated contents in the designer. Notice that the `maml.mapInputPort` and `maml.mapOutputPort` have been replaced with the standard function interface `azureml_main`.
+```r
+azureml_main <- function(dataframe1, dataframe2){
+    # Use the parameters dataframe1 and dataframe2 directly
+    dataset1 <- dataframe1
+    dataset2 <- dataframe2
+
+    # Contents of optional Zip port are in ./src/
+    # source("src/yourfile.R");
+    # load("src/yourData.rdata");
+
+    # Sample operation
+    data.set = rbind(dataset1, dataset2);
++
+    # You'll see this output in the R Device port.
+    # It'll have your stdout, stderr and PNG graphics device(s).
+    plot(data.set);
+
+  # Return datasets as a Named List
+
+  return(list(dataset1=data.set))
+}
+```
+For more information, see the designer [Execute R Script module reference](/algorithm-module-reference/execute-r-script.md).
+
+### Install R packages from the internet
+
+Azure Machine Learning designer lets you install packages directly from CRAN.
+
+This is an improvement over Studio (classic). Since Studio (classic) runs in a sandbox environment with no internet access, you had to upload scripts in a zip bundle to install more packages.
+
+Use the following code to install CRAN packages in the designer's **Execute R Script** module:
+```r
+  if(!require(zoo)) {
+      install.packages("zoo",repos = "http://cran.us.r-project.org")
+  }
+  library(zoo)
+```
+
+## Next steps
+
+In this article, you learned how to migrate Execute R Script modules to Azure Machine Learning.
+
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate a Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. **Migrate Execute R Script modules**.
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-overview.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning'
+description: Migrate from Studio (classic) to Azure Machine Learning for a modernized data science platform.
+++++++ Last updated : 08/23/2021++
+# Migrate to Azure Machine Learning
+
+> [!IMPORTANT]
+> Support for Machine Learning Studio (classic) will end on 31 August 2024. We recommend you transition to [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning) by that date.
+>
+> Beginning 1 December 2021, you will not be able to create new Machine Learning Studio (classic) resources. Through 31 August 2024, you can continue to use the existing Machine Learning Studio (classic) resources.
+>
+> ML Studio (classic) documentation is being retired and may not be updated in the future.
+
+Learn how to migrate from Studio (classic) to Azure Machine Learning. Azure Machine Learning provides a modernized data science platform that combines no-code and code-first approaches.
+
+This guide covers a basic "lift and shift" migration. If you want to optimize an existing machine learning workflow, or modernize an ML platform, see the [Azure Machine Learning adoption framework](https://aka.ms/mlstudio-classic-migration-repo) for additional resources, including digital survey tools, worksheets, and planning templates.
+
+![Azure ML adoption framework](./media/migrate-overview/aml-adoption-framework.png)
+
+## Recommended approach
+
+To migrate to Azure Machine Learning, we recommend the following approach:
+
+> [!div class="checklist"]
+> * Step 1: Assess Azure Machine Learning
+> * Step 2: Define a strategy and plan
+> * Step 3: Rebuild experiments and web services
+> * Step 4: Integrate client apps
+> * Step 5: Clean up Studio (classic) assets
+> * Step 6: Review and expand scenarios
++
+## Step 1: Assess Azure Machine Learning
+1. Learn about [Azure Machine Learning](https://azure.microsoft.com/services/machine-learning/); its benefits, costs, and architecture.
+
+1. Compare the capabilities of Azure Machine Learning and Studio (classic).
+
+ >[!NOTE]
+ > The **designer** feature in Azure Machine Learning provides a similar drag-and-drop experience to Studio (classic). However, Azure Machine Learning also provides robust [code-first workflows](concept-model-management-and-deployment.md) as an alternative. This migration series focuses on the designer, since it's most similar to the Studio (classic) experience.
+
+ [!INCLUDE [aml-compare-classic](../../includes/machine-learning-compare-classic-aml.md)]
+
+3. Verify that your critical Studio (classic) modules are supported in Azure Machine Learning designer. For more information, see the [Studio (classic) and designer module-mapping](#studio-classic-and-designer-module-mapping) table below.
+
+4. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md?tabs=azure-portal).
+
+## Step 2: Define a strategy and plan
+
+1. Define business justifications and expected outcomes.
+1. Align an actionable Azure Machine Learning adoption plan to business outcomes.
+1. Prepare people, processes, and environments for change.
+
+See the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo) for planning resources including a planning doc template.
+
+## Step 3: Rebuild your first model
+
+After you've defined a strategy, migrate your first model.
+
+1. [Migrate datasets to Azure Machine Learning](migrate-register-dataset.md).
+1. Use the designer to [rebuild experiments](migrate-rebuild-experiment.md).
+1. Use the designer to [redeploy web services](migrate-rebuild-web-service.md).
+
+ >[!NOTE]
+ > Azure Machine Learning also supports code-first workflows for migrating [datasets](how-to-create-register-datasets.md), [training](how-to-set-up-training-targets.md), and [deployment](how-to-deploy-and-where.md).
+
+## Step 4: Integrate client apps
+
+1. Modify client applications that invoke Studio (classic) web services to use your new [Azure Machine Learning endpoints](migrate-rebuild-integrate-with-client-app.md).
+
+## Step 5: Clean up Studio (classic) assets
+
+1. [Clean up Studio (classic) assets](classic/export-delete-personal-data-dsr.md) to avoid extra charges. You may want to retain assets for fallback until you have validated Azure Machine Learning workloads.
+
+## Step 6: Review and expand scenarios
+
+1. Review the model migration for best practices and validate workloads.
+1. Expand scenarios and migrate additional workloads to Azure Machine Learning.
++
+## Studio (classic) and designer module-mapping
+
+Consult the following table to see which modules to use while rebuilding Studio (classic) experiments in the designer.
++
+> [!IMPORTANT]
+> The designer implements modules through open-source Python packages rather than C# packages like Studio (classic). Because of this difference, the output of designer modules may vary slightly from their Studio (classic) counterparts.
++
+|Category|Studio (classic) module|Replacement designer module|
+|--|-|--|
+|Data input and output|- Enter Data Manually </br> - Export Data </br> - Import Data </br> - Load Trained Model </br> - Unpack Zipped Datasets|- Enter Data Manually </br> - Export Data </br> - Import Data|
+|Data Format Conversions|- Convert to CSV </br> - Convert to Dataset </br> - Convert to ARFF </br> - Convert to SVMLight </br> - Convert to TSV|- Convert to CSV </br> - Convert to Dataset|
+|Data Transformation - Manipulation|- Add Columns</br> - Add Rows </br> - Apply SQL Transformation </br> - Clean Missing Data </br> - Convert to Indicator Values </br> - Edit Metadata </br> - Join Data </br> - Remove Duplicate Rows </br> - Select Columns in Dataset </br> - Select Columns Transform </br> - SMOTE </br> - Group Categorical Values|- Add Columns</br> - Add Rows </br> - Apply SQL Transformation </br> - Clean Missing Data </br> - Convert to Indicator Values </br> - Edit Metadata </br> - Join Data </br> - Remove Duplicate Rows </br> - Select Columns in Dataset </br> - Select Columns Transform </br> - SMOTE|
+|Data Transformation – Scale and Reduce |- Clip Values </br> - Group Data into Bins </br> - Normalize Data </br>- Principal Component Analysis |- Clip Values </br> - Group Data into Bins </br> - Normalize Data|
+|Data Transformation – Sample and Split|- Partition and Sample </br> - Split Data|- Partition and Sample </br> - Split Data|
+|Data Transformation – Filter |- Apply Filter </br> - FIR Filter </br> - IIR Filter </br> - Median Filter </br> - Moving Average Filter </br> - Threshold Filter </br> - User Defined Filter||
+|Data Transformation – Learning with Counts |- Build Counting Transform </br> - Export Count Table </br> - Import Count Table </br> - Merge Count Transform</br> - Modify Count Table Parameters||
+|Feature Selection |- Filter Based Feature Selection </br> - Fisher Linear Discriminant Analysis </br> - Permutation Feature Importance |- Filter Based Feature Selection </br> - Permutation Feature Importance|
+| Model - Classification| - Multiclass Decision Forest </br> - Multiclass Decision Jungle </br> - Multiclass Logistic Regression </br>- Multiclass Neural Network </br>- One-vs-All Multiclass </br>- Two-Class Averaged Perceptron </br>- Two-Class Bayes Point Machine </br>- Two-Class Boosted Decision Tree </br> - Two-Class Decision Forest </br> - Two-Class Decision Jungle </br> - Two-Class Locally-Deep SVM </br> - Two-Class Logistic Regression </br> - Two-Class Neural Network </br> - Two-Class Support Vector Machine | - Multiclass Decision Forest </br> - Multiclass Boosted Decision Tree </br> - Multiclass Logistic Regression </br> - Multiclass Neural Network </br> - One-vs-All Multiclass </br> - Two-Class Averaged Perceptron </br> - Two-Class Boosted Decision Tree </br> - Two-Class Decision Forest </br>- Two-Class Logistic Regression </br> - Two-Class Neural Network </br>- Two-Class Support Vector Machine |
+| Model - Clustering| - K-means clustering| - K-means clustering|
+| Model - Regression| - Bayesian Linear Regression </br> - Boosted Decision Tree Regression </br>- Decision Forest Regression </br> - Fast Forest Quantile Regression </br> - Linear Regression </br> - Neural Network Regression </br> - Ordinal Regression </br> - Poisson Regression| - Boosted Decision Tree Regression </br>- Decision Forest Regression </br> - Fast Forest Quantile Regression </br> - Linear Regression </br> - Neural Network Regression </br> - Poisson Regression|
+| Model – Anomaly Detection| - One-Class SVM </br> - PCA-Based Anomaly Detection | - PCA-Based Anomaly Detection|
+| Machine Learning – Evaluate | - Cross Validate Model </br>- Evaluate Model </br>- Evaluate Recommender | - Cross Validate Model </br>- Evaluate Model </br> - Evaluate Recommender|
+| Machine Learning – Train| - Sweep Clustering </br> - Train Anomaly Detection Model </br>- Train Clustering Model </br> - Train Matchbox Recommender </br> - Train Model </br>- Tune Model Hyperparameters| - Train Anomaly Detection Model </br> - Train Clustering Model </br> - Train Model </br> - Train PyTorch Model </br>- Train SVD Recommender </br>- Train Wide and Deep Recommender </br>- Tune Model Hyperparameters|
+| Machine Learning – Score| - Apply Transformation </br>- Assign Data to clusters </br>- Score Matchbox Recommender </br> - Score Model|- Apply Transformation </br> - Assign Data to clusters </br> - Score Image Model </br> - Score Model </br>- Score SVD Recommender </br> - Score Wide and Deep Recommender|
+| OpenCV Library Modules| - Import Images </br>- Pre-trained Cascade Image Classification | |
+| Python Language Modules| - Execute Python Script| - Execute Python Script </br> - Create Python Model |
+| R Language Modules | - Execute R Script </br> - Create R Model| - Execute R Script|
+| Statistical Functions | - Apply Math Operation </br>- Compute Elementary Statistics </br>- Compute Linear Correlation </br>- Evaluate Probability Function </br>- Replace Discrete Values </br>- Summarize Data </br>- Test Hypothesis using t-Test| - Apply Math Operation </br>- Summarize Data|
+| Text Analytics| - Detect Languages </br>- Extract Key Phrases from Text </br>- Extract N-Gram Features from Text </br>- Feature Hashing </br>- Latent Dirichlet Allocation </br>- Named Entity Recognition </br>- Preprocess Text </br>- Score Vowpal Wabbit Version 7-10 Model </br>- Score Vowpal Wabbit Version 8 Model </br>- Train Vowpal Wabbit Version 7-10 Model </br>- Train Vowpal Wabbit Version 8 Model |- Convert Word to Vector </br> - Extract N-Gram Features from Text </br>- Feature Hashing </br>- Latent Dirichlet Allocation </br>- Preprocess Text </br>- Score Vowpal Wabbit Model </br> - Train Vowpal Wabbit Model|
+| Time Series| - Time Series Anomaly Detection | |
+| Web Service | - Input </br> - Output | - Input </br> - Output|
+| Computer Vision| | - Apply Image Transformation </br> - Convert to Image Directory </br> - Init Image Transformation </br> - Split Image Directory </br> - DenseNet Image Classification </br>- ResNet Image Classification |
+
+For more information on how to use individual designer modules, see the [designer module reference](algorithm-module-reference/module-reference.md).
+
+### What if a designer module is missing?
+
+Azure Machine Learning designer contains the most popular modules from Studio (classic). It also includes new modules that take advantage of the latest machine learning techniques.
+
+If your migration is blocked due to missing modules in the designer, contact us by [creating a support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
+
+## Example migration
+
+The following experiment migration highlights some of the differences between Studio (classic) and Azure Machine Learning.
+
+### Datasets
+
+In Studio (classic), **datasets** were saved in your workspace and could only be used by Studio (classic).
+
+![automobile-price-classic-dataset](./media/migrate-overview/studio-classic-dataset.png)
+
+In Azure Machine Learning, **datasets** are registered to the workspace and can be used across all of Azure Machine Learning. For more information on the benefits of Azure Machine Learning datasets, see [Secure data access](concept-data.md#reference-data-in-storage-with-datasets).
+
+![automobile-price-aml-dataset](./media/migrate-overview/aml-dataset.png)
+
+### Pipeline
+
+In Studio (classic), **experiments** contained the processing logic for your work. You created experiments with drag-and-drop modules.
+
+![automobile-price-classic-experiment](./media/migrate-overview/studio-classic-experiment.png)
+
+In Azure Machine Learning, **pipelines** contain the processing logic for your work. You can create pipelines either with drag-and-drop modules or by writing code.
+
+![automobile-price-aml-pipeline](./media/migrate-overview/aml-pipeline.png)
+
+### Web service endpoint
+
+Studio (classic) used **REQUEST/RESPOND API** for real-time prediction and **BATCH EXECUTION API** for batch prediction or retraining.
+
+![automobile-price-classic-webservice](./media/migrate-overview/studio-classic-web-service.png)
+
+Azure Machine Learning uses **real-time endpoints** for real-time prediction and **pipeline endpoints** for batch prediction or retraining.
+
+![automobile-price-aml-endpoint](./media/migrate-overview/aml-endpoint.png)
+
+## Next steps
+
+In this article, you learned the high-level requirements for migrating to Azure Machine Learning. For detailed steps, see the other articles in the Studio (classic) migration series:
+
+1. **Migration overview**.
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
+
+See the [Azure Machine Learning Adoption Framework](https://aka.ms/mlstudio-classic-migration-repo) for additional migration resources.
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-rebuild-experiment.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild experiment'
+description: Rebuild Studio (classic) experiments in Azure Machine Learning designer.
+++++++ Last updated : 03/08/2021++
+# Rebuild a Studio (classic) experiment in Azure Machine Learning
++
+In this article, you learn how to rebuild an ML Studio (classic) experiment in Azure Machine Learning. For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
+
+Studio (classic) **experiments** are similar to **pipelines** in Azure Machine Learning. However, in Azure Machine Learning, pipelines are built on the same back end that powers the SDK. This means that you have two options for machine learning development: the drag-and-drop designer or code-first SDKs.
+
+For more information on building pipelines with the SDK, see [What are Azure Machine Learning pipelines](concept-ml-pipelines.md#building-pipelines-with-the-python-sdk).
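+
+For example, here's a minimal code-first sketch of a one-step pipeline built with the Python SDK. The script name, compute cluster, and experiment name are placeholder assumptions rather than values from your workspace:
+
+```python
+from azureml.core import Experiment, Workspace
+from azureml.pipeline.core import Pipeline
+from azureml.pipeline.steps import PythonScriptStep
+
+ws = Workspace.from_config()  # assumes a config.json downloaded from your workspace
+
+# One step that runs a training script on an existing compute cluster (placeholder names).
+train_step = PythonScriptStep(
+    name="train",
+    script_name="train.py",
+    source_directory=".",
+    compute_target="cpu-cluster",
+)
+
+pipeline = Pipeline(workspace=ws, steps=[train_step])
+run = Experiment(ws, "rebuilt-classic-experiment").submit(pipeline)
+run.wait_for_completion(show_output=True)
+```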
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- A Studio (classic) experiment to migrate.
+- [Upload your dataset](migrate-register-dataset.md) to Azure Machine Learning.
+
+## Rebuild the pipeline
+
+After you [migrate your dataset to Azure Machine Learning](migrate-register-dataset.md), you're ready to recreate your experiment.
+
+In Azure Machine Learning, the visual graph is called a **pipeline draft**. In this section, you recreate your classic experiment as a pipeline draft.
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com))
+1. In the left navigation pane, select **Designer** > **Easy-to-use prebuilt modules**
+ ![Screenshot showing how to create a new pipeline draft.](./media/tutorial-designer-automobile-price-train-score/launch-designer.png)
+
+1. Manually rebuild your experiment with designer modules.
+
+ Consult the [module-mapping table](migrate-overview.md#studio-classic-and-designer-module-mapping) to find replacement modules. Many of Studio (classic)'s most popular modules have identical versions in the designer.
+
+ > [!Important]
+ > If your experiment uses the Execute R Script module, you need to perform additional steps to migrate your experiment. For more information, see [Migrate R Script modules](migrate-execute-r-script.md).
+
+1. Adjust parameters.
+
+    Select each module and adjust the parameters in the module settings panel to the right. Use the parameters to recreate the functionality of your Studio (classic) experiment. For more information on each module, see the [module reference](algorithm-module-reference/module-reference.md).
+
+## Submit a run and check results
+
+After you recreate your Studio (classic) experiment, it's time to submit a **pipeline run**.
+
+A pipeline run executes on a **compute target** attached to your workspace. You can set a default compute target for the entire pipeline, or you can specify compute targets on a per-module basis.
+
+Once you submit a run from a pipeline draft, it turns into a **pipeline run**. Each pipeline run is recorded and logged in Azure Machine Learning.
+
+To set a default compute target for the entire pipeline:
+1. Select the **Gear icon** ![Gear icon in the designer](./media/tutorial-designer-automobile-price-train-score/gear-icon.png) next to the pipeline name.
+1. Select **Select compute target**.
+1. Select an existing compute, or create a new compute by following the on-screen instructions.
+
+Now that your compute target is set, you can submit a pipeline run:
+
+1. At the top of the canvas, select **Submit**.
+1. Select **Create new** to create a new experiment.
+
+ Experiments organize similar pipeline runs together. If you run a pipeline multiple times, you can select the same experiment for successive runs. This is useful for logging and tracking.
+1. Enter an experiment name. Then, select **Submit**.
+
+    The first run may take up to 20 minutes. Since the default compute settings have a minimum node size of 0, the designer must allocate resources after being idle. Successive runs take less time, since the nodes are already allocated. To speed up the running time, you can create a compute resource with a minimum node size of 1 or greater.
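+
+If you prefer to create that compute resource with code, the following sketch uses the Python SDK. The VM size, node counts, and cluster name are placeholder assumptions:
+
+```python
+from azureml.core import Workspace
+from azureml.core.compute import AmlCompute, ComputeTarget
+
+ws = Workspace.from_config()
+
+# Keep one node warm so designer runs skip the cold-start allocation.
+config = AmlCompute.provisioning_configuration(
+    vm_size="STANDARD_DS3_V2",
+    min_nodes=1,
+    max_nodes=4,
+    idle_seconds_before_scaledown=1800,
+)
+compute = ComputeTarget.create(ws, "designer-cpu-cluster", config)
+compute.wait_for_completion(show_output=True)
+```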
+
+After the run finishes, you can check the results of each module:
+
+1. Right-click the module whose output you want to see.
+1. Select either **Visualize**, **View Output**, or **View Log**.
+
+ - **Visualize**: Preview the results dataset.
+ - **View Output**: Open a link to the output storage location. Use this to explore or download the output.
+ - **View Log**: View driver and system logs. Use the **70_driver_log** to see information related to your user-submitted script such as errors and exceptions.
+
+> [!IMPORTANT]
+> Designer modules use open source Python packages, compared to C# packages in Studio (classic). As a result, module output may vary slightly between the designer and Studio (classic).
++
+## Next steps
+
+In this article, you learned how to rebuild a Studio (classic) experiment in Azure Machine Learning. The next step is to [rebuild web services in Azure Machine Learning](migrate-rebuild-web-service.md).
++
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. **Rebuild a Studio (classic) training pipeline**.
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Migrate Rebuild Integrate With Client App https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-rebuild-integrate-with-client-app.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Consume pipeline endpoints'
+description: Integrate pipeline endpoints with client applications in Azure Machine Learning.
+++++++ Last updated : 03/08/2021++
+# Consume pipeline endpoints from client applications
++
+In this article, you learn how to integrate client applications with Azure Machine Learning endpoints. For more information on writing application code, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
+
+This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see [the migration overview article](migrate-overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- An [Azure Machine Learning real-time endpoint or pipeline endpoint](migrate-rebuild-web-service.md).
++
+## Consume a real-time endpoint
+
+If you deployed your model as a **real-time endpoint**, you can find its REST endpoint and pre-generated consumption code in C#, Python, and R:
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
+1. Go to the **Endpoints** tab.
+1. Select your real-time endpoint.
+1. Select **Consume**.
+
+> [!NOTE]
+> You can also find the Swagger specification for your endpoint in the **Details** tab. Use the Swagger definition to understand your endpoint schema. For more information on Swagger definition, see [Swagger official documentation](https://swagger.io/docs/specification/2-0/what-is-swagger/).
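+
+For example, the following Python sketch calls a real-time endpoint with the `requests` library. The scoring URI, key, and request body are placeholders; copy the real values and input schema from the **Consume** tab:
+
+```python
+import json
+import requests
+
+scoring_uri = "https://<your-endpoint>.<region>.azurecontainer.io/score"  # from the Consume tab
+api_key = "<your-primary-key>"                                            # from the Consume tab
+
+headers = {
+    "Content-Type": "application/json",
+    "Authorization": f"Bearer {api_key}",
+}
+
+# The input schema depends on your inference pipeline; this shape is only an example.
+payload = {"Inputs": {"WebServiceInput0": [{"feature_1": 1.0, "feature_2": "a"}]}}
+
+response = requests.post(scoring_uri, data=json.dumps(payload), headers=headers)
+print(response.json())
+```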
++
+## Consume a pipeline endpoint
+
+There are two ways to consume a pipeline endpoint:
+
+- REST API calls
+- Integration with Azure Data Factory
+
+### Use REST API calls
+
+Call the REST endpoint from your client application. You can use the Swagger specification for your endpoint to understand its schema:
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
+1. Go to the **Endpoints** tab.
+1. Select **Pipeline endpoints**.
+1. Select your pipeline endpoint.
+1. In the **Pipeline endpoint overview** pane, select the link under **REST endpoint documentation**.
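+
+As an illustration, the following Python sketch submits a run to a pipeline endpoint. The endpoint URL is a placeholder copied from the **Pipeline endpoint overview** pane, and the SDK is used only to obtain an Azure Active Directory token:
+
+```python
+import requests
+from azureml.core.authentication import InteractiveLoginAuthentication
+
+rest_endpoint = "https://<region>.api.azureml.ms/pipelines/v1.0/..."  # placeholder endpoint URL
+
+auth = InteractiveLoginAuthentication()
+headers = auth.get_authentication_header()  # Authorization: Bearer <token>
+
+# The experiment name groups the submitted runs in the studio.
+response = requests.post(rest_endpoint, headers=headers, json={"ExperimentName": "batch-scoring"})
+print(response.json())  # the response includes the ID of the submitted run
+```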
+
+### Use Azure Data Factory
+
+You can call your Azure Machine Learning pipeline as a step in an Azure Data Factory pipeline. For more information, see [Execute Azure Machine Learning pipelines in Azure Data Factory](../data-factory/transform-data-machine-learning-service.md).
++
+## Next steps
+
+In this article, you learned how to find schema and sample code for your pipeline endpoints. For more information on consuming endpoints from the client application, see [Consume an Azure Machine Learning endpoint](how-to-consume-web-service.md).
+
+See the rest of the articles in the Azure Machine Learning migration series:
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. **Integrate an Azure Machine Learning web service with client apps**.
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-rebuild-web-service.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild web service'
+description: Rebuild Studio (classic) web services as pipeline endpoints in Azure Machine Learning
+++++++ Last updated : 03/08/2021++
+# Rebuild a Studio (classic) web service in Azure Machine Learning
++
+In this article, you learn how to rebuild an ML Studio (classic) web service as an **endpoint** in Azure Machine Learning.
+
+Use Azure Machine Learning pipeline endpoints to make predictions, retrain models, or run any generic pipeline. The REST endpoint lets you run pipelines from any platform.
+
+This article is part of the Studio (classic) to Azure Machine Learning migration series. For more information on migrating to Azure Machine Learning, see the [migration overview article](migrate-overview.md).
+
+> [!NOTE]
+> This migration series focuses on the drag-and-drop designer. For more information on deploying models programmatically, see [Deploy machine learning models in Azure](how-to-deploy-and-where.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- An Azure Machine Learning training pipeline. For more information, see [Rebuild a Studio (classic) experiment in Azure Machine Learning](migrate-rebuild-experiment.md).
+
+## Real-time endpoint vs pipeline endpoint
+
+Studio (classic) web services have been replaced by **endpoints** in Azure Machine Learning. Use the following table to choose which endpoint type to use:
+
+|Studio (classic) web service| Azure Machine Learning replacement
+|||
+|Request/respond web service (real-time prediction)|Real-time endpoint|
+|Batch web service (batch prediction)|Pipeline endpoint|
+|Retraining web service (retraining)|Pipeline endpoint|
++
+## Deploy a real-time endpoint
+
+In Studio (classic), you used a **REQUEST/RESPOND web service** to deploy a model for real-time predictions. In Azure Machine Learning, you use a **real-time endpoint**.
+
+There are multiple ways to deploy a model in Azure Machine Learning. One of the simplest ways is to use the designer to automate the deployment process. Use the following steps to deploy a model as a real-time endpoint:
+
+1. Run your completed training pipeline at least once.
+1. After the run completes, at the top of the canvas, select **Create inference pipeline** > **Real-time inference pipeline**.
+
+ ![Create realtime inference pipeline](./media/migrate-rebuild-web-service/create-inference-pipeline.png)
+
+ The designer converts the training pipeline into a real-time inference pipeline. A similar conversion also occurs in Studio (classic).
+
+ In the designer, the conversion step also [registers the trained model to your Azure Machine Learning workspace](how-to-deploy-and-where.md#registermodel).
+
+1. Select **Submit** to run the real-time inference pipeline, and verify that it runs successfully.
+
+1. After you verify the inference pipeline, select **Deploy**.
+
+1. Enter a name for your endpoint and a compute type.
+
+ The following table describes your deployment compute options in the designer:
+
+ | Compute target | Used for | Description | Creation |
+ | -- | -- | -- | -- |
+ |[Azure Kubernetes Service (AKS)](how-to-deploy-azure-kubernetes-service.md) |Real-time inference|Large-scale, production deployments. Fast response time and service autoscaling.| User-created. For more information, see [Create compute targets](how-to-create-attach-compute-studio.md#inference-clusters). |
+ |[Azure Container Instances](how-to-deploy-azure-container-instance.md)|Testing or development | Small-scale, CPU-based workloads that require less than 48 GB of RAM.| Automatically created by Azure Machine Learning.
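+
+As an alternative to the designer steps above, the following sketch shows the general shape of a code-first deployment to Azure Container Instances. The model, entry script, and environment names are placeholder assumptions:
+
+```python
+from azureml.core import Model, Workspace
+from azureml.core.environment import Environment
+from azureml.core.model import InferenceConfig
+from azureml.core.webservice import AciWebservice
+
+ws = Workspace.from_config()
+model = Model(ws, name="my-registered-model")        # placeholder registered model
+
+env = Environment.get(ws, name="my-inference-env")   # placeholder environment in the workspace
+inference_config = InferenceConfig(entry_script="score.py", environment=env)
+
+# ACI is the small-scale testing option described in the table above.
+deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)
+
+service = Model.deploy(ws, "my-realtime-endpoint", [model], inference_config, deployment_config)
+service.wait_for_deployment(show_output=True)
+print(service.scoring_uri)
+```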
+
+### Test the real-time endpoint
+
+After deployment completes, you can see more details and test your endpoint:
+
+1. Go to the **Endpoints** tab.
+1. Select your endpoint.
+1. Select the **Test** tab.
+
+ ![Screenshot showing the Endpoints tab with the Test endpoint button](./media/migrate-rebuild-web-service/test-realtime-endpoint.png)
+
+## Publish a pipeline endpoint for batch prediction or retraining
+
+You can also use your training pipeline to create a **pipeline endpoint** instead of a real-time endpoint. Use **pipeline endpoints** to perform either batch prediction or retraining.
+
+Pipeline endpoints replace Studio (classic) **batch execution endpoints** and **retraining web services**.
+
+### Publish a pipeline endpoint for batch prediction
+
+Publishing a batch prediction endpoint is similar to the real-time endpoint.
+
+Use the following steps to publish a pipeline endpoint for batch prediction:
+
+1. Run your completed training pipeline at least once.
+
+1. After the run completes, at the top of the canvas, select **Create inference pipeline** > **Batch inference pipeline**.
+
+ ![Screenshot showing the create inference pipeline button on a training pipeline](./media/migrate-rebuild-web-service/create-inference-pipeline.png)
+
+ The designer converts the training pipeline into a batch inference pipeline. A similar conversion also occurs in Studio (classic).
+
+ In the designer, this step also [registers the trained model to your Azure Machine Learning workspace](how-to-deploy-and-where.md#registermodel).
+
+1. Select **Submit** to run the batch inference pipeline and verify that it successfully completes.
+
+1. After you verify the inference pipeline, select **Publish**.
+
+1. Create a new pipeline endpoint or select an existing one.
+
+ A new pipeline endpoint creates a new REST endpoint for your pipeline.
+
+ If you select an existing pipeline endpoint, you don't overwrite the existing pipeline. Instead, Azure Machine Learning versions each pipeline in the endpoint. You can specify which version to run in your REST call. You must also set a default pipeline if the REST call doesn't specify a version.
++
+ ### Publish a pipeline endpoint for retraining
+
+To publish a pipeline endpoint for retraining, you must already have a pipeline draft that trains a model. For more information on building a training pipeline, see [Rebuild a Studio (classic) experiment](migrate-rebuild-experiment.md).
+
+To reuse your pipeline endpoint for retraining, you must create a **pipeline parameter** for your input dataset. This lets you dynamically set your training dataset, so that you can retrain your model.
+
+Use the following steps to publish a retraining pipeline endpoint:
+
+1. Run your training pipeline at least once.
+1. After the run completes, select the dataset module.
+1. In the module details pane, select **Set as pipeline parameter**.
+1. Provide a descriptive name like "InputDataset".
+
+ ![Screenshot highlighting how to create a pipeline parameter](./media/migrate-rebuild-web-service/create-pipeline-parameter.png)
+
+    This creates a pipeline parameter for your input dataset. When you call your pipeline endpoint for training, you can specify a new dataset to retrain the model, as shown in the sketch after these steps.
+
+1. Select **Publish**.
+
+ ![Screenshot highlighting the Publish button on a training pipeline](./media/migrate-rebuild-web-service/create-retraining-pipeline.png)
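+
+Calling the retraining endpoint then looks much like the batch prediction case. The sketch below is only illustrative: the endpoint URL is a placeholder, and the exact parameter payload accepted for the `InputDataset` parameter created above is defined by your endpoint's Swagger document:
+
+```python
+import requests
+from azureml.core.authentication import InteractiveLoginAuthentication
+
+rest_endpoint = "https://<region>.api.azureml.ms/pipelines/v1.0/..."  # placeholder endpoint URL
+
+auth = InteractiveLoginAuthentication()
+headers = auth.get_authentication_header()
+
+payload = {
+    "ExperimentName": "retraining-run",
+    # "InputDataset" matches the pipeline parameter name created above; check the
+    # endpoint's Swagger document for the exact value format it expects.
+    "ParameterAssignments": {"InputDataset": "<new-training-data>"},
+}
+
+response = requests.post(rest_endpoint, headers=headers, json=payload)
+print(response.json())
+```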
++
+## Call your pipeline endpoint from the studio
+
+After you create your batch inference or retraining pipeline endpoint, you can call your endpoint directly from your browser.
+
+1. Go to the **Pipelines** tab, and select **Pipeline endpoints**.
+1. Select the pipeline endpoint you want to run.
+1. Select **Submit**.
+
+ You can specify any pipeline parameters after you select **Submit**.
+
+## Next steps
+
+In this article, you learned how to rebuild a Studio (classic) web service in Azure Machine Learning. The next step is to [integrate your web service with client apps](migrate-rebuild-integrate-with-client-app.md).
++
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. [Migrate dataset](migrate-register-dataset.md).
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. **Rebuild a Studio (classic) web service**.
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Migrate Register Dataset https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/migrate-register-dataset.md
+
+ Title: 'ML Studio (classic): Migrate to Azure Machine Learning - Rebuild dataset'
+description: Rebuild Studio (classic) datasets in Azure Machine Learning designer
+++++++ Last updated : 02/04/2021++
+# Migrate a Studio (classic) dataset to Azure Machine Learning
++
+In this article, you learn how to migrate a Studio (classic) dataset to Azure Machine Learning. For more information on migrating from Studio (classic), see [the migration overview article](migrate-overview.md).
+
+You have three options to migrate a dataset to Azure Machine Learning. Read each section to determine which option is best for your scenario.
++
+|Where is the data? | Migration option |
+|||
+|In Studio (classic) | Option 1: [Download the dataset from Studio (classic) and upload it to Azure Machine Learning](#download-the-dataset-from-studio-classic). |
+|Cloud storage | Option 2: [Register a dataset from a cloud source](#import-data-from-cloud-sources). <br><br> Option 3: [Use the Import Data module to get data from a cloud source](#import-data-from-cloud-sources). |
+
+> [!NOTE]
+> Azure Machine Learning also supports [code-first workflows](how-to-create-register-datasets.md) for creating and managing datasets.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An Azure Machine Learning workspace. [Create an Azure Machine Learning workspace](how-to-manage-workspace.md#create-a-workspace).
+- A Studio (classic) dataset to migrate.
++
+## Download the dataset from Studio (classic)
+
+The simplest way to migrate a Studio (classic) dataset to Azure Machine Learning is to download your dataset and register it in Azure Machine Learning. This creates a new copy of your dataset and uploads it to an Azure Machine Learning datastore.
+
+You can download the following Studio (classic) dataset types directly.
+
+* Plain text (.txt)
+* Comma-separated values (CSV) with a header (.csv) or without (.nh.csv)
+* Tab-separated values (TSV) with a header (.tsv) or without (.nh.tsv)
+* Excel file
+* Zip file (.zip)
+
+To download datasets directly:
+1. Go to your Studio (classic) workspace ([https://studio.azureml.net](https://studio.azureml.net)).
+1. In the left navigation bar, select the **Datasets** tab.
+1. Select the dataset(s) you want to download.
+1. In the bottom action bar, select **Download**.
+
+ ![Screenshot showing how to download a dataset in Studio (classic)](./media/migrate-register-dataset/download-dataset.png)
+
+For the following data types, you must use the **Convert to CSV** module to download datasets.
+
+* SVMLight data (.svmlight)
+* Attribute Relation File Format (ARFF) data (.arff)
+* R object or workspace file (.RData)
+* Dataset type (.data). Dataset type is the Studio (classic) internal data type for module output.
+
+To convert your dataset to a CSV and download the results:
+
+1. Go to your Studio (classic) workspace ([https://studio.azureml.net](https://studio.azureml.net)).
+1. Create a new experiment.
+1. Drag and drop the dataset you want to download onto the canvas.
+1. Add a **Convert to CSV** module.
+1. Connect the **Convert to CSV** input port to the output port of your dataset.
+1. Run the experiment.
+1. Right-click the **Convert to CSV** module.
+1. Select **Results dataset** > **Download**.
+
+    ![Screenshot showing how to set up a convert to CSV pipeline](./media/migrate-register-dataset/csv-download-dataset.png)
+
+### Upload your dataset to Azure Machine Learning
+
+After you download the data file, you can register the dataset in Azure Machine Learning:
+
+1. Go to Azure Machine Learning studio ([ml.azure.com](https://ml.azure.com)).
+1. In the left navigation pane, select the **Datasets** tab.
+1. Select **Create dataset** > **From local files**.
+ ![Screenshot showing the datasets tab and the button for creating a local file](./media/migrate-register-dataset/register-dataset.png)
+1. Enter a name and description.
+1. For **Dataset type**, select **Tabular**.
+
+ > [!NOTE]
+ > You can also upload ZIP files as datasets. To upload a ZIP file, select **File** for **Dataset type**.
+
+1. For **Datastore and file selection**, select the datastore you want to upload your dataset file to.
+
+ By default, Azure Machine Learning stores the dataset to the default workspace blobstore. For more information on datastores, see [Connect to storage services](how-to-access-data.md).
+
+1. Set the data parsing settings and schema for your dataset. Then, confirm your settings.
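+
+If you'd rather script this step, the following sketch uploads the downloaded file to the workspace's default datastore and registers it as a tabular dataset. The local file path and dataset name are placeholders:
+
+```python
+from azureml.core import Dataset, Workspace
+
+ws = Workspace.from_config()
+datastore = ws.get_default_datastore()
+
+# Upload the file you exported from Studio (classic) to the default blob datastore.
+datastore.upload_files(
+    files=["./automobile-price.csv"],   # placeholder local file
+    target_path="classic-exports/",
+    overwrite=True,
+)
+
+# Register it as a tabular dataset so you can drag it onto the designer canvas.
+dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "classic-exports/automobile-price.csv"))
+dataset.register(workspace=ws, name="automobile-price", create_new_version=True)
+```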
+
+## Import data from cloud sources
+
+If your data is already in a cloud storage service and you want to keep it in its native location, you can use either of the following options:
+
+|Ingestion method|Description|
+|| |
+|Register an Azure Machine Learning dataset|Ingest data from local and online data sources (Blob, ADLS Gen1, ADLS Gen2, File share, SQL DB). <br><br>Creates a reference to the data source, which is lazily evaluated at runtime. Use this option if you repeatedly access this dataset and want to enable advanced data features like data versioning and monitoring.
+|Import Data module|Ingest data from online data sources (Blob, ADLS Gen1, ADLS Gen2, File share, SQL DB). <br><br> The dataset is only imported to the current designer pipeline run.
++
+>[!Note]
+> Studio (classic) users should note that the following cloud sources are not natively supported in Azure Machine Learning:
+> - Hive Query
+> - Azure Table
+> - Azure Cosmos DB
+> - On-premises SQL Database
+>
+> We recommend that users migrate their data to a supported storage service using Azure Data Factory.
+
+### Register an Azure Machine Learning dataset
+
+Use the following steps to register a dataset to Azure Machine Learning from a cloud service:
+
+1. [Create a datastore](how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
+
+1. [Register a dataset](how-to-connect-data-ui.md#create-datasets). If you are migrating a Studio (classic) dataset, select the **Tabular** dataset setting.
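+
+The same two steps can also be scripted. The sketch below registers a blob container as a datastore and then registers a tabular dataset from it; the account, container, and file names are placeholders:
+
+```python
+from azureml.core import Dataset, Datastore, Workspace
+
+ws = Workspace.from_config()
+
+# Step 1: register an existing blob container as a datastore (placeholder names and key).
+datastore = Datastore.register_azure_blob_container(
+    workspace=ws,
+    datastore_name="classic_data",
+    container_name="my-container",
+    account_name="mystorageaccount",
+    account_key="<storage-account-key>",
+)
+
+# Step 2: register a tabular dataset that references a CSV already in that container.
+dataset = Dataset.Tabular.from_delimited_files(path=(datastore, "data/automobile-price.csv"))
+dataset.register(workspace=ws, name="automobile-price-cloud")
+```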
+
+After you register a dataset in Azure Machine Learning, you can use it in designer:
+
+1. Create a new designer pipeline draft.
+1. In the module palette to the left, expand the **Datasets** section.
+1. Drag your registered dataset onto the canvas.
+
+### Use the Import Data module
+
+Use the following steps to import data directly to your designer pipeline:
+
+1. [Create a datastore](how-to-connect-data-ui.md#create-datastores), which links the cloud storage service to your Azure Machine Learning workspace.
+
+After you create the datastore, you can use the [**Import Data**](algorithm-module-reference/import-data.md) module in the designer to ingest data from it:
+
+1. Create a new designer pipeline draft.
+1. In the module palette to the left, find the **Import Data** module and drag it to the canvas.
+1. Select the **Import Data** module, and use the settings in the right panel to configure your data source.
+
+## Next steps
+
+In this article, you learned how to migrate a Studio (classic) dataset to Azure Machine Learning. The next step is to [rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
++
+See the other articles in the Studio (classic) migration series:
+
+1. [Migration overview](migrate-overview.md).
+1. **Migrate datasets**.
+1. [Rebuild a Studio (classic) training pipeline](migrate-rebuild-experiment.md).
+1. [Rebuild a Studio (classic) web service](migrate-rebuild-web-service.md).
+1. [Integrate an Azure Machine Learning web service with client apps](migrate-rebuild-integrate-with-client-app.md).
+1. [Migrate Execute R Script](migrate-execute-r-script.md).
machine-learning Overview What Is Machine Learning Studio https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/overview-what-is-machine-learning-studio.md
Previously updated : 08/24/2020 Last updated : 08/23/2021 adobe-target: true
In this article you learn:
> - The differences between [Azure Machine Learning studio and ML Studio (classic)](#ml-studio-classic-vs-azure-machine-learning-studio). We recommend that you use the most up-to-date browser that's compatible with your operating system. The following browsers are supported:
- * Microsoft Edge (The new Microsoft Edge, latest version. Not Microsoft Edge legacy)
+ * Microsoft Edge (latest version)
* Safari (latest version, Mac only) * Chrome (latest version) * Firefox (latest version)
Even if you're an experienced developer, the studio can simplify how you manage
## ML Studio (classic) vs Azure Machine Learning studio
-Released in 2015, **ML Studio (classic)** was the first drag-and-drop machine learning model builder in Azure
-.
-
-**ML Studio (classic)** is a standalone service that only offers a visual experience. Studio (classic) does not interoperate with Azure Machine Learning.
+Released in 2015, **ML Studio (classic)** was the first drag-and-drop machine learning model builder in Azure. **ML Studio (classic)** is a standalone service that only offers a visual experience. Studio (classic) does not interoperate with Azure Machine Learning.
**Azure Machine Learning** is a separate, and modernized, service that delivers a complete data science platform. It supports both code-first and low-code experiences. **Azure Machine Learning studio** is a web portal *in* Azure Machine Learning that contains low-code and no-code options for project authoring and asset management.
-We recommend that new users choose **Azure Machine Learning**, instead of ML Studio (classic), for the latest range of data science tools. If you are an existing ML Studio (classic) user, consider [migrating to Azure Machine Learning](classic/migrate-overview.md).
-
-Here are some of the benefits of switching to Azure Machine Learning:
+If you're a new user, choose **Azure Machine Learning**, instead of ML Studio (classic). As a complete ML platform, Azure Machine Learning offers:
- Scalable compute clusters for large-scale training. - Enterprise security and governance.
machine-learning Reference Yaml Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/machine-learning/reference-yaml-overview.md
Reference | URI
- | - [Managed online (real-time)](reference-yaml-deployment-managed-online.md) | https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json [Managed batch](reference-yaml-deployment-managed-batch.md) | https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
-[Kubernetes (k8s) online (real-time)](reference-yaml-deployment-k8s-online.md) | https://azuremlschemas.azureedge.net/latest/k8sOnelineDeployment.schema.json
+[Kubernetes (k8s) online (real-time)](reference-yaml-deployment-k8s-online.md) | https://azuremlschemas.azureedge.net/latest/k8sOnlineDeployment.schema.json
## Next steps
mysql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/mysql/flexible-server/quickstart-create-connect-server-vnet.md
ssh -i .\Downloads\myKey1.pem azureuser@10.111.12.123
You need to install mysql-client tool to be able to connect to the server. ```bash
-sude apt-getupdate
+sudo apt-get update
sudo apt-get install mysql-client ```
networking Networking Partners Msp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/networking/networking-partners-msp.md
Use the links in this section for more information about managed cloud networkin
|[Interxion](https://www.interxion.com/products/interconnection/cloud-connect/support-your-cloud-strategy/)|[Azure Networking Assessment - 5 Days](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/interxionhq.inxn_azure_networking_assessment)||||| |[IX Reach](https://www.ixreach.com/services/sdn-cloud-connect/)||[ExpressRoute by IX Reach, a BSO company](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/ixreach.cloudconnect?tab=Overview)|||| |[KoçSistem](https://azure.kocsistem.com.tr/en)|[KoçSistem Managed Cloud Services for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.kocsistemcloudmanagementtool?tab=Overview)|[KoçSistem Azure ExpressRoute Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_express_route?tab=Overview)|[KoçSistem Azure Virtual WAN Management](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_virtual_wan?tab=Overview)||[KoçSistem Azure Security Center Managed Service](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/kocsistem.ks_azure_security_center?tab=Overview)|
-|[Liquid Telecom](https://liquidcloud.africa/)|[Cloud Readiness - 2 Hour Assessment](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/liquidtelecommunicationsoperationslimited.liquid_cloud_readiness_assessment);[Liquid Azure Expert Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.5dab29ab-bb14-4df8-8978-9a8608a41ad7?tab=Overview)|[Liquid Managed ExpressRoute for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.42cfee0b-8f07-4948-94b0-c9fc3e1ddc42?tab=Overview)||||
+|[Liquid Telecom](https://liquidcloud.africa/)|[Cloud Readiness - 2 Hour Assessment](https://azuremarketplace.microsoft.com/marketplace/consulting-services/incremental_group_ltd.cloud-readiness-assess);[Liquid Azure Expert Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.5dab29ab-bb14-4df8-8978-9a8608a41ad7?tab=Overview)|[Liquid Managed ExpressRoute for Azure](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/liquidtelecommunicationsoperationslimited.42cfee0b-8f07-4948-94b0-c9fc3e1ddc42?tab=Overview)||||
|[Lumen](https://www.lumen.com/en-us/solutions/hybrid-cloud.html)||[ExpressRoute Consulting Svcs: 8-wk Implementation](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/centurylink2362604-2362604.centurylink_consultingservicesforexpressroute); [Lumen Landing Zone for ExpressRoute 1 Day](https://azuremarketplace.microsoft.com/en-us/marketplace/consulting-services/centurylinklimited.centurylink_landing_zone_for_azure_expressroute)|||| |[Macquarie Telecom](https://macquariecloudservices.com/azure-managed-services/)|[Azure Managed Services by Macquarie Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_services?tab=Overview); [Azure Extend by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_extend?tab=Overview)||[Azure Deploy by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview); [SD-WAN Virtual Edge offer by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.azure_deploy?tab=Overview)||[Managed Security by Macquarie Cloud Services](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/macquariecloudservices.managed_security?tab=Overview)| |[Megaport](https://www.megaport.com/services/microsoft-expressroute/)||[Managed Routing Service for ExpressRoute](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/megaport1582290752989.megaport_mcr?tab=Overview)||||
notification-hubs Configure Microsoft Push Notification Service https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/configure-microsoft-push-notification-service.md
Previously updated : 08/04/2020 Last updated : 08/23/2021 ms.lastreviewed: 03/25/2019
ms.lastreviewed: 03/25/2019
# Configure Microsoft Push Notification Service (MPNS) settings in the Azure portal
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ This article shows you how to configure Microsoft Push Notification Service (MPNS) settings for an Azure notification hub by using the Azure portal. ## Prerequisites
notification-hubs Configure Notification Hub Portal Pns Settings https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/configure-notification-hub-portal-pns-settings.md
Previously updated : 06/22/2020 Last updated : 08/23/2021 ms.lastreviewed: 02/14/2019
For information, see [Send notifications to UWP apps by using Azure Notification
## Microsoft Push Notification Service for Windows Phone
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ To set up Microsoft Push Notification Service (MPNS) for Windows Phone: 1. In the Azure portal, on the **Notification Hub** page, select **Windows Phone (MPNS)** from the left menu.
notification-hubs Notification Hubs Aspnet Cross Platform Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-aspnet-cross-platform-notification.md
mobile-windows ms.devlang: multiple Previously updated : 09/14/2020 Last updated : 08/23/2021 ms.lastreviewed: 10/02/2019
This article demonstrates how to take advantage of templates to send a notificat
## Send cross-platform notifications using templates
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ This section uses the sample code you built in the [Send notifications to specific users by using Azure Notification Hubs] tutorial. You can [download the complete sample from GitHub](https://github.com/Azure/azure-notificationhubs-dotnet/tree/master/Samples/NotifyUsers). To send cross-platform notifications using templates, do the following:
notification-hubs Notification Hubs Java Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-java-push-notification-tutorial.md
documentationcenter: '' java ms.devlang: java Previously updated : 01/04/2019 Last updated : 08/23/2021 -+ ms.lastreviewed: 01/04/2019
The SDK currently supports:
* Async operations via Java NIO * Supported platforms: APNS (iOS), FCM (Android), WNS (Windows Store apps), MPNS(Windows Phone), ADM (Amazon Kindle Fire), Baidu (Android without Google services)
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ ## SDK Usage ### Compile and build
notification-hubs Notification Hubs Nodejs Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-nodejs-push-notification-tutorial.md
documentationcenter: nodejs na ms.devlang: javascript Previously updated : 04/29/2020 Last updated : 08/23/2021 -+ ms.lastreviewed: 01/04/2019
The `NotificationHubService` object exposes the following object instances for s
- **Windows Phone** - use the `MpnsService` object, which is available at `notificationHubService.mpns` - **Universal Windows Platform** - use the `WnsService` object, which is available at `notificationHubService.wns`
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ ### How to: Send push notifications to Android applications The `GcmService` object provides a `send` method that can be used to send push notifications to Android applications. The `send` method accepts the following parameters:
notification-hubs Notification Hubs Push Notification Registration Management https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-push-notification-registration-management.md
documentationcenter: .net mobile-multiple ms.devlang: dotnet Previously updated : 07/07/2020 Last updated : 08/23/2021 ms.lastreviewed: 04/08/2019
Registrations and installations must contain a valid PNS handle for each device/
### Templates
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ If you want to use [Templates](notification-hubs-templates-cross-platform-push-messages.md), the device installation also holds all templates associated with that device in a JSON format (see sample above). The template names help target different templates for the same device. Each template name maps to a template body and an optional set of tags. Moreover, each platform can have additional template properties. For Windows Store (using WNS) and Windows Phone 8 (using MPNS), an additional set of headers can be part of the template. In the case of APNs, you can set an expiry property to either a constant or to a template expression. For a complete listing of the installation properties see, [Create or Overwrite an Installation with REST](/rest/api/notificationhubs/create-overwrite-installation) topic.
notification-hubs Notification Hubs Python Push Notification Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-python-push-notification-tutorial.md
documentationcenter: '' + python ms.devlang: php Previously updated : 01/04/2019 Last updated : 08/23/2021 -+ ms.lastreviewed: 01/04/2019
def generate_sas_token(self):
### Send a notification using HTTP REST API
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ First, let us define a class representing a notification. ```python
notification-hubs Notification Hubs Windows Mobile Push Notifications Mpns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-windows-mobile-push-notifications-mpns.md
documentationcenter: windows
keywords: push notification,push notification,windows phone push mobile-windows-phone ms.devlang: dotnet Previously updated : 01/04/2019 Last updated : 08/23/2021 -+ ms.lastreviewed: 01/04/2019 # Tutorial: Send push notifications to Windows Phone apps using Notification Hubs
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ [!INCLUDE [notification-hubs-selector-get-started](../../includes/notification-hubs-selector-get-started.md)] This tutorial shows you how to use Azure Notification Hubs to send push notifications to a Windows Phone 8 or Windows Phone 8.1 Silverlight applications. If you are targeting Windows Phone 8.1 (non-Silverlight), see the [Windows Universal](notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md) version of this tutorial.
notification-hubs Notification Hubs Windows Notification Dotnet Push Xplat Segmented Wns https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-windows-notification-dotnet-push-xplat-segmented-wns.md
documentationcenter: windows mobile-windows ms.devlang: dotnet Previously updated : 09/30/2019 Last updated : 08/23/2021 -+ ms.lastreviewed: 03/22/2019 # Tutorial: Send notifications to specific devices running Universal Windows Platform applications
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ [!INCLUDE [notification-hubs-selector-breaking-news](../../includes/notification-hubs-selector-breaking-news.md)] ## Overview
The app is now complete. It can store a set of categories in the device local st
## Create a console app to send tagged notifications ++ [!INCLUDE [notification-hubs-send-categories-template](../../includes/notification-hubs-send-categories-template.md)] ## Run the console app to send tagged notifications
notification-hubs Notification Hubs Windows Store Dotnet Get Started Wns Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-windows-store-dotnet-get-started-wns-push-notification.md
mobile-windows
ms.devlang: dotnet Previously updated : 12/05/2019 Last updated : 08/23/2021 -+ ms.lastreviewed: 12/04/2019
Completing this tutorial is a prerequisite for all other Notification Hubs tutor
## Create an app in Windows Store
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ To send push notifications to UWP apps, associate your app to the Windows Store. Then, configure your notification hub to integrate with WNS. 1. Navigate to the [Windows Dev Center](https://partner.microsoft.com/dashboard/windows/first-run-experience), sign in with your Microsoft account, and then select **Create a new app**.
notification-hubs Notification Hubs Windows Store Dotnet Xplat Localized Wns Push Notification https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/notification-hubs/notification-hubs-windows-store-dotnet-xplat-localized-wns-push-notification.md
documentationcenter: windows mobile-windows ms.devlang: dotnet Previously updated : 03/22/2019 Last updated : 08/23/2021 -+ ms.lastreviewed: 03/22/2019
ms.lastreviewed: 03/22/2019
## Overview
+> [!NOTE]
+> Microsoft Push Notification Service (MPNS) has been deprecated and is no longer supported.
+ This tutorial shows you how to push localized notifications to mobile devices registered with the Notification Hubs service. In the tutorial, you update applications created in the [Tutorial: Send notifications to specific devices (Universal Windows Platform)](notification-hubs-windows-phone-push-xplat-segmented-mpns-notification.md) to support the following scenarios: - The Windows Store app allows client devices to specify a language, and to subscribe to different breaking news categories.
open-datasets Dataset Gnomad https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/open-datasets/dataset-gnomad.md
The Storage Account hosting this dataset is in the East US Azure region. Allocat
Storage Account: 'https://datasetgnomad.blob.core.windows.net/dataset/'
-Th data is available publicly without restrictions, and the azcopy tool is recommended for bulk operations. For example, to view the VCFs in release 3.0 of gnomAD:
+The data is available publicly without restrictions, and the AzCopy tool is recommended for bulk operations. For example, to view the VCFs in release 3.0 of gnomAD:
```powershell $ azcopy ls https://datasetgnomad.blob.core.windows.net/dataset/release/3.0/vcf/genomes
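# To download these files in bulk, azcopy copy with the --recursive flag can be used (a sketch; the local destination folder is a placeholder):
$ azcopy copy "https://datasetgnomad.blob.core.windows.net/dataset/release/3.0/vcf/genomes" "<local-destination-folder>" --recursive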
postgresql Quickstart Create Connect Server Vnet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/postgresql/flexible-server/quickstart-create-connect-server-vnet.md
Complete these steps to create a flexible server:
4. On the **Basics** tab, enter the **subscription**, **resource group**, **region**, and **server name**. With the default values, this will provision a PostgreSQL server of version 12 with General purpose pricing tier using 2 vCores, 8 GiB RAM, and 28 GiB storage. The backup retention is **seven** days. You can use **Development** workload to default to a lower-cost pricing tier.
- :::image type="content" source="./media/quickstart-create-connect-server-vnet/postgres-create-basics.png" alt-text="Screenshot that shows the Basics tab of the postgres flexible server page." lightbox="/media/quickstart-create-connect-server-vnet/postgres-create-basics.png":::
+ :::image type="content" source="./media/quickstart-create-connect-server-vnet/postgres-create-basics.png" alt-text="Screenshot that shows the Basics tab of the postgres flexible server page." lightbox="./media/quickstart-create-connect-server-vnet/postgres-create-basics.png":::
5. In the **Basics** tab, enter a unique **admin username** and **admin password**.
- :::image type="content" source="./media/quickstart-create-connect-server-vnet/db-administrator-account.png" alt-text="Screenshot that shows the admin user information page." lightbox="/media/quickstart-create-connect-server-vnet/db-administrator-account.png":::
+ :::image type="content" source="./media/quickstart-create-connect-server-vnet/db-administrator-account.png" alt-text="Screenshot that shows the admin user information page." lightbox="./media/quickstart-create-connect-server-vnet/db-administrator-account.png":::
6. Go to the **Networking** tab, and select **private access**. You can't change the connectivity method after you create the server. Select **Create virtual network** to create a new virtual network **vnetenvironment1**. Select **OK** once you have provided the virtual network name and subnet information.
ssh -i .\Downloads\myKey1.pem azureuser@10.111.12.123
You need to install the postgresql-client tool to be able to connect to the server. ```bash
-sude apt-getupdate
+sudo apt-get update
sudo apt-get install postgresql-client ```
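With the client installed, you can connect to the flexible server from the VM, for example (a minimal sketch; the server name, admin user name, and database name are placeholders):

```bash
# Connect to the default postgres database on the flexible server over the virtual network
psql -h <servername>.postgres.database.azure.com -U <adminusername> -d postgres
```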
search Cognitive Search Attach Cognitive Services https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/cognitive-search-attach-cognitive-services.md
You can omit the key and the Cognitive Services section for skillsets that consi
As noted, [Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) is a special case in that it requires a key, but is [metered by Cognitive Search](https://azure.microsoft.com/pricing/details/search/#pricing). > [!TIP]
-> To lower the cost of skillset processing, enable [incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) to cache and reuse any enrichments that are unaffected by changes made to a skillset. Caching requires Azure Storage (see [pricing](/pricing/details/storage/blobs/) but the cumulative cost of skillset execution is lower if existing enrichments can be reused, especially for skillsets that use image extraction and analysis.
+> To lower the cost of skillset processing, enable [incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) to cache and reuse any enrichments that are unaffected by changes made to a skillset. Caching requires Azure Storage (see [pricing](https://azure.microsoft.com/pricing/details/storage/blobs/)), but the cumulative cost of skillset execution is lower if existing enrichments can be reused, especially for skillsets that use image extraction and analysis.
## Same-region requirement
search Samples Javascript https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/samples-javascript.md
Code samples from the Azure SDK development team demonstrate API usage. You can
| Samples | Description | ||-|
-| [indexes](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/javascript/src/indexes) | Demonstrates how to create, update, get, list, and delete [search indexes](search-what-is-an-index.md). This sample category also includes a service statistic sample. |
-| [dataSourceConnections (for indexers)](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/javascript/src/dataSourceConnections) | Demonstrates how to create, update, get, list, and delete indexer data sources, required for indexer-based indexing of [supported Azure data sources](search-indexer-overview.md#supported-data-sources). |
-| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/javascript/src/indexers) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).|
-| [skillSet](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/javascript/src/skillSets) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached indexers, and that perform AI-based enrichment during indexing. |
-| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/javascript/src/synonymMaps) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). |
+| [indexes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [search indexes](search-what-is-an-index.md). This sample category also includes a service statistic sample. |
+| [dataSourceConnections (for indexers)](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/javascript/dataSourceConnectionOperations.js) | Demonstrates how to create, update, get, list, and delete indexer data sources, required for indexer-based indexing of [supported Azure data sources](search-indexer-overview.md#supported-data-sources). |
+| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).|
+| [skillSet](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached to indexers, and that perform AI-based enrichment during indexing. |
+| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). |
### TypeScript samples | Samples | Description | ||-|
-| [indexes](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/typescript/src/indexes) | Demonstrates how to create, update, get, list, and delete [search indexes](search-what-is-an-index.md). This sample category also includes a service statistic sample. |
-| [dataSourceConnections (for indexers)](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/typescript/src/dataSourceConnections) | Demonstrates how to create, update, get, list, and delete indexer data sources, required for indexer-based indexing of [supported Azure data sources](search-indexer-overview.md#supported-data-sources). |
-| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/typescript/src/indexers) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).|
-| [skillSet](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/typescript/src/skillSets) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached indexers, and that perform AI-based enrichment during indexing. |
-| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/typescript/src/synonymMaps) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). |
+| [indexes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/typescript/src) | Demonstrates how to create, update, get, list, and delete [search indexes](search-what-is-an-index.md). This sample category also includes a service statistic sample. |
+| [dataSourceConnections (for indexers)](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/dataSourceConnectionOperations.ts) | Demonstrates how to create, update, get, list, and delete indexer data sources, required for indexer-based indexing of [supported Azure data sources](search-indexer-overview.md#supported-data-sources). |
+| [indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/typescript/src) | Demonstrates how to create, update, get, list, reset, and delete [indexers](search-indexer-overview.md).|
+| [skillSet](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/skillSetOperations.ts) | Demonstrates how to create, update, get, list, and delete [skillsets](cognitive-search-working-with-skillsets.md) that are attached to indexers, and that perform AI-based enrichment during indexing. |
+| [synonymMaps](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/search/search-documents/samples/v11/typescript/src/synonymMapOperations.ts) | Demonstrates how to create, update, get, list, and delete [synonym maps](search-synonyms.md). |
## Doc samples
search Search Howto Create Indexers https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-howto-create-indexers.md
For Cognitive Search, the Azure SDKs implement generally available features. As
|--|--|-| | .NET | [SearchIndexerClient](/dotnet/api/azure.search.documents.indexes.searchindexerclient) | [DotNetHowToIndexers](https://github.com/Azure-Samples/search-dotnet-getting-started/tree/master/DotNetHowToIndexers) | | Java | [SearchIndexerClient](/java/api/com.azure.search.documents.indexes.searchindexerclient) | [CreateIndexerExample.java](https://github.com/Azure/azure-sdk-for-java/blob/master/sdk/search/azure-search-documents/src/samples/java/com/azure/search/documents/indexes/CreateIndexerExample.java) |
-| JavaScript | [SearchIndexerClient](/javascript/api/@azure/search-documents/searchindexerclient) | [Indexers](https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/search/search-documents/samples/javascript/src/indexers) |
+| JavaScript | [SearchIndexerClient](/javascript/api/@azure/search-documents/searchindexerclient) | [Indexers](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) |
| Python | [SearchIndexerClient](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexerclient) | [sample_indexers_operations.py](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/search/azure-search-documents/samples/sample_indexers_operations.py) | ## Run the indexer
search Search Sku Manage Costs https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-sku-manage-costs.md
For [AI enrichment](cognitive-search-concept-intro.md) using billable skills, yo
| [Custom Entity Lookup](cognitive-search-skill-custom-entity-lookup.md) | Metered by Azure Cognitive Search. See the [pricing page](https://azure.microsoft.com/pricing/details/search/#pricing) for details. | > [!TIP]
-> [Incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) lowers the cost of skillset processing by caching and reusing enrichments that are unaffected by changes made to a skillset. Caching requires Azure Storage (see [pricing](/pricing/details/storage/blobs/) but the cumulative cost of skillset execution is lower if existing enrichments can be reused.
+> [Incremental enrichment (preview)](cognitive-search-incremental-indexing-conceptual.md) lowers the cost of skillset processing by caching and reusing enrichments that are unaffected by changes made to a skillset. Caching requires Azure Storage (see [pricing](https://azure.microsoft.com/pricing/details/storage/blobs/)), but the cumulative cost of skillset execution is lower if existing enrichments can be reused.
## Tips for managing costs
search Search What Is An Index https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/search/search-what-is-an-index.md
For Cognitive Search, the Azure SDKs implement generally available features. As
|--|--|-| | .NET | [SearchIndexClient](/dotnet/api/azure.search.documents.indexes.searchindexclient) | [azure-search-dotnet-samples/quickstart/v11/](https://github.com/Azure-Samples/azure-search-dotnet-samples/tree/master/quickstart/v11) | | Java | [SearchIndexClient](/java/api/com.azure.search.documents.indexes.searchindexclient) | [CreateIndexExample.java](https://github.com/Azure/azure-sdk-for-java/blob/azure-search-documents_11.1.3/sdk/search/azure-search-documents/src/samples/java/com/azure/search/documents/indexes/CreateIndexExample.java) |
-| JavaScript | [SearchIndexClient](/javascript/api/@azure/search-documents/searchindexclient) | [Indexes](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/search/search-documents/samples/javascript/src/indexes) |
+| JavaScript | [SearchIndexClient](/javascript/api/@azure/search-documents/searchindexclient) | [Indexes](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/search/search-documents/samples/v11/javascript) |
| Python | [SearchIndexClient](/python/api/azure-search-documents/azure.search.documents.indexes.searchindexclient) | [sample_index_crud_operations.py](https://github.com/Azure/azure-sdk-for-python/blob/7cd31ac01fed9c790cec71de438af9c45cb45821/sdk/search/azure-search-documents/samples/sample_index_crud_operations.py) | ## Define fields
sentinel Best Practices Workspace Architecture https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/best-practices-workspace-architecture.md
When planning your Azure Sentinel workspace deployment, you must also design you
For more information, see [Design your Azure Sentinel workspace architecture](design-your-workspace-architecture.md) and [Sample workspace designs](sample-workspace-designs.md) for common scenarios, and [Pre-deployment activities and prerequisites for deploying Azure Sentinel](prerequisites.md).
+See our video: [Architecting SecOps for Success: Best Practices for Deploying Azure Sentinel](https://youtu.be/DyL9MEMhqmI)
++ ## Tenancy considerations While fewer workspaces are simpler to manage, you may have specific needs for multiple tenants and workspaces. For example, many organizations have a cloud environment that contains multiple [Azure Active Directory (Azure AD) tenants](../active-directory/develop/quickstart-create-new-tenant.md), resulting from mergers and acquisitions or due to identity separation requirements.
sentinel Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/best-practices.md
The Azure Sentinel documentation has more best practice guidance scattered throu
- [Create custom analytics rules to detect threats](detect-threats-custom.md) - [Use Jupyter Notebook to hunt for security threats](notebooks.md)
+For more information, also see our video: [Architecting SecOps for Success: Best Practices for Deploying Azure Sentinel](https://youtu.be/DyL9MEMhqmI)
+ ## Next steps To get started with Azure Sentinel, see:
sentinel Connect Ai Vectra Detect https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-ai-vectra-detect.md
Configure AI Vectra Detect to forward CEF-formatted Syslog messages to your Log
3. To use the relevant schema in Log Analytics for the AI Vectra Detect events, search for **CommonSecurityLog**.
-4. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+4. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
## Next steps
sentinel Connect Akamai Security Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-akamai-security-events.md
To get its logs into Azure Sentinel, configure your Akamai Security Events colle
- Log format – CEF - Log types – all available
- 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel Connect Aruba Clearpass https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-aruba-clearpass.md
To get its logs into Azure Sentinel, configure your Aruba ClearPass appliance to
- Log format – CEF - Log types – all available or all appropriate
- 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel Connect Broadcom Symantec Dlp https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-broadcom-symantec-dlp.md
To get its logs into Azure Sentinel, configure your Broadcom Symantec DLP applia
- Log format – CEF - Log types – all available or all appropriate
- 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel Connect Cef Solution Config https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-solution-config.md
If a connector does not exist for your specific security solution, use the follo
1. To search for CEF events in Log Analytics, enter `CommonSecurityLog` in the query window.
-1. Continue to STEP 3: [validate connectivity](connect-cef-verify.md).
+1. Continue to STEP 3: [validate connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
> [!NOTE] > **Changing the source of the TimeGenerated field**
sentinel Connect Cef Verify https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cef-verify.md
- Title: Validate connectivity to Azure Sentinel | Microsoft Docs
-description: Validate connectivity of your security solution to make sure CEF messages are being forwarded to Azure Sentinel.
-------- Previously updated : 01/05/2021---
-# STEP 3: Validate connectivity
-
-Once you have deployed your log forwarder (in Step 1) and configured your security solution to send it CEF messages (in Step 2), follow these instructions to verify connectivity between your security solution and Azure Sentinel.
-
-## Prerequisites
--- You must have elevated permissions (sudo) on your log forwarder machine.--- You must have **python 2.7** or **3** installed on your log forwarder machine.<br>
Use the `python --version` command to check.
--- You may need the Workspace ID and Workspace Primary Key at some point in this process. You can find them in the workspace resource, under **Agents management**.-
-## How to validate connectivity
-
-1. From the Azure Sentinel navigation menu, open **Logs**. Run a query using the **CommonSecurityLog** schema to see if you are receiving logs from your security solution.<br>
-Be aware that it may take about 20 minutes until your logs start to appear in **Log Analytics**.
-
-1. If you don't see any results from the query, verify that events are being generated from your security solution, or try generating some, and verify they are being forwarded to the Syslog forwarder machine you designated.
-
-1. Run the following script on the log forwarder (applying the Workspace ID in place of the placeholder) to check connectivity between your security solution, the log forwarder, and Azure Sentinel. This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity. <br>
-
- ```bash
- sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID]
- ```
-
- - You may get a message directing you to run a command to correct an issue with the **mapping of the *Computer* field**. See the [explanation in the validation script](#mapping-command) for details.
-
- - You may get a message directing you to run a command to correct an issue with the **parsing of Cisco ASA firewall logs**. See the [explanation in the validation script](#parsing-command) for details.
-
-## Validation script explained
-
-The validation script performs the following checks:
-
-# [rsyslog daemon](#tab/rsyslog)
-
-1. Checks that the file<br>
- `/etc/opt/microsoft/omsagent/[WorkspaceID]/conf/omsagent.d/security_events.conf`<br>
- exists and is valid.
-
-1. Checks that the file includes the following text:
-
- ```bash
- <source>
- type syslog
- port 25226
- bind 127.0.0.1
- protocol_type tcp
- tag oms.security
- format /(?<time>(?:\w+ +){2,3}(?:\d+:){2}\d+|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[\w\-\:\+]{3,12}):?\s*(?:(?<host>[^: ]+) ?:?)?\s*(?<ident>.*CEF.+?(?=0\|)|%ASA[0-9\-]{8,10})\s*:?(?<message>0\|.*|.*)/
- <parse>
- message_format auto
- </parse>
- </source>
-
- <filter oms.security.**>
- type filter_syslog_security
- </filter>
- ```
-
-1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
-
- ```bash
- grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb
- ```
-
- - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
-
- ```bash
- # Cisco ASA parsing fix
- sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
-
- ```bash
- grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
- ```
-
- - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
-
- ```bash
- # Computer field mapping fix
- sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks if there are any security enhancements on the machine that might be blocking network traffic (such as a host firewall).
-
-1. Checks that the syslog daemon (rsyslog) is properly configured to send messages (that it identifies as CEF) to the Log Analytics agent on TCP port 25226:
-
- - Configuration file: `/etc/rsyslog.d/security-config-omsagent.conf`
-
- ```bash
- if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
- ```
-
-1. Restarts the syslog daemon and the Log Analytics agent:
-
- ```bash
- service rsyslog restart
-
- /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the necessary connections are established: tcp 514 for receiving data, tcp 25226 for internal communication between the syslog daemon and the Log Analytics agent:
-
- ```bash
- netstat -an | grep 514
-
- netstat -an | grep 25226
- ```
-
-1. Checks that the syslog daemon is receiving data on port 514, and that the agent is receiving data on port 25226:
-
- ```bash
- sudo tcpdump -A -ni any port 514 -vv
-
- sudo tcpdump -A -ni any port 25226 -vv
- ```
-
-1. Sends MOCK data to port 514 on localhost. This data should be observable in the Azure Sentinel workspace by running the following query:
-
- ```kusto
- CommonSecurityLog
- | where DeviceProduct == "MOCK"
- ```
-
-# [syslog-ng daemon](#tab/syslogng)
-
-1. Checks that the file<br>
- `/etc/opt/microsoft/omsagent/[WorkspaceID]/conf/omsagent.d/security_events.conf`<br>
- exists and is valid.
-
-1. Checks that the file includes the following text:
-
- ```bash
- <source>
- type syslog
- port 25226
- bind 127.0.0.1
- protocol_type tcp
- tag oms.security
- format /(?<time>(?:\w+ +){2,3}(?:\d+:){2}\d+|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[\w\-\:\+]{3,12}):?\s*(?:(?<host>[^: ]+) ?:?)?\s*(?<ident>.*CEF.+?(?=0\|)|%ASA[0-9\-]{8,10})\s*:?(?<message>0\|.*|.*)/
- <parse>
- message_format auto
- </parse>
- </source>
-
- <filter oms.security.**>
- type filter_syslog_security
- </filter>
- ```
-
-1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
-
- ```bash
- grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb
- ```
-
- - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
-
- ```bash
- # Cisco ASA parsing fix
- sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
-
- ```bash
- grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
- ```
-
- - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
-
- ```bash
- # Computer field mapping fix
- sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks if there are any security enhancements on the machine that might be blocking network traffic (such as a host firewall).
-
-1. Checks that the syslog daemon (syslog-ng) is properly configured to send messages that it identifies as CEF (using a regex) to the Log Analytics agent on TCP port 25226:
-
- - Configuration file: `/etc/syslog-ng/conf.d/security-config-omsagent.conf`
-
- ```bash
- filter f_oms_filter {match(\"CEF\|ASA\" ) ;};destination oms_destination {tcp(\"127.0.0.1\" port(25226));};
- log {source(s_src);filter(f_oms_filter);destination(oms_destination);};
- ```
-
-1. Restarts the syslog daemon and the Log Analytics agent:
-
- ```bash
- service syslog-ng restart
-
- /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
- ```
-
-1. Checks that the necessary connections are established: tcp 514 for receiving data, tcp 25226 for internal communication between the syslog daemon and the Log Analytics agent:
-
- ```bash
- netstat -an | grep 514
-
- netstat -an | grep 25226
- ```
-
-1. Checks that the syslog daemon is receiving data on port 514, and that the agent is receiving data on port 25226:
-
- ```bash
- sudo tcpdump -A -ni any port 514 -vv
-
- sudo tcpdump -A -ni any port 25226 -vv
- ```
-
-1. Sends MOCK data to port 514 on localhost. This data should be observable in the Azure Sentinel workspace by running the following query:
-
- ```kusto
- CommonSecurityLog
- | where DeviceProduct == "MOCK"
- ```
---
-## Next steps
-
-In this document, you learned how to connect CEF appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:
--- Learn about [CEF and CommonSecurityLog field mapping](cef-name-mapping.md).-- Learn how to [get visibility into your data, and potential threats](get-visibility.md).-- Get started [detecting threats with Azure Sentinel](./detect-threats-built-in.md).-- [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Checkpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-checkpoint.md
Configure your Check Point appliance to forward Syslog messages in CEF format to
- Replace the **name** and **target-server IP address** in the CLI with the Syslog agent name and IP address. - Set the format to **CEF**. 1. If you are using version R77.30 or R80.10, scroll up to **Installations** and follow the instructions to install a Log Exporter for your version.
-1. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+1. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
## Next steps In this document, you learned how to connect Check Point appliances to Azure Sentinel. To learn more about Azure Sentinel, see the following articles:-- [Validate connectivity](connect-cef-verify.md).
+- [Validate connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
- Get started [detecting threats with Azure Sentinel](detect-threats-built-in.md). - [Use workbooks](monitor-your-data.md) to monitor your data.
sentinel Connect Cisco https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cisco.md
Cisco ASA doesn't support CEF, so the logs are sent as Syslog and the Azure Sent
1. To use the relevant schema in Log Analytics for the Cisco events, search for `CommonSecurityLog`.
-1. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+1. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
sentinel Connect Citrix Waf https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-citrix-waf.md
Citrix WAF sends Syslog messages in CEF format to a Linux-based log forwarding s
1. Follow Citrix's supplied instructions to [configure the WAF](https://support.citrix.com/article/CTX234174), [configure CEF logging](https://support.citrix.com/article/CTX136146), and [configure sending the logs to your log forwarder](https://docs.citrix.com/en-us/citrix-adc/13/system/audit-logging/configuring-audit-logging.html). Make sure you send the logs to TCP port 514 on the log forwarder machine's IP address.
-1. Validate your connection and verify data ingestion using [these instructions](connect-cef-verify.md). It may take up to 20 minutes until your logs start to appear in Log Analytics.
+1. Validate your connection and verify data ingestion using [these instructions](troubleshooting-cef-syslog.md#validate-cef-connectivity). It may take up to 20 minutes until your logs start to appear in Log Analytics.
## Find your data
sentinel Connect Common Event Format https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-common-event-format.md
When you connect an external solution that sends CEF messages, there are three s
STEP 1: [Connect CEF by deploying a Syslog/CEF forwarder](connect-cef-agent.md) STEP 2: [Perform solution-specific steps](connect-cef-solution-config.md)
-STEP 3: [Verify connectivity](connect-cef-verify.md)
+STEP 3: [Verify connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity)
This article describes how the connection works, lists prerequisites, and shows the steps for deploying a mechanism for security solutions to send Common Event Format (CEF) messages on top of Syslog.
In this document, you learned how Azure Sentinel collects CEF logs from security
- STEP 1: [Connect CEF by deploying a Syslog/CEF forwarder](connect-cef-agent.md) - STEP 2: [Perform solution-specific steps](connect-cef-solution-config.md)-- STEP 3: [Verify connectivity](connect-cef-verify.md)
+- STEP 3: [Verify connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity)
To learn more about what to do with the data you've collected in Azure Sentinel, see the following articles:
sentinel Connect Cyberark https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-cyberark.md
CyberArk EPV logs are sent from the Vault to a Linux-based log forwarding server
1. Follow the [CyberArk EPV instructions](https://docs.cyberark.com/Product-Doc/OnlineHelp/PAS/Latest/en/Content/PASIMP/DV-Integrating-with-SIEM-Applications.htm) to configure sending syslog data to the log forwarding server.
-1. Validate your connection and verify data ingestion using [these instructions](connect-cef-verify.md). It may take up to 20 minutes until your logs start to appear in Log Analytics.
+1. Validate your connection and verify data ingestion using [these instructions](troubleshooting-cef-syslog.md#validate-cef-connectivity). It may take up to 20 minutes until your logs start to appear in Log Analytics.
## Find your data
sentinel Connect F5 https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-f5.md
This article explains how to use the F5 ASM data connector to easily pull your F
1. To use the relevant schema in Log Analytics for CEF events, search for `CommonSecurityLog`.
-1. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+1. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
## Next steps
sentinel Connect Forcepoint Casb Ngfw https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-forcepoint-casb-ngfw.md
The Forcepoint data connectors allow you to connect Forcepoint Cloud Access Secu
2. Search for CommonSecurityLog to use the relevant schema in Log Analytics with DeviceVendor name contains 'Forcepoint'.
-3. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+3. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
sentinel Connect Fortinet https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-fortinet.md
Configure Fortinet to forward Syslog messages in CEF format to your Azure worksp
1. To use the relevant schema in Azure Monitor Log Analytics for the Fortinet events, search for `CommonSecurityLog`.
-1. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+1. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
## Next steps
sentinel Connect Imperva Waf Gateway https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-imperva-waf-gateway.md
To get its logs into Azure Sentinel, configure your Imperva WAF Gateway applianc
- Log format – CEF - Log types – all available
- 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel Connect Paloalto https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-paloalto.md
Configure Palo Alto Networks to forward Syslog messages in CEF format to your Az
1. To use the relevant schema in Log Analytics for the Palo Alto Networks events, search for **CommonSecurityLog**.
-1. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+1. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
sentinel Connect Thycotic Secret Server https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-thycotic-secret-server.md
To get its logs into Azure Sentinel, configure your Thycotic Secret Server to se
- Log format – CEF - Log types – all available
- 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel Connect Trend Micro Tippingpoint https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-trend-micro-tippingpoint.md
To get its logs into Azure Sentinel, configure your TippingPoint TPS solution to
- Log format – **ArcSight CEF Format v4.2** - Log types – all available
- 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. Under **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel Connect Wirex Systems https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-wirex-systems.md
To get its logs into Azure Sentinel, configure your WireX Systems NFP appliance
- Log format – CEF - Log types – all recommended by WireX
- 1. **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [STEP 3: Validate connectivity](connect-cef-verify.md) in the Azure Sentinel documentation for more detailed instructions and explanation.
+ 1. **3. Validate connection** - Verify data ingestion by copying the command on the connector page and running it on your log forwarder. See [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity) in the Azure Sentinel documentation for more detailed instructions and explanation.
It may take up to 20 minutes until your logs start to appear in Log Analytics.
sentinel Connect Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/connect-zscaler.md
This article explains how to connect your Zscaler Internet Access appliance to A
1. To use the relevant schema in Log Analytics for the CEF events, search for `CommonSecurityLog`.
-1. Continue to [STEP 3: Validate connectivity](connect-cef-verify.md).
+1. Continue to [Validate CEF connectivity](troubleshooting-cef-syslog.md#validate-cef-connectivity).
## Next steps
sentinel Quickstart Onboard https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/quickstart-onboard.md
For more information, see [Pre-deployment activities and prerequisites for deplo
### Geographical availability and data residency -- Azure Sentinel can run on workspaces in most [GA regions of Log Analytics](https://azure.microsoft.com/global-infrastructure/services/?products=monitor) except the China and Germany (Sovereign) regions. Sometimes New Log Analytics regions may take some time to onboard the Azure Sentinel service.
+- Azure Sentinel can run on workspaces in most [GA regions of Log Analytics](https://azure.microsoft.com/global-infrastructure/services/?products=monitor) except the China and Germany (Sovereign) regions. New Log Analytics regions may take some time to onboard the Azure Sentinel service.
-- Data generated by Azure Sentinel, such as incidents, bookmarks, and analytics rules, may contain some customer data sourced from the customer's Log Analytics workspaces. This Azure Sentinel-generated data is saved in the geography or region listed in the following table, according to the geography or region in which the workspace is located:
+- Data generated by Azure Sentinel, such as incidents, bookmarks, and analytics rules, may contain some customer data sourced from the customer's Log Analytics workspaces. This Azure Sentinel-generated data is saved in the geography listed in the following table, according to the geography in which the workspace is located:
- | Workspace geography/region | Azure Sentinel-generated data geography/region |
+ | Workspace geography | Azure Sentinel-generated data geography |
| | |
- | United States<br>India<br>Africa | United States |
+ | United States<br>India | United States |
| Europe<br>France | Europe | | Australia | Australia | | United Kingdom | United Kingdom | | Canada | Canada | | Japan | Japan |
- | Southeast Asia (Singapore) | Southeast Asia (Singapore)* |
- | Brazil | Brazil |
+ | Asia Pacific | Asia Pacific * |
+ | Brazil | Brazil * |
| Norway | Norway |
- | South Africa | South Africa |
+ | Africa | Africa |
| Korea | Korea | | Germany | Germany | | United Arab Emirates | United Arab Emirates | | Switzerland | Switzerland | |
- \* There is no paired region for Southeast Asia.
+ \* Single-region data residency is currently provided only in the Southeast Asia region (Singapore) of the Asia Pacific geography, and in the Brazil South (Sao Paulo State) region of the Brazil geography. For all other regions, customer data can be stored anywhere in the geography of the workspace where Sentinel is onboarded.
> [!IMPORTANT] > - By enabling certain rules that make use of the machine learning (ML) engine, **you give Microsoft permission to copy relevant ingested data outside of your Azure Sentinel workspace's geography** as may be required by the machine learning engine to process these rules.
sentinel Sap Deploy Solution https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-solution.md
To ingest SAP logs into Azure Sentinel, you must have the Azure Sentinel SAP dat
After the SAP data connector is deployed, deploy the SAP solution security content to smoothly gain insight into your organization's SAP environment and improve any related security operation capabilities.
-In this tutorial, you learn:
+In this article, you learn:
> [!div class="checklist"] > * How to prepare your SAP system for the SAP data connector deployment > * How to use a Docker container and an Azure VM to deploy the SAP data connector > * How to deploy the SAP solution security content in Azure Sentinel
+> [!NOTE]
+> Extra steps are required to deploy your SAP data connector over a secure SNC connection. For more information, see [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md).
+>
## Prerequisites In order to deploy the Azure Sentinel SAP data connector and security content as described in this tutorial, you must have the following prerequisites:
This procedure describes how to ensure that your SAP system has the correct prer
|- 700 to 702<br>- 710 to 711, 730, 731, and 740<br>- 750 to 752 | 2502336: CD (Change Document): RSSCD100 - read only from archive, not from database | | | |
- Later versions do not require the additional notes. For more information, see the [SAP support Launchpad site](https://support.sap.com/en/https://docsupdatetracker.net/index.html), logging in with a SAP user account.
    Later versions do not require the extra notes. For more information, see the [SAP support Launchpad site](https://support.sap.com/en/index.html), logging in with an SAP user account.
1. Download and install one of the following SAP change requests from the Azure Sentinel GitHub repository, at https://github.com/Azure/Azure-Sentinel/tree/master/Solutions/SAP/CR:
If you have SAP HANA database audit logs configured with Syslog, you'll need als
Learn more about the Azure Sentinel SAP solutions: -- [Expert configuration options, on-premises deployment and SAPControl log sources](sap-solution-deploy-alternate.md)
+- [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
+- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md) - [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md) - [Azure Sentinel SAP solution: built-in security content](sap-solution-security-content.md)
sentinel Sap Deploy Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-deploy-troubleshoot.md
If you want to check the SAP data connector configuration file and make manual u
## Reset the SAP data connector
-The following steps reset the connector and re-ingest SAP logs from the last 24 hours.
+The following steps reset the connector and reingest SAP logs from the last 24 hours.
1. Stop the connector. Run:
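   A minimal sketch, assuming the default Docker-based deployment in which the connector container follows the `sapcon-<SID>` naming pattern:

   ```bash
   # Stop the SAP data connector container (container name assumed to follow the sapcon-<SID> pattern)
   docker stop sapcon-<SID>
   ```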
After having deployed both the SAP data connector and security content, you may
### Corrupt or missing SAP SDK file
-This occurs when the connector fails to boot with PyRfc, or zip-related error messages are shown.
+This error may occur when the connector fails to boot with PyRfc, or zip-related error messages are shown.
1. Reinstall the SAP SDK. 1. Verify that you're the correct Linux 64-bit version. As of the current date, the release filename is: **nwrfc750P_8-70002752.zip**.
A fixed configuration is when the password is stored directly in the **systemcon
If your credentials there are incorrect, verify your credentials.
-Use base64 encryption to encrypt the user and password. You can use online encryption tools to do this, such as https://www.base64encode.org/.
+Use base64 encoding to encode the user and password. You can use an online encoding tool such as https://www.base64encode.org/, or encode the values locally as shown below.
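A minimal sketch using the standard base64 utility (the user name and password shown are placeholders):

```bash
# Encode the SAP user name and password as base64, without a trailing newline
echo -n '<sap-user>' | base64
echo -n '<sap-password>' | base64
```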
### Incorrect SAP ABAP user credentials in key vault
If you're having network connectivity issues to the SAP environment or to Azure
### Other unexpected issues
-If you have unexpected issues not listed in this article, try the following:
+If you have unexpected issues not listed in this article, try the following steps:
- [Reset the connector and reload your logs](#reset-the-sap-data-connector) - [Upgrade the connector](sap-deploy-solution.md#update-your-sap-data-connector) to the latest version.
If you have unexpected issues not listed in this article, try the following:
### Retrieving an audit log fails with warnings
-If your attempt to retrieve an audit log, without the [required change request](sap-solution-detailed-requirements.md#required-sap-log-change-requests) deployed or on an older / un-patched version, and the process fails with warnings, verify that the SAP Auditlog can be retrieved using one of the following methods:
+If you attempt to retrieve an audit log without the [required change request](sap-solution-detailed-requirements.md#required-sap-log-change-requests) deployed, or on an older or unpatched version, and the process fails with warnings, verify that the SAP audit log can be retrieved using one of the following methods:
- Using a compatibility mode called *XAL* on older versions - Using a version not recently patched
Check that the OS user is valid and can run the following command on the target
sapcontrol -nr <SID> -function GetSystemInstanceList ```
-### SAPCONTROL or JAVA subsystem fails with timezone related error message
+### SAPCONTROL or JAVA subsystem fails with timezone-related error message
If your SAPCONTROL or JAVA subsystem fails with a timezone-related error message, such as: **Please check the configuration and network access to the SAP server - 'Etc/NZST'**, make sure that you're using standard timezone codes.
For more information, see:
- [Deploy SAP continuous threat monitoring (public preview)](sap-deploy-solution.md) - [Azure Sentinel SAP solution logs reference (public preview)](sap-solution-log-reference.md)-- [Expert configuration options, on-premises deployment and SAPControl log sources](sap-solution-deploy-alternate.md)
+- [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
+- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
- [Azure Sentinel SAP solution: security content reference (public preview)](sap-solution-security-content.md) - [Azure Sentinel SAP solution detailed SAP requirements (public preview)](sap-solution-detailed-requirements.md)
sentinel Sap Solution Deploy Alternate https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-deploy-alternate.md
-# Expert configuration options, on-premises deployment and SAPControl log sources
+# Expert configuration options, on-premises deployment, and SAPControl log sources
This article describes how to deploy the Azure Sentinel SAP data connector in an expert or custom process, such as using an on-premises machine and an Azure Key Vault to store your credentials.
This article describes how to deploy the Azure Sentinel SAP data connector in an
The basic prerequisites for deploying your Azure Sentinel SAP data connector are the same regardless of your deployment method.
-Make sure that your system complies with the prerequisites documented in the main [SAP data connector deployment tutorial](sap-deploy-solution.md#prerequisites) before you start.
+Make sure that your system complies with the prerequisites documented in the main [SAP data connector deployment procedure](sap-deploy-solution.md#prerequisites) before you start.
For more information, see [Azure Sentinel SAP solution detailed SAP requirements (public preview)](sap-solution-detailed-requirements.md).
To ingest all ABAP logs into Azure Sentinel, including both NW RFC and SAP Contr
To ingest SAP Control Web Service logs into Azure Sentinel, configure the following JAVA SAP Control instance details:
-|Paramter |Description |
+|Parameter |Description |
||| |**javaappserver** |Enter your SAP Control Java server host. <br>For example: `contoso-java.server.com` | |**javainstance** |Enter your SAP Control ABAP instance number. <br>For example: `10` |
For more information, see [Deploy the SAP solution](sap-deploy-solution.md#deplo
For more information, see:
+- [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md) - [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md) - [Azure Sentinel SAP solution: security content reference](sap-solution-security-content.md)
sentinel Sap Solution Deploy Snc https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-deploy-snc.md
+
+ Title: Deploy the Azure Sentinel SAP data connector with Secure Network Communications (SNC) | Microsoft Docs
+description: Learn how to deploy the Azure Sentinel data connector for SAP environments with a secure connection via SNC, for the NetWeaver/ABAP interface based logs.
+++++ Last updated : 08/01/2021++++
+# Deploy the Azure Sentinel SAP data connector with SNC
+
+This article describes how to deploy the Azure Sentinel SAP data connector when you have a secure connection to SAP via Secure Network Communications (SNC) for the NetWeaver/ABAP interface-based logs.
+
+> [!NOTE]
+> The default and most recommended process for deploying the Azure Sentinel SAP data connector is by [using an Azure VM](sap-deploy-solution.md). This article is intended for advanced users.
+
+> [!IMPORTANT]
+> The Azure Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+
+## Prerequisites
+
+The basic prerequisites for deploying your Azure Sentinel SAP data connector are the same regardless of your deployment method.
+
+Make sure that your system complies with the prerequisites documented in the main [SAP data connector deployment procedure](sap-deploy-solution.md#prerequisites) before you start.
+
+Other prerequisites for working with SNC include:
+
+- **A secure connection to SAP with SNC**. Define the connection-specific SNC parameters in the repository constants for the AS ABAP system you're connecting to. For more information, see the relevant [SAP community wiki page](https://wiki.scn.sap.com/wiki/display/Security/Securing+Connections+to+AS+ABAP+with+SNC).
+
+- **The SAPCAR utility**, downloaded from the SAP Service Marketplace. For more information, see the [SAP Installation Guide](https://help.sap.com/viewer/d1d04c0d65964a9b91589ae7afc1bd45/5.0.4/en-US/467291d0dc104d19bba073a0380dc6b4.html)
+
+For more information, see [Azure Sentinel SAP solution detailed SAP requirements (public preview)](sap-solution-detailed-requirements.md).
+
+## Create your Azure key vault
+
+Create an Azure key vault that you can dedicate to your Azure Sentinel SAP data connector.
+
+Run the following command to create your Azure key vault and grant access to an Azure service principal:
+
+```azurecli
+kvgp=<KVResourceGroup>
+
+kvname=<keyvaultname>
+
+spname=<sp-name>
+
+# Optional when Azure MI not enabled - Create an SP user for the Azure CLI connection, and save the details for the env.list file
+az ad sp create-for-rbac --name $spname
+
+spID=$(az ad sp list --display-name $spname --query "[].appId" --output tsv)
+
+# Create the key vault
+az keyvault create \
+  --name $kvname \
+  --resource-group $kvgp
+
+# Grant the service principal access to the key vault secrets
+az keyvault set-policy --name $kvname --resource-group $kvgp --object-id $spID --secret-permissions get list set
+```
+
+For more information, see [Quickstart: Create a key vault using the Azure CLI](../key-vault/general/quick-create-cli.md).
+
+## Add Azure Key Vault secrets
+
+To add Azure Key Vault secrets, run the following script, with your own system ID and the credentials you want to add:
+
+```azurecli
+#Add Azure Log ws ID
+az keyvault secret set \
+  --name <SID>-LOG_WS_ID \
+  --value "<logwsid>" \
+  --description SECRET_AZURE_LOG_WS_ID --vault-name $kvname
+
+#Add Azure Log ws public key
+az keyvault secret set \
+  --name <SID>-LOG_WS_PUBLICKEY \
+  --value "<logwspubkey>" \
+  --description SECRET_AZURE_LOG_WS_PUBLIC_KEY --vault-name $kvname
+```
+
+For more information, see the [az keyvault secret](/cli/azure/keyvault/secret) CLI documentation.
+
+## Deploy the SAP data connector
+
+This procedure describes how to deploy the SAP data connector on a VM when connecting via SNC.
+
+We recommend that you perform this procedure after you have a [key vault](#create-your-azure-key-vault) ready with your [SAP credentials](#add-azure-key-vault-secrets).
+
+**To deploy the SAP data connector**:
+
+1. On your data connector VM, download the latest SAP NW RFC SDK from the [SAP Launchpad site](https://support.sap.com) > **SAP NW RFC SDK** > **SAP NW RFC SDK 7.50** > **nwrfc750X_X-xxxxxxx.zip**.
+
+ > [!NOTE]
+ > You'll need your SAP user sign-in information in order to access the SDK, and you must download the SDK that matches your operating system.
+ >
+ > Make sure to select the **LINUX ON X86_64** option.
+
+1. Create a new folder with a meaningful name, and copy the SDK zip file into your new folder.
+
+1. Clone the Azure Sentinel solution GitHub repo onto your data connector VM, and copy the Azure Sentinel SAP solution **systemconfig.ini** file into your new folder.
+
+ For example:
+
+ ```bash
+ mkdir /home/$(pwd)/sapcon/<sap-sid>/
+ cd /home/$(pwd)/sapcon/<sap-sid>/
+ wget https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/Solutions/SAP/template/systemconfig.ini
+    cp <nwrfc750X_X-xxxxxxx.zip> /home/$(pwd)/sapcon/<sap-sid>/
+ ```
+
+1. Edit the **systemconfig.ini** file as needed, using the embedded comments as a guide.
+
+ You'll need to edit all configurations except for the key vault secrets. For more information, see [Manually configure the SAP data connector](sap-solution-deploy-alternate.md#manually-configure-the-sap-data-connector).
+
+1. Define the logs that you want to ingest into Azure Sentinel using the instructions in the **systemconfig.ini** file.
+
+ For example, see [Define the SAP logs that are sent to Azure Sentinel](sap-solution-deploy-alternate.md#define-the-sap-logs-that-are-sent-to-azure-sentinel).
+
+ > [!NOTE]
+    > Only logs retrieved via the NetWeaver/ABAP interface are relevant for SNC communications. SAP Control and HANA logs are out of scope for SNC.
+ >
+
+1. Define the following configurations using the instructions in the **systemconfig.ini** file:
+
+ - Whether to include user email addresses in audit logs
+ - Whether to retry failed API calls
+ - Whether to include cexal audit logs
+ - Whether to wait an interval of time between data extractions, especially for large extractions
+
+ For more information, see [SAL logs connector configurations](sap-solution-deploy-alternate.md#sal-logs-connector-settings).
+
+1. Save your updated **systemconfig.ini** file in the **sapcon** directory on your VM.
+
+1. Download and run the pre-defined Docker image with the SAP data connector installed. Run:
+
+ ```bash
+    docker pull mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest-preview
+    docker create -v $(pwd):/sapcon-app/sapcon/config/system -v /home/azureuser/sap/sec:/sapcon-app/sec --env SECUDIR=/sapcon-app/sec --name sapcon-snc mcr.microsoft.com/azure-sentinel/solutions/sapcon:latest-preview
+ ```
++
+## Post-deployment SAP system procedures
+
+After deploying your SAP data connector, perform the following SAP system procedures:
+
+1. Download the SAP Cryptographic Library from the [SAP Service Marketplace](https://launchpad.support.sap.com/#/) > **Software Downloads** > **Browse our Download Catalog** > **SAP Cryptographic Software**.
+
+ For more information, see the [SAP Installation Guide](https://help.sap.com/viewer/d1d04c0d65964a9b91589ae7afc1bd45/5.0.4/en-US/86921b29cac044d68d30e7b125846860.html).
+
+1. Use the SAPCAR utility to extract the library files, and deploy them to your SAP data connector VM, in the `<sec>` directory.
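+
+    For example, a minimal sketch of extracting the library files with SAPCAR; the archive file name is a placeholder for the version you downloaded:
+
+    ```bash
+    # Extract the SAP Cryptographic Library archive into the current directory
+    ./SAPCAR -xvf SAPCRYPTOLIBP_<version>.SAR
+    ```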
+
+1. Verify that you have permissions to run the library files.
+
+1. Define an environment variable named **SECUDIR**, with a value of the full path to the `<sec>` directory.
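+
+    For example, assuming the library files were extracted to `/home/azureuser/sec` (an example path, not a required location):
+
+    ```bash
+    # Point SECUDIR at the directory that holds the extracted library files
+    export SECUDIR=/home/azureuser/sec
+    ```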
+
+1. Create a personal security environment (PSE). The **sapgenpse** command-line tool is available in the `<sec>` directory on your SAP data connector VM.
+
+ For example:
+
+ ```bash
+ ./sapgenpse get_pse -p my_pse.pse -noreq -x my_pin "CN=sapcon.com, O=my_company, C=IL"
+ ```
+
+ For more information, see [Creating a Personal Security Environment](https://help.sap.com/viewer/4773a9ae1296411a9d5c24873a8d418c/8.0/en-US/285bb1fda3fa472c8d9205bae17a6f95.html) in the SAP documentation.
+
+1. Create credentials for your PSE. For example:
+
+ ```bash
+ ./sapgenpse seclogin -p my_pse.pse -x my_pin -O MXDispatcher_Service_User
+ ```
+
+ For more information, see [Creating Credentials](https://help.sap.com/viewer/4773a9ae1296411a9d5c24873a8d418c/8.0/en-US/d8b50371667740e797e6c9f0e9b7141f.html) in the SAP documentation.
+
+1. Exchange the Public-Key certificates between the Identity Center and the AS ABAP's SNC PSE.
+
+ For example, to export the Identity Center's Public-Key certificate, run:
+
+ ```bash
+ ./sapgenpse export_own_cert -o my_cert.crt -p my_pse.pse -x abcpin
+ ```
+
+ Import the certificate to the AS ABAP's SNC PSE, export it from the PSE, and then import it back to the Identity Center.
+
+ For example, to import the certificate to the Identity Center, run:
+
+ ```bash
+ ./sapgenpse maintain_pk -a full_path/my_secure_dir/my_exported_cert.crt -p my_pse.pse -x my_pin
+ ```
+
+ For more information, see [Exchanging the Public-Key Certificates](https://help.sap.com/viewer/4773a9ae1296411a9d5c24873a8d418c/8.0/en-US/7bbf90b29c694e6080e968559170fbcd.html) in the SAP documentation.
++
+## Edit the SAP data connector configuration
+
+1. On your SAP data connector VM, navigate to the **systemconfig.ini** file and define the following parameters with the relevant values:
+
+ ```ini
+ [Secrets Source]
+ secrets = AZURE_KEY_VAULT
+ ```
+
+1. In your [Azure key vault](#create-your-azure-key-vault), generate the following secrets:
+
+ - `<Interprefix>-ABAPSNCPARTNERNAME`, where the value is the `<Relevant DN details>`
+ - `<Interprefix>-ABAPSNCLIB`, where the value is the `<lib_Path>`
+    - `<Interprefix>-ABAPX509CERT`, where the value is the `<Certificate_Code>`
+
+ For example:
+
+ ```ini
+ S4H-ABAPSNCPARTNERNAME = 'p:CN=help.sap.com, O=SAP_SE, C=IL' (Relevant DN)
+ S4H-ABAPSNCLIB = 'home/user/sec-dir' (Relevant directory)
+ S4H-ABAPX509CERT = 'MIIDJjCCAtCgAwIBAgIBNzA ... NgalgcTJf3iUjZ1e5Iv5PLKO' (Relevant certificate code)
+ ```
+
+ > [!NOTE]
+ > By default, the `<Interprefix>` value is your SID, such as `A4H-<ABAPSNCPARTNERNAME>`.
+ >
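+
+    As a sketch, you can create these secrets with the Azure CLI. The `S4H` prefix and the values shown are placeholders for your own environment, and `$kvname` is the key vault name defined earlier in this article:
+
+    ```azurecli
+    az keyvault secret set --vault-name $kvname --name S4H-ABAPSNCPARTNERNAME --value "p:CN=help.sap.com, O=SAP_SE, C=IL"
+    az keyvault secret set --vault-name $kvname --name S4H-ABAPSNCLIB --value "/sapcon-app/sec/libsapcrypto.so"
+    az keyvault secret set --vault-name $kvname --name S4H-ABAPX509CERT --value "<Certificate_Code>"
+    ```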
+
+If you're entering secrets directly to the configuration file, define the parameters as follows:
+
+```ini
+[Secrets Source]
+secrets = DOCKER_FIXED
+[ABAP Central Instance]
+snc_partnername = <Relevant_DN_Details>
+snc_lib = <lib_Path>
+x509cert = <Certificate_Code>
+# For example:
+snc_partnername = p:CN=help.sap.com, O=SAP_SE, C=IL (Relevant DN)
+snc_lib = /sapcon-app/sec/libsapcrypto.so (Relevant directory)
+x509cert = MIIDJjCCAtCgAwIBAgIBNzA ... NgalgcTJf3iUjZ1e5Iv5PLKO (Relevant certificate code)
+```
+
+### Attach the SNC parameters to your user
+
+1. On your SAP data connector VM, call transaction `SM30` and choose to maintain the `USRACLEXT` table.
+
+1. Add a new entry. In the **User** field, enter the communication user that's used to connect to the ABAP system.
+
+1. Enter the SNC name when prompted. The SNC name is the unique, distinguished name provided when you created the Identity Manager PSE. For example: `CN=IDM, OU=SAP, C=DE`
+
+ Make sure to add a `p` before the SNC name. For example: `p:CN=IDM, OU=SAP, C=DE`.
+
+1. Select **Save**.
+
+SNC is enabled on your data connector VM.
+
+## Activate the SAP data connector
+
+This procedure describes how to activate the SAP data connector over the secure SNC connection that you created in the earlier procedures in this article.
+
+1. Activate the docker image:
+
+ ```bash
+ docker start sapcon-<SID>
+ ```
+
+1. Check the connection. Run:
+
+ ```bash
+ docker logs sapcon-<SID>
+ ```
+
+1. If the connection fails, use the logs to understand the issue.
+
+ If you need to, disable the docker image:
+
+ ```bash
+ docker stop sapcon-<SID>
+ ```
+
+For example, issues may occur because of a misconfiguration in the **systemconfig.ini** file or in your Azure key vault, or because some of the steps for creating a secure connection via SNC weren't run correctly.
+
+If so, repeat the steps above to configure a secure connection via SNC. For more information, see [Troubleshooting your Azure Sentinel SAP solution deployment](sap-deploy-troubleshoot.md).
+
+## Next steps
+
+After your SAP data connector is activated, continue by deploying the **Azure Sentinel - Continuous Threat Monitoring for SAP** solution. For more information, see [Deploy SAP security content](sap-deploy-solution.md#deploy-sap-security-content).
+
+Deploying the solution enables the SAP data connector to display in Azure Sentinel and deploys the SAP workbook and analytics rules. When you're done, manually add and customize your SAP watchlists.
+
+For more information, see:
+
+- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)
+- [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md)
+- [Azure Sentinel SAP solution: security content reference](sap-solution-security-content.md)
+
sentinel Sap Solution Detailed Requirements https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-detailed-requirements.md
Use this article as a reference if you're an admin, or if you're [deploying the
> The Azure Sentinel SAP solution is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. >
+> [!NOTE]
+> Additional requirements are listed if you're deploying your SAP data connector using a secure SNC connection. For more information, see [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md).
+>
## Recommended virtual machine sizing The following table describes the recommended sizing for your virtual machine, depending on your intended usage:
The following table describes the recommended sizing for your virtual machine, d
|Usage |Recommended sizing | ||| |**Minimum specification**, such as for a lab environment | A *Standard_B2s* VM |
-|**Standard connector** (default) | A *DS2_v2* VM, with: <br>- 2 cores<br>- 8 GB memory |
-|**Multiple connectors** |A *Standard_B4ms* VM, with: <br>- 4 cores<br>- 16 GB memory |
+|**Standard connector** (default) | A *DS2_v2* VM, with: <br>- 2 cores<br>- 8-GB memory |
+|**Multiple connectors** |A *Standard_B4ms* VM, with: <br>- 4 cores<br>- 16-GB memory |
| | | ## Required SAP log change requests
Required authorizations are listed by log type. You only need the authorizations
For more information, see: - [Deploy the Azure Sentinel solution for SAP](sap-deploy-solution.md)-- [Expert configuration options, on-premises deployment and SAPControl log sources](sap-solution-deploy-alternate.md)
+- [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
+- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
- [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md) - [Azure Sentinel SAP solution: available security content](sap-solution-security-content.md) - [Troubleshooting your Azure Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Sap Solution Log Reference https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-log-reference.md
For more information, see:
- [Deploy the Azure Sentinel solution for SAP](sap-deploy-solution.md) - [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md)-- [Expert configuration options, on-premises deployment and SAPControl log sources](sap-solution-deploy-alternate.md)
+- [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
+- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
- [Azure Sentinel SAP solution: built-in security content](sap-solution-security-content.md) - [Troubleshooting your Azure Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Sap Solution Security Content https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/sap-solution-security-content.md
Use the following built-in workbooks to visualize and monitor data ingested via
||| | |<a name="sapsystem-applications-and-products-workbook"></a>**SAP - Audit Log Browser** |Displays data such as: <br><br>General system health, including user sign-ins over time, events ingested by the system, message classes and IDs, and ABAP programs run <br><br>Severities of events occurring in your system <br><br>Authentication and authorization events occurring in your system |Uses data from the following log: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) | |**SAP - Suspicious Privileges Operations** | Displays data such as: <br><br>Sensitive and critical assignments <br><br>Actions and changes made to sensitive, privileged users <br><br>Changes made to roles |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPChangeDocsLog_CL](sap-solution-log-reference.md#abap-change-documents-log) |
-|**SAP - Initial Access & Attempts to Bypass SAP Security Mechanisms** | Displays data such as: <br><br>Executions of sensitive programs, code and function modules <br><br>Configuration changes, including log deactivations <br><br>Changes made in debug mode |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log)<br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
+|**SAP - Initial Access & Attempts to Bypass SAP Security Mechanisms** | Displays data such as: <br><br>Executions of sensitive programs, code, and function modules <br><br>Configuration changes, including log deactivations <br><br>Changes made in debug mode |Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log)<br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) |
|**SAP - Persistency & Data Exfiltration** | Displays data such as: <br><br>Internet Communication Framework (ICF) services, including activations and deactivations and data about new services and service handlers <br><br> Insecure operations, including both function modules and programs <br><br>Direct access to sensitive tables | Uses data from the following logs: <br><br>[ABAPAuditLog_CL](sap-solution-log-reference.md#abap-security-audit-log) <br><br>[ABAPTableDataLog_CL](sap-solution-log-reference.md#abap-db-table-data-log)<br><br>[ABAPSpoolLog_CL](sap-solution-log-reference.md#abap-spool-log)<br><br>[ABAPSpoolOutputLog_CL](sap-solution-log-reference.md#apab-spool-output-log)<br><br>[Syslog](sap-solution-log-reference.md#abap-syslog) | | | | |
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - Medium - Brute force attacks** | Identifies brute force attacks on the SAP system, according to failed sign-in attempts for the backend system. | Attempt to sign in from the same IP address to several systems/clients within the scheduled time interval. <br><br>**Data sources**: SAPcon - Audit Log | Credential Access | |**SAP - Medium - Multiple Logons from the same IP** | Identifies the sign-in of several users from same IP address within a scheduled time interval. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Sign in using several users through the same IP address. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access | |**SAP - Medium - Multiple Logons by User** | Identifies sign-ins of the same user from several terminals within scheduled time interval. <br><br>Available only via the Audit SAL method, for SAP versions 7.5 and higher. | Sign in using the same user, using different IP addresses. <br><br>**Data sources**: SAPcon - Audit Log | PreAttack, Credential Access, Initial Access, Collection <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) |
-|**SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement a SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
+|**SAP - Informational - Lifecycle - SAP Notes were implemented in system** | Identifies SAP Note implementation in the system. | Implement an SAP Note using SNOTE/TCI. <br><br>**Data sources**: SAPcon - Change Requests | - |
| | | | | ### Built-in SAP analytics rules for data exfiltration
The following tables list the built-in [analytics rules](sap-deploy-solution.md#
|**SAP - High - Sensitive privileged user logged in** | Identifies the Dialog sign-in of a sensitive privileged user. <br><br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist. | Sign in to the backend system using `SAP*` or another privileged user. <br><br>**Data sources**: SAPcon - Audit Log | Initial Access, Credential Access | | **SAP - High - Sensitive privileged user makes a change in other user** | Identifies changes of sensitive, privileged users in other users. | Change user details / authorizations using SU01. <br><br>**Data Sources**: SAPcon - Audit Log | Privilege Escalation, Credential Access | |**SAP - High - Sensitive Users Password Change and Login** | Identifies password changes for privileged users. | Change the password for a privileged user and sign into the system. <br>Maintain privileged users in the [SAP - Privileged Users](#users) watchlist.<br><br>**Data sources**: SAPcon - Audit Log | Impact, Command and Control, Privilege Escalation |
-|**SAP - High - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in, using the newly-created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
+|**SAP - High - User Creates and uses new user** | Identifies a user creating and using other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Create a user using SU01, and then sign in, using the newly created user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log | Discovery, PreAttack, Initial Access |
|**SAP - High - User Unlocks and uses other users** |Identifies a user being unlocked and used by other users. <br><br>**Sub-use case**: [Persistency](#built-in-sap-analytics-rules-for-persistency) | Unlock a user using SU01, and then sign in using the unlocked user and the same IP address.<br><br>**Data sources**: SAPcon - Audit Log, SAPcon - Change Documents Log | Discovery, PreAttack, Initial Access, Lateral Movement | |**SAP - Medium - Assignment of a sensitive profile** | Identifies new assignments of a sensitive profile to a user. <br><br>Maintain sensitive profiles in the [SAP - Sensitive Profiles](#profiles) watchlist. | Assign a profile to a user using `SU01`. <br><br>**Data sources**: SAPcon - Change Documents Log | Privilege Escalation | |**SAP - Medium - Assignment of a sensitive role** | Identifies new assignments for a sensitive role to a user. <br><br>Maintain sensitive roles in the [SAP - Sensitive Roles](#roles) watchlist.| Assign a role to a user using `SU01` / `PFCG`. <br><br>**Data sources**: SAPcon - Change Documents Log, Audit Log | Privilege Escalation |
For more information, see:
- [Deploy the Azure Sentinel solution for SAP](sap-deploy-solution.md) - [Azure Sentinel SAP solution logs reference](sap-solution-log-reference.md)-- [Expert configuration options, on-premises deployment and SAPControl log sources](sap-solution-deploy-alternate.md)
+- [Deploy the Azure Sentinel SAP data connector with SNC](sap-solution-deploy-snc.md)
+- [Expert configuration options, on-premises deployment, and SAPControl log sources](sap-solution-deploy-alternate.md)
- [Azure Sentinel SAP solution detailed SAP requirements](sap-solution-detailed-requirements.md) - [Troubleshooting your Azure Sentinel SAP solution deployment](sap-deploy-troubleshoot.md)
sentinel Troubleshooting Cef Syslog https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/sentinel/troubleshooting-cef-syslog.md
+
+ Title: Troubleshoot a connection between Azure Sentinel and a CEF or Syslog data connector| Microsoft Docs
+description: Learn how to troubleshoot issues with your Azure Sentinel CEF or Syslog data connector.
+
+documentationcenter: na
++
+editor: ''
+++
+ms.devlang: na
+
+ na
+ Last updated : 08/23/2021++++
+# Troubleshoot your CEF or Syslog data connector
+
+This article describes common methods for verifying and troubleshooting a CEF or Syslog data connector for Azure Sentinel.
+
+For example, if your logs are not appearing in Azure Sentinel, either in the Syslog or the Common Security Log tables, your data source may be failing to connect or there may be another reason your data is not being ingested.
+
+Other symptoms of a failed connector deployment include a missing **security_events.conf** or **security-omsagent.config.conf** file, or an rsyslog server that isn't listening on port 514.
+
+For more information, see [Connect your external solution using Common Event Format](connect-common-event-format.md) and [Collect data from Linux-based sources using Syslog](connect-syslog.md).
+
+> [!NOTE]
+> The Log Analytics agent for Windows is often referred to as the *Microsoft Monitoring Agent (MMA)*. The Log Analytics agent for Linux is often referred to as the *OMS agent*.
+>
+
+> [!TIP]
+> When troubleshooting, we recommend that you work through the steps in this article in the order they're presented to check and resolve issues in your Syslog Collector, operating system, or OMS agent.
+>
+> If you've deployed your connector using a method different than the documented procedure and are having issues, we recommend that you purge the deployment and install again as documented.
+>
+
+## Validate CEF connectivity
+
+After you've [deployed your log forwarder](connect-common-event-format.md) and [configured your security solution to send it CEF messages](connect-cef-solution-config.md), use the steps in this section to verify connectivity between your security solution and Azure Sentinel.
+
+1. Make sure that you have the following prerequisites:
+
+ - You must have elevated permissions (sudo) on your log forwarder machine.
+
+   - You must have **python 2.7** or **3** installed on your log forwarder machine. Use the `python --version` command to check.
+
+ - You may need the Workspace ID and Workspace Primary Key at some point in this process. You can find them in the workspace resource, under **Agents management**.
+
+1. From the Azure Sentinel navigation menu, open **Logs**. Run a query using the **CommonSecurityLog** schema to see if you are receiving logs from your security solution.
+
+ It may take about 20 minutes until your logs start to appear in **Log Analytics**.
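+
+    For example, a minimal query to confirm that CEF records are arriving; the 24-hour window is just an example and can be adjusted:
+
+    ```kusto
+    CommonSecurityLog
+    | where TimeGenerated > ago(24h)
+    | summarize count() by DeviceVendor, DeviceProduct
+    ```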
+
+1. If you don't see any results from the query, verify that events are being generated from your security solution, or try generating some, and verify they are being forwarded to the Syslog forwarder machine you designated.
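+
+    For example, you can generate a sample CEF message on the log forwarder itself with the Linux `logger` utility; the vendor, product, and field values below are placeholders:
+
+    ```bash
+    # Send a sample CEF message to the local syslog daemon so it is forwarded to the agent
+    logger -t CEF "CEF:0|Contoso|TestProduct|1.0|100|Connectivity test|5|src=10.0.0.1 dst=10.0.0.2"
+    ```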
+
+1. Run the following script on the log forwarder (applying the Workspace ID in place of the placeholder) to check connectivity between your security solution, the log forwarder, and Azure Sentinel. This script checks that the daemon is listening on the correct ports, that the forwarding is properly configured, and that nothing is blocking communication between the daemon and the Log Analytics agent. It also sends mock messages 'TestCommonEventFormat' to check end-to-end connectivity. <br>
+
+ ```bash
+ sudo wget -O cef_troubleshoot.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_troubleshoot.py&&sudo python cef_troubleshoot.py [WorkspaceID]
+ ```
+
+ - You may get a message directing you to run a command to correct an issue with the **mapping of the *Computer* field**. See the [explanation in the validation script](#mapping-command) for details.
+
+ - You may get a message directing you to run a command to correct an issue with the **parsing of Cisco ASA firewall logs**. See the [explanation in the validation script](#parsing-command) for details.
+
+### CEF validation script explained
+
+The validation script performs the following checks:
+
+# [rsyslog daemon](#tab/rsyslog)
+
+1. Checks that the file<br>
+ `/etc/opt/microsoft/omsagent/[WorkspaceID]/conf/omsagent.d/security_events.conf`<br>
+ exists and is valid.
+
+1. Checks that the file includes the following text:
+
+ ```bash
+ <source>
+ type syslog
+ port 25226
+ bind 127.0.0.1
+ protocol_type tcp
+ tag oms.security
+ format /(?<time>(?:\w+ +){2,3}(?:\d+:){2}\d+|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[\w\-\:\+]{3,12}):?\s*(?:(?<host>[^: ]+) ?:?)?\s*(?<ident>.*CEF.+?(?=0\|)|%ASA[0-9\-]{8,10})\s*:?(?<message>0\|.*|.*)/
+ <parse>
+ message_format auto
+ </parse>
+ </source>
+
+ <filter oms.security.**>
+ type filter_syslog_security
+ </filter>
+ ```
+
+1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
+
+ ```bash
+ grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb
+ ```
+
+ - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
+
+ ```bash
+ # Cisco ASA parsing fix
+ sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
+ ```
+
+1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
+
+ ```bash
+ grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
+ ```
+
+ - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
+
+ ```bash
+ # Computer field mapping fix
+ sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
+ ```
+
+1. Checks if there are any security enhancements on the machine that might be blocking network traffic (such as a host firewall).
+
+1. Checks that the syslog daemon (rsyslog) is properly configured to send messages (that it identifies as CEF) to the Log Analytics agent on TCP port 25226:
+
+ - Configuration file: `/etc/rsyslog.d/security-config-omsagent.conf`
+
+ ```bash
+ if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
+ ```
+
+1. Restarts the syslog daemon and the Log Analytics agent:
+
+ ```bash
+ service rsyslog restart
+
+ /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
+ ```
+
+1. Checks that the necessary connections are established: tcp 514 for receiving data, tcp 25226 for internal communication between the syslog daemon and the Log Analytics agent:
+
+ ```bash
+ netstat -an | grep 514
+
+ netstat -an | grep 25226
+ ```
+
+1. Checks that the syslog daemon is receiving data on port 514, and that the agent is receiving data on port 25226:
+
+ ```bash
+ sudo tcpdump -A -ni any port 514 -vv
+
+ sudo tcpdump -A -ni any port 25226 -vv
+ ```
+
+1. Sends MOCK data to port 514 on localhost. This data should be observable in the Azure Sentinel workspace by running the following query:
+
+ ```kusto
+ CommonSecurityLog
+ | where DeviceProduct == "MOCK"
+ ```
+
+# [syslog-ng daemon](#tab/syslogng)
+
+1. Checks that the file<br>
+ `/etc/opt/microsoft/omsagent/[WorkspaceID]/conf/omsagent.d/security_events.conf`<br>
+ exists and is valid.
+
+1. Checks that the file includes the following text:
+
+ ```bash
+ <source>
+ type syslog
+ port 25226
+ bind 127.0.0.1
+ protocol_type tcp
+ tag oms.security
+ format /(?<time>(?:\w+ +){2,3}(?:\d+:){2}\d+|\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.[\w\-\:\+]{3,12}):?\s*(?:(?<host>[^: ]+) ?:?)?\s*(?<ident>.*CEF.+?(?=0\|)|%ASA[0-9\-]{8,10})\s*:?(?<message>0\|.*|.*)/
+ <parse>
+ message_format auto
+ </parse>
+ </source>
+
+ <filter oms.security.**>
+ type filter_syslog_security
+ </filter>
+ ```
+
+1. Checks that the parsing for Cisco ASA Firewall events is configured as expected, using the following command:
+
+ ```bash
+ grep -i "return ident if ident.include?('%ASA')" /opt/microsoft/omsagent/plugin/security_lib.rb
+ ```
+
+ - <a name="parsing-command"></a>If there is an issue with the parsing, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct parsing and restart the agent.
+
+ ```bash
+ # Cisco ASA parsing fix
+ sed -i "s|return '%ASA' if ident.include?('%ASA')|return ident if ident.include?('%ASA')|g" /opt/microsoft/omsagent/plugin/security_lib.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
+ ```
+
+1. Checks that the *Computer* field in the syslog source is properly mapped in the Log Analytics agent, using the following command:
+
+ ```bash
+ grep -i "'Host' => record\['host'\]" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb
+ ```
+
+ - <a name="mapping-command"></a>If there is an issue with the mapping, the script will produce an error message directing you to **manually run the following command** (applying the Workspace ID in place of the placeholder). The command will ensure the correct mapping and restart the agent.
+
+ ```bash
+ # Computer field mapping fix
+ sed -i -e "/'Severity' => tags\[tags.size - 1\]/ a \ \t 'Host' => record['host']" -e "s/'Severity' => tags\[tags.size - 1\]/&,/" /opt/microsoft/omsagent/plugin/filter_syslog_security.rb && sudo /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
+ ```
+
+1. Checks if there are any security enhancements on the machine that might be blocking network traffic (such as a host firewall).
+
+1. Checks that the syslog daemon (syslog-ng) is properly configured to send messages that it identifies as CEF (using a regex) to the Log Analytics agent on TCP port 25226:
+
+ - Configuration file: `/etc/syslog-ng/conf.d/security-config-omsagent.conf`
+
+ ```bash
+ filter f_oms_filter {match(\"CEF\|ASA\" ) ;};destination oms_destination {tcp(\"127.0.0.1\" port(25226));};
+ log {source(s_src);filter(f_oms_filter);destination(oms_destination);};
+ ```
+
+1. Restarts the syslog daemon and the Log Analytics agent:
+
+ ```bash
+ service syslog-ng restart
+
+ /opt/microsoft/omsagent/bin/service_control restart [workspaceID]
+ ```
+
+1. Checks that the necessary connections are established: tcp 514 for receiving data, tcp 25226 for internal communication between the syslog daemon and the Log Analytics agent:
+
+ ```bash
+ netstat -an | grep 514
+
+ netstat -an | grep 25226
+ ```
+
+1. Checks that the syslog daemon is receiving data on port 514, and that the agent is receiving data on port 25226:
+
+ ```bash
+ sudo tcpdump -A -ni any port 514 -vv
+
+ sudo tcpdump -A -ni any port 25226 -vv
+ ```
+
+1. Sends MOCK data to port 514 on localhost. This data should be observable in the Azure Sentinel workspace by running the following query:
+
+ ```kusto
+ CommonSecurityLog
+ | where DeviceProduct == "MOCK"
+ ```
++
+## Verify CEF or Syslog prerequisites
+
+Use the following sections to check your CEF or Syslog data connector prerequisites.
+
+### Azure Virtual Machine as a Syslog collector
+
+If you're using an Azure Virtual Machine as a Syslog collector, verify the following:
+
+- While you are setting up your Syslog data connector, make sure to turn off your [Azure Security Center auto-provisioning settings](/azure/security-center/security-center-enable-data-collection) for the Log Analytics (MMA) agent.
+
+ You can turn them back on after your data connector is completely set up.
+
+- Before you deploy the [Common Event Format Data connector python script](connect-cef-agent.md), make sure that your Virtual Machine isn't already connected to an existing Syslog workspace. You can find this information on the Log Analytics Workspace Virtual Machine list, where a VM that's connected to a Syslog workspace is listed as **Connected**.
+
+- Make sure that Azure Sentinel is connected to the correct Syslog workspace, with the **SecurityInsights** solution installed.
+
+ For more information, see [Step 1: Deploy the log forwarder](connect-cef-agent.md).
+
+- Make sure that your machine is sized correctly with at least the minimum required prerequisites. For more information, see [CEF prerequisites](connect-common-event-format.md#prerequisites).
+
+### On-premises or a non-Azure Virtual Machine
+
+If you are using an on-premises machine or a non-Azure virtual machine for your data connector, make sure that you've run the installation script on a fresh installation of a supported Linux operating system:
+
+> [!TIP]
+> You can also find this script from the **Common Event Format** data connector page in Azure Sentinel.
+>
+
+```cli
+sudo wget -O cef_installer.py https://raw.githubusercontent.com/Azure/Azure-Sentinel/master/DataConnectors/CEF/cef_installer.py&&sudo python cef_installer.py <WorkspaceId> <Primary Key>
+```
+
+### Enable your Syslog facility and log severity collection
+
+The Syslog server, either rsyslog or syslog-ng, forwards any data defined in the relevant configuration file, which is automatically populated by the settings defined in your Log Analytics workspace.
+
+Make sure to add details about the facilities and severity log levels that you want to be ingested into Azure Sentinel. The configuration process may take about 20 minutes.
+
+For more information, see [Deployment script explained](connect-cef-agent.md#deployment-script-explained) and [Configure Syslog in the Azure portal](/azure/azure-monitor/agents/data-sources-syslog.md).
++
+**For example, for an rsyslog server**, run the following command to display the current settings for your Syslog forwarding, and review any changes to the configuration file:
+
+```bash
+cat /etc/rsyslog.d/95-omsagent.conf
+```
+
+In this case, for rsyslog, output similar to the following should display. The contents of this file should reflect what's defined in the Syslog Configuration on the **Log Analytics Workspace Client configuration - Syslog facility settings** screen.
++
+```bash
+OMS Syslog collection for workspace c69fa733-da2e-4cf9-8d92-eee3bd23fe81
+auth.=alert;auth.=crit;auth.=debug;auth.=emerg;auth.=err;auth.=info;auth.=notice;auth.=warning @127.0.0.1:25224
+authpriv.=alert;authpriv.=crit;authpriv.=debug;authpriv.=emerg;authpriv.=err;authpriv.=info;authpriv.=notice;authpriv.=warning @127.0.0.1:25224
+cron.=alert;cron.=crit;cron.=debug;cron.=emerg;cron.=err;cron.=info;cron.=notice;cron.=warning @127.0.0.1:25224
+local0.=alert;local0.=crit;local0.=debug;local0.=emerg;local0.=err;local0.=info;local0.=notice;local0.=warning @127.0.0.1:25224
+local4.=alert;local4.=crit;local4.=debug;local4.=emerg;local4.=err;local4.=info;local4.=notice;local4.=warning @127.0.0.1:25224
+syslog.=alert;syslog.=crit;syslog.=debug;syslog.=emerg;syslog.=err;syslog.=info;syslog.=notice;syslog.=warning @127.0.0.1:25224
+```
++
+**For CEF forwarding, for an rsyslog server**, run the following command to display the current settings for your Syslog forwarding, and review any changes to the configuration file:
+
+```bash
+cat /etc/rsyslog.d/security-config-omsagent.conf
+```
+
+In this case, for rsyslog, output similar to the following should display:
+
+```bash
+if $rawmsg contains "CEF:" or $rawmsg contains "ASA-" then @@127.0.0.1:25226
+```
+
+## Troubleshoot operating system issues
+
+This procedure describes how to troubleshoot issues that stem from the operating system configuration.
+
+**To troubleshoot operating system issues**:
+
+1. If you haven't yet, verify that you're working with a supported operating system and Python version. For more information, see [CEF prerequisites](connect-common-event-format.md#prerequisites) and [Configure your Linux machine or appliance](connect-syslog.md#configure-your-linux-machine-or-appliance).
+
+1. If your Virtual Machine is in Azure, verify that the network security group (NSG) allows inbound TCP/UDP connectivity from your log client (Sender) on port 514.
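+
+    For example, a sketch of listing the NSG rules with the Azure CLI; the resource group and NSG names are placeholders:
+
+    ```azurecli
+    az network nsg rule list \
+        --resource-group <resource-group> \
+        --nsg-name <nsg-name> \
+        --output table
+    ```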
+
+1. Verify that packets are arriving to the Syslog Collector. To capture the syslog packets arriving to the Syslog Collector, run:
+
+ ```config
+ tcpdump -Ani any port 514 and host <ip_address_of_sender> -vv
+ ```
+
+1. Do one of the following:
+
+ - If you do not see any packets arriving, confirm the NSG security group permissions and the routing path to the Syslog Collector.
+
+ - If you do see packets arriving, confirm that they are not being rejected.
+
+ If you see rejected packets, confirm that the IP tables are not blocking the connections.
+
+ To confirm that packets are not being rejected, run:
+
+ ```config
+ watch -n 2 -d iptables -nvL
+ ```
+
+1. Verify whether the Syslog server is processing the logs. Run one of the following commands, depending on which log file your distribution uses:
+
+ ```config
+    tail -f /var/log/messages
+    tail -f /var/log/syslog
+ ```
+
+ Any Syslog or CEF logs being processed are displayed in plain text.
+
+1. Confirm that the rsyslog server is listening on TCP/UDP port 514. Run:
+
+ ```config
+ netstat -anp | grep syslog
+ ```
+
+ If you have any CEF or ASA logs being sent to your Syslog Collector, you should see an established connection on TCP port 25226.
+
+ For example:
+
+ ```config
+ 0 127.0.0.1:36120 127.0.0.1:25226 ESTABLISHED 1055/rsyslogd
+ ```
+
+ If the connection is blocked, you may have a [blocked SELinux connection to the OMS agent](#selinux-blocking-connection-to-the-oms-agent), or a [blocked firewall process](#blocked-firewall-policy). Use the following sets of instructions to determine the issue.
+
+### SELinux blocking connection to the OMS agent
+
+This procedure describes how to confirm whether SELinux is currently in a `permissive` state, or is blocking a connection to the OMS agent. This procedure is relevant when your operating system is a distribution from RedHat or CentOS.
+
+> [!NOTE]
+> Azure Sentinel support for CEF and Syslog includes only FIPS hardening. Other hardening methods, such as SELinux or CIS, are not currently supported.
+>
+
+1. Run:
+
+ ```config
+ sestatus
+ ```
+
+ The status is displayed as one of the following:
+
+ - `disabled`. This configuration is supported for your connection to Azure Sentinel.
+ - `permissive`. This configuration is supported for your connection to Azure Sentinel.
+   - `enforcing`. This configuration is not supported; you must either disable SELinux or set it to `permissive`.
+
+1. If the status is currently set to `enforcing`, turn enforcement off temporarily to confirm whether it was the blocker. Run:
+
+ ```config
+ setenforce 0
+ ```
+
+ > [!NOTE]
+ > This step turns off SELinux only until the server reboots. Modify the SELinux configuration to keep it turned off.
+ >
+
+1. To verify whether the change was successful, run:
+
+    ```config
+    getenforce
+    ```
+
+ The `permissive` state should be returned.
+
+> [!IMPORTANT]
+> This setting update is lost when the system is rebooted. To permanently update this setting to `permissive`, modify the **/etc/selinux/config** file, changing the `SELINUX` value to `SELINUX=permissive`.
+>
+> For more information, see [RedHat documentation](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/using_selinux/changing-selinux-states-and-modes_using-selinux).
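+
+For example, after the change, the relevant line in **/etc/selinux/config** looks similar to the following:
+
+```config
+# Set SELinux to permissive mode so that violations are logged but not enforced
+SELINUX=permissive
+```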
++
+### Blocked firewall policy
+
+This procedure describes how to verify whether a firewall policy is blocking the connection from the Rsyslog daemon to the OMS agent, and how to disable it as needed.
++
+1. Run the following command to verify whether there are any rejects in the IP tables, indicating traffic that's being dropped by the firewall policy:
+
+ ```config
+ watch -n 2 -d iptables -nvL
+ ```
+
+1. To keep the firewall policy enabled, create a policy rule to allow the connections. Add rules as needed to allow the TCP/UDP ports 25226 and 25224 through the active firewall.
+
+ For example:
+
+ ```config
+ Every 2.0s: iptables -nvL rsyslog: Wed Jul 7 15:56:13 2021
+
+ Chain INPUT (policy ACCEPT 6185K packets, 2466M bytes)
+ pkts bytes target prot opt in out source destination
++
+ Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
+ pkts bytes target prot opt in out source destination
++
+ Chain OUTPUT (policy ACCEPT 6792K packets, 6348M bytes)
+ pkts bytes target prot opt in out source destination
+ ```
+
+1. To create rules that allow TCP/UDP ports 25226 and 25224 through the active firewall, follow these steps:
+
+ 1. To install the Firewall Policy editor, run:
+
+ ```config
+ yum install policycoreutils-python
+ ```
+
+ 1. Add the firewall rules to the firewall policy. For example:
+
+ ```config
+ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 25226 -j ACCEPT
+ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p udp --dport 25224 -j ACCEPT
+ sudo firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp --dport 25224 -j ACCEPT
+ ```
+
+ 1. Verify that the exception was added. Run:
+
+ ```config
+ sudo firewall-cmd --direct --get-rules ipv4 filter INPUT
+ ```
+
+ 1. Reload the firewall. Run:
+
+ ```config
+ sudo firewall-cmd --reload
+ ```
+
+> [!NOTE]
+> To disable the firewall, run: `sudo systemctl disable firewalld`
+>
+
+## Linux and OMS Agent-related issues
+
+If the steps described earlier in this article do not solve your issue, you may have a connectivity problem between the OMS Agent and the Azure Sentinel workspace.
+
+In such cases, continue troubleshooting by verifying the following:
+
+- Make sure that you can see packets arriving on TCP/UDP port 514 on the Syslog collector
+
+- Make sure that you can see logs being written to the local log file, either **/var/log/messages** or **/var/log/syslog**
+
+- Make sure that you can see data packets flowing on port 25224, 25226, or both
+
+- Make sure that your virtual machine has an outbound connection to port 443 via TCP, or can connect to the [Log Analytics endpoints](/azure/azure-monitor/agents/log-analytics-agent#network-requirements). An example connectivity check follows this list.
+
+- Make sure that you have access to required URLs from your Syslog collector through your firewall policy. For more information, see [Log Analytics agent firewall requirements](/azure/azure-monitor/agents/log-analytics-agent#firewall-requirements).
+
+- Make sure that your Azure Virtual Machine is shown as connected in your workspace's list of virtual machines.
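+
+For example, a sketch of checking outbound connectivity to the Log Analytics ingestion endpoint; the workspace ID is a placeholder:
+
+```bash
+# Verify that TCP port 443 is reachable from the Syslog collector
+nc -vz <workspace-id>.ods.opinsights.azure.com 443
+```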
+
+Run the following query in Log Analytics to determine whether the agent is communicating successfully with Azure, or whether the OMS agent is blocked from connecting to the Log Analytics workspace:
+
+```kusto
+Heartbeat
+ | where Computer contains "<computername>"
+ | sort by TimeGenerated desc
+```
+
+A log entry is returned if the agent is communicating successfully. Otherwise, the OMS agent may be blocked.
+
+## Next steps
+
+If the troubleshooting steps in this article have not helped your issue, open a support ticket or use the Azure Sentinel community resources. For more information, see [Useful resources for working with Azure Sentinel](resources.md).
+
+To learn more about Azure Sentinel, see the following articles:
+
+- Learn about [CEF and CommonSecurityLog field mapping](cef-name-mapping.md).
+- Learn how to [get visibility into your data, and potential threats](get-visibility.md).
+- Get started [detecting threats with Azure Sentinel](./detect-threats-built-in.md).
+- [Use workbooks](monitor-your-data.md) to monitor your data.
service-fabric Service Fabric Startupservices Model https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/service-fabric/service-fabric-startupservices-model.md
Last updated 05/05/2021
# Introducing StartupServices.xml in Service Fabric Application

This feature introduces the StartupServices.xml file in the Service Fabric application design. This file hosts the DefaultServices section of ApplicationManifest.xml. With this implementation, DefaultServices and service definition-related parameters are moved from the existing ApplicationManifest.xml to this new file called StartupServices.xml. This file is used in each functionality (Build/Rebuild/F5/Ctrl+F5/Publish) in Visual Studio.
+Note: StartupServices.xml is meant only for Visual Studio deployments. This arrangement ensures that packages deployed with Visual Studio (with StartupServices.xml) don't conflict with ARM-deployed services. StartupServices.xml is not packaged as part of the application package. It is not supported in DevOps pipelines, and customers should deploy the individual services in the application either via ARM or through cmdlets with the desired configuration.
+ ## Existing Service Fabric Application Design For each service fabric application, ApplicationManifest.xml is the source of all service-related information for the application. ApplicationManifest.xml consists of all Parameters, ServiceManifestImport, and DefaultServices. Configuration parameters are mentioned in Cloud.xml/Local1Node.xml/Local5Node.xml files under ApplicationParameters.
spring-cloud Monitor App Lifecycle Events https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/monitor-app-lifecycle-events.md
+
+ Title: Monitor app lifecycle events using Azure Activity log and Azure Service Health
+description: Monitor app lifecycle events and set up alerts with Azure Activity log and Azure Service Health.
++++ Last updated : 08/19/2021+++
+# Monitor app lifecycle events using Azure Activity log and Azure Service Health
+
+This article shows you how to monitor app lifecycle events and set up alerts with Azure Activity log and Azure Service Health.
+
+Azure Spring Cloud provides built-in tools to monitor the status and health of your applications. App lifecycle events help you understand any changes that were made to your applications so you can take action as necessary.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+- A deployed Azure Spring Cloud service instance and at least one application already created in your service instance. For more information, see [Quickstart: Deploy your first Azure Spring Cloud application](quickstart.md).
+
+## Monitor app lifecycle events triggered by users in Azure Activity logs
+
+[Azure Activity logs](../azure-monitor/essentials/activity-log.md) contain resource events emitted by operations taken on the resources in your subscription. The following details for application lifecycle events (start, stop, restart) are added into Azure Activity Logs:
+
+- When the operation occurred
+- The status of the operation
+- Which instance(s) are created when you start your app
+- Which instance(s) are deleted when you stop your app
+- Which instance(s) are deleted and created when you restart your app
+
+For example, when you restart your app, you can find the affected instances from the **Activity log** detail page in the Azure portal.
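+
+If you prefer the command line, a query like the following lists recent activity log events for the resource group that contains your Azure Spring Cloud instance; the resource group name and time offset are placeholders:
+
+```azurecli
+az monitor activity-log list \
+    --resource-group <resource-group> \
+    --offset 1h \
+    --output table
+```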
++
+## Monitor app lifecycle events in Azure Service Health
+
+[Azure Resource Health](../service-health/resource-health-overview.md) helps you diagnose and get support for issues that may affect the availability of your service. These issues include service incidents, planned maintenance periods, and regional outages. Application restarting events are added into Azure Service Health. They include both unexpected incidents (for example, an unplanned app crash) and scheduled actions (for example, planned maintenance).
+
+### Monitor unplanned app lifecycle events
+
+When your app is restarted because of unplanned events, your Azure Spring Cloud instance will show a status of **degraded** in the **Resource health** section of the Azure portal. Degraded means that your resource detected a loss in performance, although it's still available for use. Examples of unplanned events include app crash, health check failure, and system outage.
++
+You can find the latest status, the root cause, and affected instances in the health history page.
+++
+### Monitor planned app lifecycle events
+
+Your app may be restarted during platform maintenance. You can receive a maintenance notification in advance from the **Planned maintenance** page of Azure Service Health.
++
+When platform maintenance happens, your Azure Spring Cloud instance will also show a status of **degraded**. If restarting is needed during platform maintenance, Azure Spring Cloud will perform a rolling update to incrementally update your applications. Rolling updates are designed to update your workloads without downtime. You can find the latest status in the health history page.
++
+## Set up alerts
+
+You can set up alerts for app lifecycle events. Service health notifications are also stored in the Azure activity log. The activity log stores a large volume of information, so there's a separate user interface to make it easier to view and set up alerts on service health notifications.
+
+The following list describes the key steps needed to set up an alert:
+
+1. Set up an action group with the actions to take when an alert is triggered. Example action types include sending a voice call, SMS, or email, or triggering various types of automated actions. Various alerts may use the same action group or different action groups, depending on your requirements.
+2. Set up alert rules. The alerts use action groups to notify users that an alert for some specific app lifecycle event has been triggered.
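+
+As a sketch of these two steps in the Azure CLI, where all resource names, the email address, and the condition are placeholders to adapt to your own requirements:
+
+```azurecli
+# 1. Create an action group that sends an email notification
+az monitor action-group create \
+    --resource-group <resource-group> \
+    --name lifecycle-alerts \
+    --action email oncall oncall@contoso.com
+
+# 2. Create an activity log alert rule that triggers the action group for service health events
+az monitor activity-log alert create \
+    --resource-group <resource-group> \
+    --name app-lifecycle-alert \
+    --condition category=ServiceHealth \
+    --action-group lifecycle-alerts
+```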
+
+### Set up alerts on Activity log
+
+The following steps show you how to create an activity log alert rule in the Azure portal:
+
+1. Navigate to **Activity log**, open the detail page for any activity log, then select **New alert rule**.
+
+ :::image type="content" source="media/monitor-app-lifecycle-events/activity-log-alert.png" lightbox="media/monitor-app-lifecycle-events/activity-log-alert.png" alt-text="Screenshot of an activity log alert":::
+
+2. Select the **Scope** for the alert.
+
+3. Specify the alert **Condition**.
+
+ :::image type="content" source="media/monitor-app-lifecycle-events/activity-log-alert-condition.png" lightbox="media/monitor-app-lifecycle-events/activity-log-alert-condition.png" alt-text="Screenshot of an activity log alert condition":::
+
+4. Select **Actions** and add **Alert rule details**.
+
+5. Select **Create alert rule**.
+
+### Set up alerts to monitor app lifecycle events in Azure Service Health
+
+The following steps show you how to create an alert rule for service health notifications in the Azure portal:
+
+1. Navigate to **Resource health** under **Service Health**, then select **Add resource health alert**.
+
+ :::image type="content" source="media/monitor-app-lifecycle-events/add-resource-health-alert.png" alt-text="Screenshot of the resource health pane with the 'Add resource health alert' button highlighted":::
+
+2. Select the **Resource** for the alert.
+
+ :::image type="content" source="media/monitor-app-lifecycle-events/resource-health-alert-target.png" alt-text="Screenshot of a resource health alert target":::
+
+3. Specify the **Alert condition**.
+
+ :::image type="content" source="media/monitor-app-lifecycle-events/resource-health-alert-condition.png" alt-text="Screenshot of a resource health alert condition":::
+
+4. Select the **Actions** and add **Alert rule details**.
+
+5. Select **Create alert rule**.
+
+### Set up alerts to monitor the planned maintenance notification
+
+The following steps show you how to create an alert rule for planned maintenance notifications in the Azure portal:
+
+1. Navigate to **Health alerts** under **Service Health**, then select **Add service health alert**.
+
+ :::image type="content" source="media/monitor-app-lifecycle-events/add-service-health-alert.png" alt-text="Screenshot of the health alerts pane with the 'Add service health alert' button highlighted":::
+
+2. Provide values for **Subscription**, **Service(s)**, **Region(s)**, **Event type**, **Actions**, and **Alert rule details**.
+
+ :::image type="content" source="media/monitor-app-lifecycle-events/add-service-health-alert-details.png" lightbox="media/monitor-app-lifecycle-events/add-service-health-alert-details.png" alt-text="Screenshot of the 'Create rule alert' pane for Service Health":::
+
+3. Select **Create alert rule**.
+
+## Next steps
+
+[Self-diagnose and solve problems in Azure Spring Cloud](how-to-self-diagnose-solve.md)
spring-cloud Quickstart Provision Service Instance https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/quickstart-provision-service-instance.md
The following procedure creates an instance of Azure Spring Cloud using the Azur
![ASC icon start](media/spring-cloud-quickstart-launch-app-portal/find-spring-cloud-start.png)
-4. On the Azure Spring Cloud page, select **Add**.
+4. On the Azure Spring Cloud page, select **Create**.
- ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-add.png)
+ ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png)
5. Fill out the form on the Azure Spring Cloud **Create** page. Consider the following guidelines:
spring-cloud Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/spring-cloud/quickstart.md
In Visual Studio, create an ASP.NET Core Web application named as "hello-world"
```xml <ItemGroup>
- <PackageReference Include="Steeltoe.Discovery.ClientCore" Version="3.0.0" />
+ <PackageReference Include="Steeltoe.Discovery.ClientCore" Version="3.1.0" />
<PackageReference Include="Microsoft.Azure.SpringCloud.Client" Version="2.0.0-preview.1" /> </ItemGroup> <Target Name="Publish-Zip" AfterTargets="Publish">
In Visual Studio, create an ASP.NET Core Web application named as "hello-world"
}); ```
-1. In the *Startup.cs* file, add a `using` directive and code that uses the Steeltoe Service Discovery at the end of the `ConfigureServices` and `Configure` methods:
+1. In the *Startup.cs* file, add a `using` directive and code that uses the Steeltoe Service Discovery at the end of the `ConfigureServices` method:
```csharp using Steeltoe.Discovery.Client;
In Visual Studio, create an ASP.NET Core Web application named as "hello-world"
} ```
- ```csharp
- public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
- {
- // Template code not shown.
-
- app.UseDiscoveryClient();
- }
- ```
- 1. Build the project to make sure there are no compile errors. ```dotnetcli
The following procedure creates an instance of Azure Spring Cloud using the Azur
4. On the Azure Spring Cloud page, select **Create**.
- ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-add.png)
+ ![ASC icon add](media/spring-cloud-quickstart-launch-app-portal/spring-cloud-create.png)
5. Fill out the form on the Azure Spring Cloud **Create** page. Consider the following guidelines:
The following procedure builds and deploys the application using the Azure CLI.
az account list -o table ```
-Use the following command to set the default subscription to use with the Azure CLI commands in this quickstart.
+ Use the following command to set the default subscription to use with the Azure CLI commands in this quickstart.
```azurecli az account set --subscription <Name or ID of a subscription from the last step>
-
+ ```
1. Build the project using Maven:
Use the following command to set the default subscription to use with the Azure
mvn clean package -DskipTests ```
-1. (If you haven't already installed it) Install the Azure Spring Cloud extension for the Azure CLI:
-
- ```azurecli
- az extension add --name spring-cloud
- ```
- 1. Create the app with public endpoint assigned. If you selected Java version 11 when generating the Spring Cloud project, include the --runtime-version=Java_11 switch. ```azurecli
Use the following command to set the default subscription to use with the Azure
1. Deploy the Jar file for the app (`target\hellospring-0.0.1-SNAPSHOT.jar` on Windows): ```azurecli
- az spring-cloud app deploy -n hellospring -s <service instance name> -g <resource group name> --jar-path <jar file path>
+ az spring-cloud app deploy -n hellospring -s <service instance name> -g <resource group name> --artifact-path <jar file path>
``` 1. It takes a few minutes to finish deploying the application. To confirm that it has deployed, go to the **Apps** blade in the Azure portal. You should see the status of the application.
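If you prefer the CLI over the portal for that confirmation step, the following hedged sketch shows one way to check the deployed app; the service instance and resource group names are placeholders carried over from the earlier commands.

```azurecli
# Sketch (placeholder names): confirm the app deployed and inspect its state.
az spring-cloud app show \
  --name hellospring \
  --service <service instance name> \
  --resource-group <resource group name> \
  --output table
```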
storage Blob Storage Monitoring Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/blob-storage-monitoring-scenarios.md
Shared Key and SAS authentication provide no means of auditing individual identi
## Optimize cost for infrequent queries
-If you maintain large amounts of log data but plan to query them only occasionally (For example, to meet compliance and security obligations), consider archiving your logs to a storage account instead of using Log Analytics. For a massive number of transactions, [the cost of using Log Analytics](https://azure.microsoft.com/pricing/details/monitor/) might be high relative to just archiving to storage and using other query techniques. Log Analytics makes sense in cases where you want to use the rich capabilities of Log Analytics. You can reduce the cost of querying data by archiving logs to a storage account, and then querying those logs using a serverless query solution on top of log data, for example, Azure Synapse.
+You can export logs to Log Analytics for rich native query capabilities. When you have massive transactions on your storage account, the cost of using logs with Log Analytics might be high. See [Azure Log Analytics Pricing](https://azure.microsoft.com/pricing/details/monitor/). If you only plan to query logs occasionally (for example, to query logs for compliance auditing), you can consider reducing the total cost by exporting logs to a storage account, and then using a serverless query solution on top of log data, for example, Azure Synapse.
With Azure Synapse, you can create a serverless SQL pool to query log data when you need it. This can save costs significantly.
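As a sketch of the archive-to-storage approach described above, the following Azure CLI example enables blob service logs on a storage account and routes them to another storage account for low-cost retention; the account names, resource group, and log categories are placeholders assumed for illustration.

```azurecli
# Sketch (placeholder names): send blob service logs to a storage account instead of
# Log Analytics, for infrequent querying later (for example, with Azure Synapse).
az monitor diagnostic-settings create \
  --name archive-blob-logs \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<source-account>/blobServices/default" \
  --storage-account <archive-account> \
  --logs '[{"category": "StorageRead", "enabled": true},
           {"category": "StorageWrite", "enabled": true},
           {"category": "StorageDelete", "enabled": true}]'
```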
storage Storage Blob Static Website https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/storage/blobs/storage-blob-static-website.md
You can use any of these tools to upload content to the **$web** container:
> * [AzCopy](../common/storage-use-azcopy-v10.md) > * [Azure Storage Explorer](https://azure.microsoft.com/features/storage-explorer/) > * [Azure Pipelines](https://azure.microsoft.com/services/devops/pipelines/)
-> * [Visual Studio Code extension](/azure/developer/javascript/tutorial-vscode-static-website-node-01)
+> * [Visual Studio Code extension](https://channel9.msdn.com/Shows/Docs-Azure/Deploy-static-website-to-Azure-from-Visual-Studio-Code/player)
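For example, with the Azure CLI you can push a local site build to the **$web** container in one command. This is a hedged sketch; the storage account name and source folder are placeholders.

```azurecli
# Sketch (placeholder names): upload a local site build to the $web container.
az storage blob upload-batch \
  --account-name <storage-account-name> \
  --source ./dist \
  --destination '$web' \
  --auth-mode login
```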
## Viewing content
stream-analytics Stream Analytics Introduction https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/stream-analytics/stream-analytics-introduction.md
Title: Introduction to Azure Stream Analytics description: Learn about Azure Stream Analytics, a managed service that helps you analyze streaming data from the Internet of Things (IoT) in real time.--++ Previously updated : 11/12/2020 Last updated : 8/20/2021 #Customer intent: "What is Azure Stream Analytics and why should I care? As an IT Pro or developer, how do I use Stream Analytics to perform analytics on data streams?".
Azure Stream Analytics follows multiple compliance certifications as described i
## Performance
-Stream Analytics can process millions of events every second and it can deliver results with ultra low latencies. It allows you to scale-up and scale-out to handle large real-time and complex event processing applications. Stream Analytics supports higher performance by partitioning, allowing complex queries to be parallelized and executed on multiple streaming nodes. Azure Stream Analytics is built on [Trill](https://github.com/Microsoft/Trill), a high-performance in-memory streaming analytics engine developed in collaboration with Microsoft Research.
+Stream Analytics can process millions of events every second and it can deliver results with ultra low latencies. It allows you to [scale out](stream-analytics-autoscale.md) to adjust to your workloads. Stream Analytics supports higher performance by partitioning, allowing complex queries to be parallelized and executed on multiple streaming nodes. Azure Stream Analytics is built on [Trill](https://github.com/Microsoft/Trill), a high-performance in-memory streaming analytics engine developed in collaboration with Microsoft Research.
## Next steps
synapse-analytics Tutorial Configure Cognitive Services Synapse https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/machine-learning/tutorial-configure-cognitive-services-synapse.md
# Tutorial: Prerequisites for using Cognitive Services in Azure Synapse Analytics
-In this tutorial, you'll learn how set up the prerequisites for securely using Azure Cognitive Services in Azure Synapse Analytics.
+In this tutorial, you'll learn how to set up the prerequisites for securely using Azure Cognitive Services in Azure Synapse Analytics. Linking these Cognitive Services resources lets you use Azure Cognitive Services from various experiences in Synapse.
This tutorial covers: > [!div class="checklist"] > - Create a Cognitive Services resource like Text Analytics or Anomaly Detector. > - Store an authentication key to Cognitive Services resources as secrets in Azure Key Vault, and configure access for an Azure Synapse Analytics workspace. > - Create an Azure Key Vault linked service in your Azure Synapse Analytics workspace.
+> - Create an Azure Cognitive Services linked service in your Azure Synapse Analytics workspace.
If you don't have an Azure subscription, [create a free account before you begin](https://azure.microsoft.com/free/).
You can create an [Anomaly Detector](https://ms.portal.azure.com/#create/Microso
![Screenshot that shows selections for creating a secret.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00d.png) > [!IMPORTANT]
- > Make sure you remember or note down this secret name. You'll use it later when you connect to Cognitive Services from Synapse Studio.
+ > Make sure you remember or note down this secret name. You'll use it later when you create the Azure Cognitive Services linked service.
## Create an Azure Key Vault linked service in Azure Synapse
You can create an [Anomaly Detector](https://ms.portal.azure.com/#create/Microso
![Screenshot that shows Azure Key Vault as a new linked service.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-00e.png) +
+## Create an Azure Cognitive Services linked service in Azure Synapse
+
+1. Open your workspace in Synapse Studio.
+2. Go to **Manage** > **Linked Services**. Create an **Azure Cognitive Services** linked service by pointing to the Cognitive Service that you just created.
+3. Verify the connection by selecting the **Test connection** button. If the connection is green, select **Create** and then select **Publish all** to save your change.
+
+![Screenshot that shows Azure Cognitive Service as a new linked service.](media/tutorial-configure-cognitive-services/tutorial-configure-cognitive-services-linked-service.png)
+ You're now ready to continue with one of the tutorials for using the Azure Cognitive Services experience in Synapse Studio. ## Next steps
synapse-analytics Apache Spark 3 Runtime https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-3-runtime.md
Title: Azure Synapse Runtime for Apache Spark 3.0 (preview)
-description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.0 (preview).
+ Title: Azure Synapse Runtime for Apache Spark 3.1 (preview)
+description: Supported versions of Spark, Scala, Python, and .NET for Apache Spark 3.1 (preview).
Previously updated : 05/26/2021 Last updated : 08/23/2021
-# Azure Synapse Runtime for Apache Spark 3.0 (preview)
+# Azure Synapse Runtime for Apache Spark 3.1 (preview)
-Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.0 (preview). The runtime engine will be periodically updated with the latest features and libraries during the preview period. Check here to see the latest updates to the libraries and their versions.
+Azure Synapse Analytics supports multiple runtimes for Apache Spark. This document will cover the runtime components and versions for the Azure Synapse Runtime for Apache Spark 3.1 (preview). The runtime engine will be periodically updated with the latest features and libraries during the preview period. Check here to see the latest updates to the libraries and their versions.
## Known Issues in Preview * Synapse Pipeline/Dataflows support is coming soon.
-* Library Management to add libraries is coming soon.
-* Connectors : the following connector support are coming soon.
+* Support for the following connectors is coming soon:
* Azure Data Explorer connector * CosmosDB * SQL Server
Azure Synapse Analytics supports multiple runtimes for Apache Spark. This docume
## Component versions | Component | Version | | -- | -- |
-| Apache Spark | 3.0 |
+| Apache Spark | 3.1 |
| Operating System | Ubuntu 18.04 | | Java | 1.8.0_282 | | Scala | 2.12 | | .NET Core | 3.1 |
-| .NET | 1.0.0 |
-| Delta Lake | 0.8 |
-| Python | 3.6 |
+| .NET | 2.0.0 |
+| Delta Lake | 1.0 |
+| Python | 3.8 |
## Scala and Java libraries
synapse-analytics Apache Spark Azure Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/spark/apache-spark-azure-log-analytics.md
Title: Use Azure Log Analytics to collect and visualize metrics and logs (Preview)
-description: Learn how to enable the Synapse built-in Azure Log Analytics connector for collecting and sending the Apache Spark application metrics and logs to your Azure Log Analytics workspace.
+ Title: Use Log Analytics to collect and visualize metrics and logs (preview)
+description: Learn how to enable the Synapse Studio connector for collecting and sending the Apache Spark application metrics and logs to your Log Analytics workspace.
Last updated 03/25/2021
-# Tutorial: Use Azure Log Analytics to collect and visualize metrics and logs (Preview)
+# Tutorial: Use Log Analytics to collect and visualize metrics and logs (preview)
-In this tutorial, you will learn how to enable the Synapse built-in Azure Log Analytics connector for collecting and sending the Apache Spark application metrics and logs to your [Azure Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). You can then leverage an Azure monitor workbook to visualize the metrics and logs.
+In this tutorial, you learn how to enable the Synapse Studio connector that's built in to Log Analytics. You can then collect and send Apache Spark application metrics and logs to your [Log Analytics workspace](../../azure-monitor/logs/quick-create-workspace.md). Finally, you can use an Azure Monitor workbook to visualize the metrics and logs.
-## Configure Azure Log Analytics Workspace information in Synapse Studio
+## Configure workspace information
-### Step 1: Create an Azure Log Analytics workspace
+Follow these steps to configure the necessary information in Synapse Studio.
-You can follow below documents to create a Log Analytics workspace:
-- [Create a Log Analytics workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md)-- [Create a Log Analytics workspace with Azure CLI](../../azure-monitor/logs/quick-create-workspace-cli.md)-- [Create and configure a Log Analytics workspace in Azure Monitor using PowerShell](../../azure-monitor/logs/powershell-workspace-configuration.md)
+### Step 1: Create a Log Analytics workspace
+
+Consult one of the following resources to create this workspace:
+- [Create a workspace in the Azure portal](../../azure-monitor/logs/quick-create-workspace.md)
+- [Create a workspace with Azure CLI](../../azure-monitor/logs/quick-create-workspace-cli.md)
+- [Create and configure a workspace in Azure Monitor by using PowerShell](../../azure-monitor/logs/powershell-workspace-configuration.md)
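If you want to script this step instead of following one of the links above, the following is a minimal Azure CLI sketch; the resource group, workspace name, and region are placeholder values.

```azurecli
# Sketch (placeholder names): create a Log Analytics workspace for the Spark logs and metrics.
az monitor log-analytics workspace create \
  --resource-group myResourceGroup \
  --workspace-name mySparkLogsWorkspace \
  --location eastus
```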
### Step 2: Prepare a Spark configuration file
-#### Option 1. Configure with Azure Log Analytics Workspace ID and Key
+Use any of the following options to prepare the file.
+
+#### Option 1: Configure with Log Analytics workspace ID and key
-Copy the following Spark configuration, save it as **"spark_loganalytics_conf.txt"** and fill the parameters:
+Copy the following Spark configuration, save it as *spark_loganalytics_conf.txt*, and fill in the following parameters:
- - `<LOG_ANALYTICS_WORKSPACE_ID>`: Azure Log Analytics workspace ID.
- - `<LOG_ANALYTICS_WORKSPACE_KEY>`: Azure Log Analytics key: **Azure portal > Azure Log Analytics workspace > Agents management > Primary key**
+ - `<LOG_ANALYTICS_WORKSPACE_ID>`: Log Analytics workspace ID.
+ - `<LOG_ANALYTICS_WORKSPACE_KEY>`: Log Analytics key. To find this, in the Azure portal, go to **Azure Log Analytics workspace** > **Agents management** > **Primary key**.
```properties spark.synapse.logAnalytics.enabled true
spark.synapse.logAnalytics.workspaceId <LOG_ANALYTICS_WORKSPACE_ID>
spark.synapse.logAnalytics.secret <LOG_ANALYTICS_WORKSPACE_KEY> ```
-#### Option 2. Configure with an Azure Key Vault
+#### Option 2: Configure with Azure Key Vault
> [!NOTE]
->
-> You need to grant read secret permission to the users who will submit Spark applications. Please see [provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../../key-vault/general/rbac-guide.md)
+> You need to grant read secret permission to the users who will submit Spark applications. For more information, see [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../../key-vault/general/rbac-guide.md).
-To configure an Azure Key Vault to store the workspace key, follow the steps:
+To configure Azure Key Vault to store the workspace key, follow these steps:
-1. Create and navigate to your key vault in the Azure portal
-2. On the Key Vault settings pages, select **Secrets**.
-3. Click on **Generate/Import**.
-4. On the **Create a secret** screen choose the following values:
- - **Name**: Type a name for the secret, type `"SparkLogAnalyticsSecret"` as default.
- - **Value**: Type the **<LOG_ANALYTICS_WORKSPACE_KEY>** for the secret.
- - Leave the other values to their defaults. Click **Create**.
-5. Copy the following Spark configuration, save it as **"spark_loganalytics_conf.txt"** and fill the parameters:
+1. Create and go to your key vault in the Azure portal.
+2. On the settings page for the key vault, select **Secrets**.
+3. Select **Generate/Import**.
+4. On the **Create a secret** screen, choose the following values:
+ - **Name**: Enter a name for the secret. For the default, enter `SparkLogAnalyticsSecret`.
+ - **Value**: Enter the `<LOG_ANALYTICS_WORKSPACE_KEY>` for the secret.
+ - Leave the other values at their defaults. Then select **Create**.
+5. Copy the following Spark configuration, save it as *spark_loganalytics_conf.txt*, and fill in the following parameters:
- - `<LOG_ANALYTICS_WORKSPACE_ID>`: Azure Log Analytics workspace ID.
- - `<AZURE_KEY_VAULT_NAME>`: The Azure Key Vault name you configured.
- - `<AZURE_KEY_VAULT_SECRET_KEY_NAME>` (Optional): The secret name in the Azure Key Vault for workspace key, default: "SparkLogAnalyticsSecret".
+ - `<LOG_ANALYTICS_WORKSPACE_ID>`: The Log Analytics workspace ID.
+ - `<AZURE_KEY_VAULT_NAME>`: The key vault name that you configured.
+ - `<AZURE_KEY_VAULT_SECRET_KEY_NAME>` (optional): The secret name in the key vault for the workspace key. The default is `SparkLogAnalyticsSecret`.
```properties spark.synapse.logAnalytics.enabled true
spark.synapse.logAnalytics.keyVault.key.secret <AZURE_KEY_VAULT_SECRET_KEY_NAME>
``` > [!NOTE]
->
-> You can also store the Log Analytics workspace id to Azure Key vault. Please refer to the above steps and store the workspace id with secret name `"SparkLogAnalyticsWorkspaceId"`. Or use the config `spark.synapse.logAnalytics.keyVault.key.workspaceId` to specify the workspace id secret name in Azure Key vault.
+> You can also store the workspace ID in Key Vault. Refer to the preceding steps, and store the workspace ID with the secret name `SparkLogAnalyticsWorkspaceId`. Alternatively, you can use the configuration `spark.synapse.logAnalytics.keyVault.key.workspaceId` to specify the workspace ID secret name in Key Vault.
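The portal steps in this option can also be done from the Azure CLI. The following hedged sketch stores the workspace key, and optionally the workspace ID, as Key Vault secrets under the default secret names expected by the configuration above; the vault name is a placeholder.

```azurecli
# Sketch (placeholder vault name): store the Log Analytics workspace key, and optionally
# the workspace ID, as Key Vault secrets with the default names used by the Spark configuration.
az keyvault secret set \
  --vault-name <AZURE_KEY_VAULT_NAME> \
  --name SparkLogAnalyticsSecret \
  --value "<LOG_ANALYTICS_WORKSPACE_KEY>"

az keyvault secret set \
  --vault-name <AZURE_KEY_VAULT_NAME> \
  --name SparkLogAnalyticsWorkspaceId \
  --value "<LOG_ANALYTICS_WORKSPACE_ID>"
```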
-#### Option 3. Configure with an Azure Key Vault linked service
+#### Option 3. Configure with a linked service
> [!NOTE]
->
-> You need to grant read secret permission to the Synapse workspace. Please see [provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../../key-vault/general/rbac-guide.md)
+> You need to grant read secret permission to the users who will submit Spark applications. For more information, see [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../../key-vault/general/rbac-guide.md).
-To configure an Azure Key Vault linked service in Synapse Studio to store the workspace key, follow the steps:
+To configure a Key Vault linked service in Synapse Studio to store the workspace key, follow these steps:
-1. Follow all the steps in the `Option 2. Configure with an Azure Key Vault` section.
-2. Create an Azure Key vault linked service in Synapse Studio:
+1. Follow all the steps in the preceding section, "Option 2."
+2. Create a Key Vault linked service in Synapse Studio:
- a. Navigate to **Synapse Studio > Manage > Linked services**, click **New** button.
+ a. Go to **Synapse Studio** > **Manage** > **Linked services**, and then select **New**.
- b. Search **Azure Key Vault** in the search box.
+ b. In the search box, search for **Azure Key Vault**.
- c. Type a name for the linked service.
+ c. Enter a name for the linked service.
- d. Choose your Azure key vault. Click **Create**.
+ d. Choose your key vault, and select **Create**.
-3. Add a `spark.synapse.logAnalytics.keyVault.linkedServiceName` item to Spark configuration.
+3. Add a `spark.synapse.logAnalytics.keyVault.linkedServiceName` item to the Spark configuration.
```properties spark.synapse.logAnalytics.enabled true
spark.synapse.logAnalytics.keyVault.key.secret <AZURE_KEY_VAULT_SECRET_KEY_NAME>
spark.synapse.logAnalytics.keyVault.linkedServiceName <LINKED_SERVICE_NAME> ```
-#### Available Spark Configuration
+#### Available Spark configuration
-| Configuration Name | Default Value | Description |
+| Configuration name | Default value | Description |
| | - | - |
-| spark.synapse.logAnalytics.enabled | false | To enable the Azure Log Analytics sink for the Spark applications, true. Otherwise, false. |
-| spark.synapse.logAnalytics.workspaceId | - | The destination Azure Log Analytics workspace ID |
-| spark.synapse.logAnalytics.secret | - | The destination Azure Log Analytics workspace secret. |
-| spark.synapse.logAnalytics.keyVault.linkedServiceName | - | Azure Key vault linked service name for the Azure Log Analytics workspace ID and key |
-| spark.synapse.logAnalytics.keyVault.name | - | Azure Key vault name for the Azure Log Analytics ID and key |
-| spark.synapse.logAnalytics.keyVault.key.workspaceId | SparkLogAnalyticsWorkspaceId | Azure Key vault secret name for the Azure Log Analytics workspace ID |
-| spark.synapse.logAnalytics.keyVault.key.secret | SparkLogAnalyticsSecret | Azure Key vault secret name for the Azure Log Analytics workspace key |
-| spark.synapse.logAnalytics.uriSuffix | ods.opinsights.azure.com | The destination Azure Log Analytics workspace [URI suffix][uri_suffix]. If your Azure Log Analytics Workspace is not in Azure global, you need to update the URI suffix according to the respective cloud. |
+| spark.synapse.logAnalytics.enabled | false | To enable the Log Analytics sink for the Spark applications, true. Otherwise, false. |
+| spark.synapse.logAnalytics.workspaceId | - | The destination Log Analytics workspace ID. |
+| spark.synapse.logAnalytics.secret | - | The destination Log Analytics workspace secret. |
+| spark.synapse.logAnalytics.keyVault.linkedServiceName | - | The Key Vault linked service name for the Log Analytics workspace ID and key. |
+| spark.synapse.logAnalytics.keyVault.name | - | The Key Vault name for the Log Analytics ID and key. |
+| spark.synapse.logAnalytics.keyVault.key.workspaceId | SparkLogAnalyticsWorkspaceId | The Key Vault secret name for the Log Analytics workspace ID. |
+| spark.synapse.logAnalytics.keyVault.key.secret | SparkLogAnalyticsSecret | The Key Vault secret name for the Log Analytics workspace key. |
+| spark.synapse.logAnalytics.uriSuffix | ods.opinsights.azure.com | The destination Log Analytics workspace [URI suffix][uri_suffix]. If your workspace isn't in Azure global, you need to update the URI suffix according to the respective cloud. |
> [!NOTE]
-> - For Azure China clouds, the "spark.synapse.logAnalytics.uriSuffix" parameter should be "ods.opinsights.azure.cn".
-> - For Azure Gov clouds, the "spark.synapse.logAnalytics.uriSuffix" parameter should be "ods.opinsights.azure.us".
+> - For Azure China, the `spark.synapse.logAnalytics.uriSuffix` parameter should be `ods.opinsights.azure.cn`.
+> - For Azure Government, the `spark.synapse.logAnalytics.uriSuffix` parameter should be `ods.opinsights.azure.us`.
[uri_suffix]: ../../azure-monitor/logs/data-collector-api.md#request-uri ### Step 3: Upload your Spark configuration to a Spark pool
-You can upload the configuration file to your Synapse Spark pool in Synapse Studio.
+You can upload the configuration file to your Azure Synapse Analytics Spark pool. In Synapse Studio:
- 1. Navigate to your Apache Spark pool in the Azure Synapse Studio (Manage -> Apache Spark pools)
- 2. Click the **"..."** button on the right of your Apache Spark pool
- 3. Select Apache Spark configuration
- 4. Click **Upload** and choose the **"spark_loganalytics_conf.txt"** created.
- 5. Click **Upload** and **Apply**.
+ 1. Select **Manage** > **Apache Spark pools**.
+ 2. Next to your Apache Spark pool, select the **...** button.
+ 3. Select **Apache Spark configuration**.
+ 4. Select **Upload**, and choose the *spark_loganalytics_conf.txt* file.
+ 5. Select **Upload**, and then select **Apply**.
> [!div class="mx-imgBorder"]
- > ![spark pool configuration](./media/apache-spark-azure-log-analytics/spark-pool-configuration.png)
+ > ![Screenshot that shows the Spark pool configuration.](./media/apache-spark-azure-log-analytics/spark-pool-configuration.png)
> [!NOTE] >
-> All the Spark application submitted to the Spark pool above will use the configuration setting to push the Spark application metrics and logs to your specified Azure Log Analytics workspace.
+> All the Spark applications submitted to the Spark pool will use the configuration setting to push the Spark application metrics and logs to your specified workspace.
-## Submit a Spark application and view the logs and metrics in Azure Log Analytics
+## Submit a Spark application and view the logs and metrics
- 1. You can submit a Spark application to the Spark pool configured in the previous step, using one of the following ways:
- - Run a Synapse Studio notebook.
- - Submit a Synapse Apache Spark batch job through Spark job definition.
- - Run a Pipeline that contains Spark activity.
+Here's how:
- 2. Go to the specified Azure Log Analytics Workspace, then view the application metrics and logs when the Spark application starts to run.
+1. Submit a Spark application to the Spark pool configured in the previous step. You can use any of the following ways to do so:
+ - Run a notebook in Synapse Studio.
+ - In Synapse Studio, submit an Apache Spark batch job through a Spark job definition.
+ - Run a pipeline that contains Spark activity.
-## Use the Sample Azure Log Analytics Workbook to visualize the metrics and logs
+1. Go to the specified Log Analytics workspace, and then view the application metrics and logs when the Spark application starts to run.
-1. [Download the workbook](https://aka.ms/SynapseSparkLogAnalyticsWorkbook) here.
-2. Open and **Copy** the workbook file content.
-3. Navigate to Azure Log Analytics workbook ([Azure portal](https://portal.azure.com/) > Log Analytics workspace > Workbooks)
-4. Open the **"Empty"** Azure Log Analytics Workbook, in **"Advanced Editor"** mode (press the </> icon).
-5. **Paste** over any json that exists.
-6. Then Press **Apply** then **Done Editing**.
+## Use the sample workbook to visualize the metrics and logs
+
+1. [Download the workbook](https://aka.ms/SynapseSparkLogAnalyticsWorkbook).
+2. Open and copy the workbook file content.
+3. In the [Azure portal](https://portal.azure.com/), select **Log Analytics workspace** > **Workbooks**.
+4. Open the **Empty** workbook. Use the **Advanced Editor** mode by selecting the **</>** icon.
+5. Paste over any JSON code that exists.
+6. Select **Apply**, and then select **Done Editing**.
> [!div class="mx-imgBorder"]
- > ![new workbook](./media/apache-spark-azure-log-analytics/new-workbook.png)
+ > ![Screenshot that shows a new workbook.](./media/apache-spark-azure-log-analytics/new-workbook.png)
> [!div class="mx-imgBorder"]
- > ![import workbook](./media/apache-spark-azure-log-analytics/import-workbook.png)
+ > ![Screenshot that shows how to import a workbook.](./media/apache-spark-azure-log-analytics/import-workbook.png)
-Then, submit your Apache Spark application to the configured Spark pool. After the application goes to running state, choose the running application in the workbook dropdown list.
+Then, submit your Apache Spark application to the configured Spark pool. After the application goes to a running state, choose the running application in the workbook dropdown list.
> [!div class="mx-imgBorder"]
-> ![workbook imange](./media/apache-spark-azure-log-analytics/workbook.png)
+> ![Screenshot that shows a workbook.](./media/apache-spark-azure-log-analytics/workbook.png)
-And you can customize the workbook by Kusto query and configure alerts.
+You can customize the workbook. For example, you can use Kusto queries and configure alerts.
> [!div class="mx-imgBorder"]
-> ![kusto query and alerts](./media/apache-spark-azure-log-analytics/kusto-query-and-alerts.png)
+> ![Screenshot that shows customizing a workbook with a query and alerts.](./media/apache-spark-azure-log-analytics/kusto-query-and-alerts.png)
## Sample Kusto queries
-1. Query Spark events example.
+The following is an example of querying Spark events:
- ```kusto
- SparkListenerEvent_CL
- | where workspaceName_s == "{SynapseWorkspace}" and clusterName_s == "{SparkPool}" and livyId_s == "{LivyId}"
- | order by TimeGenerated desc
- | limit 100
- ```
+```kusto
+SparkListenerEvent_CL
+| where workspaceName_s == "{SynapseWorkspace}" and clusterName_s == "{SparkPool}" and livyId_s == "{LivyId}"
+| order by TimeGenerated desc
+| limit 100
+```
-2. Query Spark application driver and executors logs example.
+Here's an example of querying the Spark application driver and executors logs:
- ```kusto
- SparkLoggingEvent_CL
- | where workspaceName_s == "{SynapseWorkspace}" and clusterName_s == "{SparkPool}" and livyId_s == "{LivyId}"
- | order by TimeGenerated desc
- | limit 100
- ```
+```kusto
+SparkLoggingEvent_CL
+| where workspaceName_s == "{SynapseWorkspace}" and clusterName_s == "{SparkPool}" and livyId_s == "{LivyId}"
+| order by TimeGenerated desc
+| limit 100
+```
-3. Query Spark metrics example.
+And here's an example of querying Spark metrics:
- ```kusto
- SparkMetrics_CL
- | where workspaceName_s == "{SynapseWorkspace}" and clusterName_s == "{SparkPool}" and livyId_s == "{LivyId}"
- | where name_s endswith "jvm.total.used"
- | summarize max(value_d) by bin(TimeGenerated, 30s), executorId_s
- | order by TimeGenerated asc
- ```
+```kusto
+SparkMetrics_CL
+| where workspaceName_s == "{SynapseWorkspace}" and clusterName_s == "{SparkPool}" and livyId_s == "{LivyId}"
+| where name_s endswith "jvm.total.used"
+| summarize max(value_d) by bin(TimeGenerated, 30s), executorId_s
+| order by TimeGenerated asc
+```
## Write custom application logs
logger.warn("warn message")
logger.error("error message") ```
-## Create and manage alerts using Azure Log Analytics
-
-Azure Monitor alerts allow users to use a Log Analytics query to evaluate metrics and logs every set frequency, and fire an alert based on the results.
+## Create and manage alerts
-For more information, see [Create, view, and manage log alerts using Azure Monitor](../../azure-monitor/alerts/alerts-log.md).
+Users can run queries to evaluate metrics and logs at a set frequency and fire an alert based on the results. For more information, see [Create, view, and manage log alerts by using Azure Monitor](../../azure-monitor/alerts/alerts-log.md).
## Limitation
-Azure Synapse Analytics workspace with [managed virtual network](../security/synapse-workspace-managed-vnet.md) enabled is not supported.
+Azure Synapse Analytics workspace with [managed virtual network](../security/synapse-workspace-managed-vnet.md) enabled isn't supported.
## Next steps
+ - [Use serverless Apache Spark pool in Synapse Studio](../quickstart-create-apache-spark-pool-studio.md).
 + - [Run a Spark application in a notebook](./apache-spark-development-using-notebooks.md).
 + - [Create an Apache Spark job definition in Synapse Studio](./apache-spark-job-definitions.md).
synapse-analytics Develop Tables Data Types https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/synapse-analytics/sql/develop-tables-data-types.md
FROM sys.tables t
JOIN sys.columns c on t.[object_id] = c.[object_id] JOIN sys.types y on c.[user_type_id] = y.[user_type_id] WHERE y.[name] IN ('geography','geometry','hierarchyid','image','text','ntext','sql_variant','xml')
- AND y.[is_user_defined] = 1;
+ OR y.[is_user_defined] = 1;
``` ## <a name="unsupported-data-types"></a>Workarounds for unsupported data types
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-desktop/multimedia-redirection.md
to do these things:
1. [Install the Windows Desktop client](./user-documentation/connect-windows-7-10.md#install-the-windows-desktop-client) on a Windows 10 or Windows 10 IoT Enterprise device that meets the [hardware requirements for Teams on a Windows PC](/microsoftteams/hardware-requirements-for-the-teams-app#hardware-requirements-for-teams-on-a-windows-pc/). Installing version 1.2.2222 or later of the client will also install the multimedia redirection plugin (MsMmrDVCPlugin.dll) on the client device. To learn more about updates and new versions, see [What's new in the Windows Desktop client](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew).
-2. [Configure the client machine for the insider group](create-host-pools-azure-marketplace.md).
+2. [Create a host pool for your users](create-host-pools-azure-marketplace.md).
-3. Install [the Multimedia Redirector service](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWIzIk) and any required browser extensions on the virtual machine (VM).
-
-4. Configure the client machine to let your users access the Insiders program. To configure the client for the Insider group, set the following registry information:
+3. Configure the client machine to let your users access the Insiders program. To configure the client for the Insider group, set the following registry information:
- **Key**: HKLM\\Software\\Microsoft\\MSRDC\\Policies - **Type**: REG_SZ
to do these things:
To learn more about the Insiders program, see [Windows Desktop client for admins](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-admin#configure-user-groups).
-5. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWIzIk) to install the multimedia redirection extensions for your internet browser on your Azure VM. Multimedia redirection for Azure Virtual Desktop currently only supports Microsoft Edge and Google Chrome.
+4. Use [the MSI installer (MsMmrHostMri)](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RWIzIk) to install the multimedia redirection extensions for your internet browser on your Azure VM. Multimedia redirection for Azure Virtual Desktop currently only supports Microsoft Edge and Google Chrome.
## Managing group policies for the multimedia redirection browser extension
In some cases, you can change the group policy to manage the browser extensions
### Configure Microsoft Edge group policies for multimedia redirection
-To configure the group policies, you'll need to edit the Microsoft Ege Administrative Template. You should see the extension configuration options under **Administrative Templates Microsoft Edge Extensions** > **Configure extension management settings**.
+To configure the group policies, you'll need to edit the Microsoft Edge Administrative Template. You should see the extension configuration options under **Administrative Templates Microsoft Edge Extensions** > **Configure extension management settings**.
The following code is an example of a Microsoft Edge group policy that makes the browser install the multimedia redirection extension and only lets multimedia redirection load on YouTube:
virtual-machine-scale-sets Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/azure-hybrid-benefit-linux.md
# Azure Hybrid Benefit for Linux virtual machine scale set
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Uniform scale sets
+ **Azure Hybrid Benefit for Linux virtual machine scale sets is now generally available.** This benefit can help you reduce the cost of running your RHEL and SLES [virtual machine scale sets](./overview.md). With this benefit, you pay for only the infrastructure cost of your scale set. The benefit is available for all RHEL and SLES Marketplace pay-as-you-go (PAYG) images.
To start using the benefit for SUSE:
1. Register your VMs that are receiving the benefit with a separate source of updates.
-## Enable and disable the benefit on Azure Portal
+## Enable and disable the benefit on Azure portal
### Azure portal example to enable the benefit during creation: 1. Visit [Microsoft Azure portal](https://portal.azure.com/) 1. Go to 'Create a Virtual Machine scale set' page on the portal.
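In addition to the portal flow, the benefit can typically be enabled on an existing scale set from the Azure CLI. The following is a hedged sketch; the resource group and scale set names are placeholders, and `RHEL_BYOS` / `SLES_BYOS` are the license types for Red Hat and SUSE respectively.

```azurecli
# Sketch (placeholder names): enable Azure Hybrid Benefit on an existing scale set
# by switching its license type (use RHEL_BYOS for Red Hat, SLES_BYOS for SUSE).
az vmss update \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --license-type SLES_BYOS
```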
virtual-machine-scale-sets Disk Encryption Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md
# Encrypt virtual machine scale sets with Azure Resource Manager
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ You can encrypt or decrypt Linux virtual machine scale sets using Azure Resource Manager templates. ## Deploying templates
virtual-machine-scale-sets Disk Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/disk-encryption-cli.md
# Encrypt OS and attached data disks in a virtual machine scale set with the Azure CLI
+**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to create and encrypt a virtual machine scale set. For more information on applying Azure Disk encryption to a virtual machine scale set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md). [!INCLUDE [azure-cli-prepare-your-environment.md](../../includes/azure-cli-prepare-your-environment.md)]
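At its core, the quickstart enables encryption on an existing scale set with a single command similar to the hedged sketch below; the scale set, resource group, and key vault names are placeholders, and the key vault must be enabled for disk encryption.

```azurecli
# Sketch (placeholder names): enable Azure Disk Encryption on a scale set,
# using a key vault that has been enabled for disk encryption.
az vmss encryption enable \
  --resource-group myResourceGroup \
  --name myScaleSet \
  --disk-encryption-keyvault myKeyVault
```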
virtual-machine-scale-sets Disk Encryption Extension Sequencing https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/disk-encryption-extension-sequencing.md
# Use Azure Disk Encryption with virtual machine scale set extension sequencing
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ Extensions such as Azure disk encryption can be added to an Azure virtual machines scale set in a specified order. To do so, use [extension sequencing](virtual-machine-scale-sets-extension-sequencing.md). In general, encryption should be applied to a disk:
In either case, the `provisionAfterExtensions` property designates which extensi
If you wish to have Azure Disk Encryption applied after another extension, put the `provisionAfterExtensions` property in the AzureDiskEncryption extension block.
-Here is an example using "CustomScriptExtension", a Powershell script that initializes and formats a Windows disk, followed by "AzureDiskEncryption":
+Here is an example using "CustomScriptExtension", a PowerShell script that initializes and formats a Windows disk, followed by "AzureDiskEncryption":
```json "virtualMachineProfile": {
virtual-machine-scale-sets Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/disk-encryption-key-vault.md
- Title: Creating and configuring a key vault for Azure Disk Encryption
-description: This article provides steps for creating and configuring a key vault for use with Azure Disk Encryption
----- Previously updated : 10/10/2019-----
-# Create and configure a key vault for Azure Disk Encryption
-
-Azure Disk Encryption uses Azure Key Vault to control and manage disk encryption keys and secrets. For more information about key vaults, see [Get started with Azure Key Vault](../key-vault/general/overview.md) and [Azure Key Vault security features](../key-vault/general/security-features.md).
-
-Creating and configuring a key vault for use with Azure Disk Encryption involves three steps:
-
-1. Creating a resource group, if needed.
-2. Creating a key vault.
-3. Setting key vault advanced access policies.
-
-These steps are illustrated in the following quickstarts:
-
-You may also, if you wish, generate or import a key encryption key (KEK).
-
-## Install tools and connect to Azure
-
-The steps in this article can be completed with the [Azure CLI](/cli/azure/), the [Azure PowerShell Az module](/powershell/azure/), or the [Azure portal](https://portal.azure.com).
-
-### Connect to your Azure account
-
-Before using the Azure CLI or Azure PowerShell, you must first connect to your Azure subscription. You do so by [Signing in with Azure CLI](/cli/azure/authenticate-azure-cli), [Signing in with Azure PowerShell](/powershell/azure/authenticate-azureps), or supplying your credentials to the Azure portal when prompted.
-
-```azurecli-interactive
-az login
-```
-
-```azurepowershell-interactive
-Connect-AzAccount
-```
-
-
-## Next steps
--- [Azure Disk Encryption overview](disk-encryption-overview.md)-- [Encrypt a virtual machine scale sets using the Azure CLI](disk-encryption-cli.md)-- [Encrypt a virtual machine scale sets using the Azure PowerShell](disk-encryption-powershell.md)+
+ Title: Creating and configuring a key vault for Azure Disk Encryption
+description: This article provides steps for creating and configuring a key vault for use with Azure Disk Encryption
+++++ Last updated : 10/10/2019+++++
+# Create and configure a key vault for Azure Disk Encryption
+
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+
+Azure Disk Encryption uses Azure Key Vault to control and manage disk encryption keys and secrets. For more information about key vaults, see [Get started with Azure Key Vault](../key-vault/general/overview.md) and [Secure your key vault](../key-vault/general/secure-your-key-vault.md).
+
+Creating and configuring a key vault for use with Azure Disk Encryption involves three steps:
+
+1. Creating a resource group, if needed.
+2. Creating a key vault.
+3. Setting key vault advanced access policies.
+
+These steps are illustrated in the following quickstarts:
+
+You may also, if you wish, generate or import a key encryption key (KEK).
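As a hedged Azure CLI sketch of the first two steps above, with placeholder names and region, you can create the resource group and a key vault that allows disk encryption, and optionally generate a KEK:

```azurecli
# Sketch (placeholder names): create a resource group and a key vault that
# Azure Disk Encryption can use; --enabled-for-disk-encryption sets the access policy
# that lets the platform retrieve secrets from the vault.
az group create --name myResourceGroup --location eastus

az keyvault create \
  --name myDiskEncryptionVault \
  --resource-group myResourceGroup \
  --location eastus \
  --enabled-for-disk-encryption

# Optional: generate a key encryption key (KEK) in the vault.
az keyvault key create --vault-name myDiskEncryptionVault --name myKEK --kty RSA
```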
+
+## Install tools and connect to Azure
+
+The steps in this article can be completed with the [Azure CLI](/cli/azure/), the [Azure PowerShell Az module](/powershell/azure/), or the [Azure portal](https://portal.azure.com).
+
+### Connect to your Azure account
+
+Before using the Azure CLI or Azure PowerShell, you must first connect to your Azure subscription. You do so by [Signing in with Azure CLI](/cli/azure/authenticate-azure-cli), [Signing in with Azure PowerShell](/powershell/azure/authenticate-azureps), or supplying your credentials to the Azure portal when prompted.
+
+```azurecli-interactive
+az login
+```
+
+```azurepowershell-interactive
+Connect-AzAccount
+```
+
+
+## Next steps
+
+- [Azure Disk Encryption overview](disk-encryption-overview.md)
+- [Encrypt virtual machine scale sets using the Azure CLI](disk-encryption-cli.md)
+- [Encrypt virtual machine scale sets using Azure PowerShell](disk-encryption-powershell.md)
+
virtual-machine-scale-sets Disk Encryption Overview https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/disk-encryption-overview.md
# Azure Disk Encryption for Virtual Machine Scale Sets
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ Azure Disk Encryption provides volume encryption for the OS and data disks of your virtual machines, helping protect and safeguard your data to meet organizational security and compliance commitments. To learn more, see [Azure Disk Encryption: Linux VMs](../virtual-machines/linux/disk-encryption-overview.md) and [Azure Disk Encryption: Windows VMs](../virtual-machines/windows/disk-encryption-overview.md) Azure Disk Encryption can also be applied to Windows and Linux virtual machine scale sets, in these instances:
virtual-machine-scale-sets Disk Encryption Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/disk-encryption-powershell.md
# Encrypt OS and attached data disks in a virtual machine scale set with Azure PowerShell
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ The Azure PowerShell module is used to create and manage Azure resources from the PowerShell command line or in scripts. This article shows you how to use Azure PowerShell to create and encrypt a virtual machine scale set. For more information on applying Azure Disk Encryption to a virtual machine scale set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md). [!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
virtual-machine-scale-sets Instance Generalized Image Version Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/instance-generalized-image-version-cli.md
# Create a scale set from a generalized image with Azure CLI
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ Create a scale set from a generalized image version stored in a [Shared Image Gallery](../virtual-machines/shared-image-galleries.md) using the Azure CLI. If want to create a scale set using a specialized image version, see [Create scale set instances from a specialized image](instance-specialized-image-version-cli.md). If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.4.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
virtual-machine-scale-sets Instance Generalized Image Version Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/instance-generalized-image-version-powershell.md
# Create a scale set from a generalized image using PowerShell
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ Create a VM from a generalized image version stored in a [Shared Image Gallery](../virtual-machines/shared-image-galleries.md). If want to create a scale set using a specialized image, see [Create scale set instances from a specialized image](instance-specialized-image-version-powershell.md). Once you have a generalized image, you can create a virtual machine scale set using the [New-AzVmss](/powershell/module/az.compute/new-azvmss) cmdlet.
virtual-machine-scale-sets Instance Specialized Image Version Cli https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/instance-specialized-image-version-cli.md
# Create a scale set using a specialized image version with the Azure CLI
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ Create a scale set from a [specialized image version](../virtual-machines/shared-image-galleries.md#generalized-and-specialized-images) stored in a Shared Image Gallery. If you want to create a scale set using a generalized image version, see [Create a scale set from a generalized image](instance-generalized-image-version-cli.md). If you choose to install and use the CLI locally, this tutorial requires that you are running the Azure CLI version 2.4.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI]( /cli/azure/install-azure-cli).
virtual-machine-scale-sets Instance Specialized Image Version Powershell https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/instance-specialized-image-version-powershell.md
# Create a scale set from a specialized image using PowerShell
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale sets
+ Create a VM from a specialized image version stored in a [Shared Image Gallery](../virtual-machines/shared-image-galleries.md) using Azure PowerShell. If want to create a scale set using a generalized image version, see [Create scale set instances from a generalized image version](instance-generalized-image-version-powershell.md). Once you have a specialized image in your gallery, you can create a virtual machine scale set using the [New-AzVmss](/powershell/module/az.compute/new-azvmss) cmdlet.
virtual-machine-scale-sets Orchestration Modes Api Comparison https://github.com/MicrosoftDocs/azure-docs/commits/master/articles/virtual-machine-scale-sets/orchestration-modes-api-comparison.md
+
+ Title: Orchestration modes API comparison
+description: Learn about the API differences between the Uniform and Flexible orchestration modes.
++++ Last updated : 08/05/2021++++
+# Preview: Orchestration modes API comparison
+
+This article compares the API differences between Uniform and [Flexible orchestration](../virtual-machines/flexible-virtual-machine-scale-sets.md) modes for virtual machine scale sets. To learn more about Uniform and Flexible virtual machine scale sets, see [orchestration modes](virtual-machine-scale-sets-orchestration-modes.md).
+
+> [!IMPORTANT]
+> Virtual machine scale sets in Flexible orchestration mode is currently in public preview. An opt-in procedure is needed to use the public preview functionality described below.
+> This preview version is provided without a service level agreement and is not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
++
+## Instance view
+
+| Uniform API | Flexible alternative |
+|-|-|
+| Virtual machine scale sets Instance View | Get instance view on individual VMs; Use Resource Graph to query power state |
++
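As a hedged sketch of the Resource Graph alternative mentioned in the table above, the following Azure CLI command (which requires the `resource-graph` extension) queries the power state of individual VMs; the property path and projection are assumptions based on the standard VM resource shape in Azure Resource Graph.

```azurecli
# Sketch: query VM power state with Azure Resource Graph (requires the resource-graph extension).
az graph query -q "Resources
| where type =~ 'microsoft.compute/virtualmachines'
| extend powerState = tostring(properties.extended.instanceView.powerState.code)
| project name, resourceGroup, powerState"
```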
+## Scale set lifecycle batch operations
+
+| Uniform API | Flexible alternative |
+|-|-|
+| Virtual machine scale sets VM Lifecycle Batch Operations: | Invoke Single VM API on specific instances: |
+| [Deal