Updates from: 07/13/2022 01:07:49
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
Previously updated : 06/03/2022 Last updated : 07/12/2022 # Monitor Azure AD B2C with Azure Monitor
To create the custom authorization and delegation in Azure Lighthouse, we use an
| Region | Select the region where the resource will be deployed. |
| Msp Offer Name | A name describing this definition. For example, _Azure AD B2C Monitoring_. It's the name that will be displayed in Azure Lighthouse. The **MSP Offer Name** must be unique in your Azure AD. To monitor multiple Azure AD B2C tenants, use different names. |
| Msp Offer Description | A brief description of your offer. For example, _Enables Azure Monitor in Azure AD B2C_. |
- | Managed By Tenant Id | The **Tenant ID** of your Azure AD B2C tenant (also known as the directory ID). |
+ | Managed By Tenant ID | The **Tenant ID** of your Azure AD B2C tenant (also known as the directory ID). |
| Authorizations | Specify a JSON array of objects that include the Azure AD `principalId`, `principalIdDisplayName`, and Azure `roleDefinitionId`. The `principalId` is the **Object ID** of the B2C group or user that will have access to resources in this Azure subscription. For this walkthrough, specify the group's Object ID that you recorded earlier. For the `roleDefinitionId`, use the [built-in role](../role-based-access-control/built-in-roles.md) value for the _Contributor role_, `b24988ac-6180-42a0-ab88-20f7382dd24c`. A sketch of this array follows the table. |
| Rg Name | The name of the resource group you created earlier in your Azure AD tenant. For example, _azure-ad-b2c-monitor_. |
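For illustration, a minimal sketch of the **Authorizations** value; the Object ID and display name are placeholders you replace with your own group's values:

```json
[
  {
    "principalId": "<object-id-of-your-security-group>",
    "principalIdDisplayName": "Azure AD B2C monitoring group",
    "roleDefinitionId": "b24988ac-6180-42a0-ab88-20f7382dd24c"
  }
]
```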
You're ready to [create diagnostic settings](../active-directory/reports-monitor
To configure monitoring settings for Azure AD B2C activity logs: 1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure AD B2C administrative account. This account must be a member of the security group you specified in the [Select a security group](#32-select-a-security-group) step.
-1. Make sure you're using the directory that contains your Azure AD B2C tenant. Select the **Directories + subscriptions** icon in the portal toolbar.
-1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
+1. Make sure you're using the directory that contains your Azure AD B2C tenant:
+ 1. Select the **Directories + subscriptions** icon in the portal toolbar.
+ 2. On the **Portal settings | Directories + subscriptions** page, find your Azure AD B2C directory in the **Directory name** list, and then select **Switch**.
1. Select **Azure Active Directory**.
1. Under **Monitoring**, select **Diagnostic settings**.
-1. If there are existing settings for the resource, you'll see a list of settings already configured. Either select **Add diagnostic setting** to add a new setting, or select **Edit** to edit an existing setting. Each setting can have no more than one of each of the destination types.
+1. If there are existing settings for the resource, you'll see a list of settings already configured. Either select **Add diagnostic setting** to add a new setting, or select **Edit settings** to edit an existing setting. Each setting can have no more than one of each of the destination types.
- ![Diagnostics settings pane in Azure portal](./media/azure-monitor/azure-monitor-portal-05-diagnostic-settings-pane-enabled.png)
+ ![Screenshot of the diagnostics settings pane in Azure portal.](./media/azure-monitor/azure-monitor-portal-05-diagnostic-settings-pane-enabled.png)
1. Give your setting a name if it doesn't already have one.
-1. Check the box for each destination to send the logs. Select **Configure** to specify their settings **as described in the following table**.
-1. Select **Send to Log Analytics**, and then select the **Name of workspace** you created earlier (`AzureAdB2C`).
1. Select **AuditLogs** and **SignInLogs**.
+1. Select **Send to Log Analytics Workspace**, and then:
+ 1. Under **Subscription**, select your subscription.
+ 2. Under **Log Analytics Workspace**, select the name of the workspace you created earlier such as `AzureAdB2C`.
+   > [!NOTE]
+   > Only the **AuditLogs** and **SignInLogs** diagnostic settings are currently supported for Azure AD B2C tenants.
Now you can configure your Log Analytics workspace to visualize your data and co
Log queries help you to fully use the value of the data collected in Azure Monitor Logs. A powerful query language allows you to join data from multiple tables, aggregate large sets of data, and perform complex operations with minimal code. Virtually any question can be answered and analysis performed as long as the supporting data has been collected, and you understand how to construct the right query. For more information, see [Get started with log queries in Azure Monitor](../azure-monitor/logs/get-started-queries.md).
-1. From **Log Analytics workspace**, select **Logs**
+1. From the **Log Analytics workspace** window, select **Logs**.
1. In the query editor, paste the following [Kusto Query Language](/azure/data-explorer/kusto/query/) query. This query shows policy usage by operation over the past x days. The default duration is set to 90 days (90d). Notice that the query is focused only on the operation where a token/code is issued by policy. ```kusto
Workbooks provide a flexible canvas for data analysis and the creation of rich v
Follow the instructions below to create a new workbook using a JSON Gallery Template. This workbook provides a **User Insights** and **Authentication** dashboard for your Azure AD B2C tenant.
-1. From the **Log Analytics workspace**, select **Workbooks**.
+1. From the **Log Analytics workspace** window, select **Workbooks**.
1. From the toolbar, select the **+ New** option to create a new workbook.
1. On the **New workbook** page, select the **Advanced Editor** using the **</>** option on the toolbar.
The workbook will display reports in the form of a dashboard.
## Create alerts
-Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can create alerts based on specific performance metrics or when certain events occur. You can also create alerts on absence of an event, or a number of events are occur within a particular time window. For example, alerts can be used to notify you when average number of sign in exceeds a certain threshold. For more information, see [Create alerts](../azure-monitor/alerts/alerts-log.md).
+Alerts are created by alert rules in Azure Monitor and can automatically run saved queries or custom log searches at regular intervals. You can create alerts based on specific performance metrics or when certain events occur. You can also create alerts on the absence of an event, or when a number of events occur within a particular time window. For example, alerts can be used to notify you when the average number of sign-ins exceeds a certain threshold. For more information, see [Create alerts](../azure-monitor/alerts/alerts-log.md).
Use the following instructions to create a new Azure Alert, which will send an [email notification](../azure-monitor/alerts/action-groups.md#configure-notifications) whenever there's a 25% drop in the **Total Requests** compared to the previous period. The alert will run every 5 minutes and look for the drop in the last hour compared to the hour before it. The alerts are created using the Kusto Query Language.
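As an illustration only (the article's own alert query may differ), a sketch of the kind of Kusto query such an alert rule could evaluate, comparing sign-in request counts in the last hour against the hour before it:

```kusto
// Compare sign-in volume in the last hour with the preceding hour
// and flag a drop of 25% or more. Table and column names follow the
// standard Azure AD SigninLogs schema in Log Analytics.
SigninLogs
| where TimeGenerated > ago(2h)
| summarize CurrentHour  = countif(TimeGenerated > ago(1h)),
            PreviousHour = countif(TimeGenerated <= ago(1h))
| extend DropPercentage = iff(PreviousHour > 0,
    (todouble(PreviousHour) - CurrentHour) * 100.0 / PreviousHour, 0.0)
| where DropPercentage >= 25.0
```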
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
Once you've added the app ID and secret, use the following steps to add the Azu
1. Navigate to `/.auth/login/aadb2c`. The `/.auth/login` route points to the Azure Static app login endpoint. The `aadb2c` segment refers to your [OpenID Connect identity provider](#31-add-an-openid-connect-identity-provider). The following URL demonstrates an Azure Static app login endpoint: `https://witty-island-11111111.azurestaticapps.net/.auth/login/aadb2c`.
1. Complete the sign-up or sign-in process.
-1. In your browser debugger, [run the following JavaScript in the Console](/microsoft-edge/devtools-guide-chromium/console/console-javascript.md). The JavaScript code will present information about the sign in user.
+1. In your browser debugger, [run the following JavaScript in the Console](/microsoft-edge/devtools-guide-chromium/console/console-javascript). The JavaScript code will present information about the signed-in user.
```javascript async function getUserInfo() {
Once you've added the app ID and secret, use the following steps to add the Azu
## Next steps

* After successful authentication, you can show the display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out [Accessing user information in Azure Static Web Apps](../static-web-apps/user-information.md).
-* Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-azure-static-app-options.md).
+* Learn how to [customize and enhance the Azure AD B2C authentication experience for your web app](enable-authentication-azure-static-app-options.md).
active-directory-b2c Configure Authentication In Azure Web App File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app-file-based.md
# Configure authentication in an Azure Web App configuration file by using Azure AD B2C
-This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [File-based configuration in Azure App Service authentication](/app-service/configure-authentication-file-based.md) article.
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [File-based configuration in Azure App Service authentication](/azure/app-service/configure-authentication-file-based) article.
## Overview
From your server code, the provider-specific tokens are injected into the reques
## Next steps
-* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](/app-service/configure-authentication-user-identities).
-* Lear how to [Work with OAuth tokens in Azure App Service authentication](/app-service/configure-authentication-oauth-tokens).
+* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](/azure/app-service/configure-authentication-user-identities).
+* Learn how to [Work with OAuth tokens in Azure App Service authentication](/azure/app-service/configure-authentication-oauth-tokens).
active-directory-b2c Configure Authentication In Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app.md
# Configure authentication in an Azure Web App by using Azure AD B2C
-This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [configure your App Service or Azure Functions app to login using an OpenID Connect provider](/app-service/configure-authentication-provider-openid-connect.md) article.
+This article explains how to add Azure Active Directory B2C (Azure AD B2C) authentication functionality to an Azure Web App. For more information, check out the [configure your App Service or Azure Functions app to login using an OpenID Connect provider](/azure/app-service/configure-authentication-provider-openid-connect) article.
## Overview
To register your application, follow these steps:
1. For the **Client Secret**, provide the Web App (client) secret from [step 2.2](#step-22-create-a-client-secret).

   > [!TIP]
- > Your client secret will be stored as an app setting to ensure secrets are stored in a secure fashion. You can update that setting later to use [Key Vault references](/app-service/app-service-key-vault-references.md) if you wish to manage the secret in Azure Key Vault.
+ > Your client secret will be stored as an app setting to ensure secrets are stored in a secure fashion. You can update that setting later to use [Key Vault references](/azure/app-service/app-service-key-vault-references) if you wish to manage the secret in Azure Key Vault.
1. Keep the rest of the settings with the default values.
1. Press the **Add** button to finish setting up the identity provider.
From your server code, the provider-specific tokens are injected into the reques
## Next steps
-* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](/app-service/configure-authentication-user-identities).
-* Lear how to [Work with OAuth tokens in Azure App Service authentication](/app-service/configure-authentication-oauth-tokens).
+* After successful authentication, you can show display name on the navigation bar. To view the claims that the Azure AD B2C token returns to your app, check out the [Work with user identities in Azure App Service authentication](/azure/app-service/configure-authentication-user-identities).
+* Learn how to [Work with OAuth tokens in Azure App Service authentication](/azure/app-service/configure-authentication-oauth-tokens).
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Previously updated : 10/29/2021 Last updated : 07/12/2022
You learn how to register an application in the next tutorial.
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. Switch to the directory that contains your subscription:
+1. Make sure you're using the directory that contains your subscription:
+ 1. In the Azure portal toolbar, select the **Directories + subscriptions** filter icon. ![Directories + subscriptions filter icon](media/tutorial-create-tenant/directories-subscription-filter-icon.png)
- 1. Find the directory that contains your subscription and select the **Switch** button next to it. Switching a directory reloads the portal.
+ 1. Find the directory that contains your subscription and select the **Switch** button next to it. Switching a directory reloads the portal. If the directory that contains your subscription has the **Current** label next to it, you don't need to do anything.
- ![Directories + subscriptions with Switch button](media/tutorial-create-tenant/switch-directory.png)
+ ![Screenshot of the directories and subscriptions window.](media/tutorial-create-tenant/switch-directory.png)
1. Add **Microsoft.AzureActiveDirectory** as a resource provider for the Azure subscription you're using ([learn more](../azure-resource-manager/management/resource-providers-and-types.md?WT.mc_id=Portal-Microsoft_Azure_Support#register-resource-provider-1)):
active-directory-b2c Validation Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/validation-technical-profile.md
The following example uses these validation technical profiles:
```xml
<ValidationTechnicalProfiles>
  <ValidationTechnicalProfile ReferenceId="login-NonInteractive" ContinueOnError="false" />
- <ValidationTechnicalProfile ReferenceId="REST-ReadProfileFromCustomertsDatabase" ContinueOnError="true" >
+ <ValidationTechnicalProfile ReferenceId="REST-ReadProfileFromCustomersDatabase" ContinueOnError="true" >
    <Preconditions>
      <Precondition Type="ClaimsExist" ExecuteActionsIf="false">
        <Value>userType</Value>
active-directory Application Provisioning Quarantine Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-provisioning/application-provisioning-quarantine-status.md
After the first failure, the first retry happens within the next 2 hours (usuall
- The fifth retry happens 48 hours after the first failure.
- The sixth retry happens 72 hours after the first failure.
- The seventh retry happens 96 hours after the first failure.
-- The eigth retry happens 120 hours after the first failure.
+- The eighth retry happens 120 hours after the first failure.
This cycle is repeated every 24 hours until the 30th day when retries are stopped and the job is disabled.
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Previously updated : 03/02/2022 Last updated : 07/12/2022 -
By setting up federation with Google, you can allow invited users to sign in to
## What is the experience for the Google user?
-When a Google user redeems your invitation, their experience varies depending on whether they're already signed in to Google:
+You can invite a Google user to B2B collaboration in various ways. For example, you can [add them to your directory via the Azure portal](b2b-quickstart-add-guest-users-portal.md). When they redeem your invitation, their experience varies depending on whether they're already signed in to Google:
- Guest users who aren't signed in to Google will be prompted to do so.
- Guest users who are already signed in to Google will be prompted to choose the account they want to use. They must choose the account you used to invite them.
active-directory Road To The Cloud Implement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-implement.md
You can enrich user attributes in Azure AD to make more user attributes availabl
* App provisioning - The data source of app provisioning is Azure AD, and the necessary user attributes must be available there.
-* Application authorization - Token issued by Azure AD can include claims generated from user attributes.
-
-* Application can make authorization decision based on the claims in token.
+* Application authorization - Tokens issued by Azure AD can include claims generated from user attributes so that applications can make authorization decisions based on the claims in the token.
* Group membership population and maintenance - Dynamic groups enable dynamic population of group membership based on user attributes such as department information (a membership rule sketch follows below).
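As a quick illustration of that last point, a dynamic membership rule keyed on the department attribute could look like the following; the attribute value is a placeholder:

```
(user.department -eq "Sales")
```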
active-directory Road To The Cloud Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md
The five states have exit criteria to help you determine where your environment
The content then provides more detailed guidance organized to help with intentional changes to people, process, and technology to:
-* Establish Azure AD capabilities
+* Establish Azure AD footprint
* Implement a cloud-first approach
In enterprise-sized organizations, IAM transformation, or even transformation fr
[ ![Diagram that shows five elements, each depicting a possible network architecture. Options include cloud attached, hybrid, cloud first, AD minimized, and 100% cloud.](media/road-to-cloud-posture/road-to-the-cloud-five-states.png) ](media/road-to-cloud-posture/road-to-the-cloud-five-states.png#lightbox)
+>[!NOTE]
+> The states in this diagram represent a logical progression of cloud transformation.
+ **State 1 Cloud attached** - In this state, organizations have created an Azure AD tenant to enable user productivity and collaboration tools and the tenant is fully operational. Most companies that use Microsoft products and services in their IT environment are already in or beyond this state. In this state, operational costs may be higher because there are both an on-premises environment and a cloud environment to maintain and make interoperable. Also, people must have expertise in both environments to support their users and the organization. In this state:

* Devices are joined to AD and managed using group policy and/or on-premises device management tools.
In enterprise-sized organizations, IAM transformation, or even transformation fr
The transformation between the states is similar to moving locations:
-* **Establish new location** - You purchase your destination and establish connectivity between the current location and the new location. This enables you to maintain your productivity and ability to operate. In this content, the activities are described in **[Establish Azure AD capabilities](road-to-the-cloud-establish.md)**. The results transition you to State 2.
+* **Establish new location** - You purchase your destination and establish connectivity between the current location and the new location. This enables you to maintain your productivity and ability to operate. In this content, the activities are described in **[Establish Azure AD footprint](road-to-the-cloud-establish.md)**. The results transition you to State 2.
* **Limit new items in old location** - You stop investing in the old location and set policy to stage new items in new location. In this content, the activities are described in **[Implement cloud-first approach](road-to-the-cloud-implement.md)**. The activities set the foundation to migrate at scale and reach State 3.
As a migration of IAM to Azure AD is started, organizations must determine the p
:::image type="content" source="media/road-to-cloud-posture/road-to-the-cloud-migration.png" alt-text="Table depicting three major milestones that organizations move through when implementing an AD to Azure AD migration. These include Establish Azure AD capabilities, Implement cloud-first approach, and Move workloads to the cloud." border="false":::
-## Establish Azure AD capabilities
-
-* **Initialize tenant** - Create your new Azure AD tenant that supports the vision for your end-state deployment.
-
-* **Secure tenant** - Adopt a [Zero Trust](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/) approach and a security model that [protects your tenant from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md) early in your journey.
+* **Establish Azure AD footprint**: Initialize your new Azure AD tenant to support the vision for your end-state deployment. Adopt a [Zero Trust](https://www.microsoft.com/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/) approach and a security model that [protects your tenant from on-premises compromise](../fundamentals/protect-m365-from-on-premises-attacks.md) early in your journey.
-## Implement cloud-first approach
-Establish a policy that mandates all new devices, apps and services should be cloud-first. New applications and services using legacy protocols (NTLM, Kerberos, LDAP etc.) should be by exception only.
+* **Implement cloud-first approach**: Establish a policy that mandates that all new devices, apps, and services are cloud-first. New applications and services that use legacy protocols (NTLM, Kerberos, LDAP, and so on) should be allowed by exception only.
-## Transition to the cloud
-Shift the management and integration of users, apps and devices away from on-premises and over to cloud-first alternatives. Optimize user provisioning by taking advantage of [cloud-first provisioning capabilities](../governance/what-is-provisioning.md) that integrate with Azure AD.
+* **Transition to the cloud**: Shift the management and integration of users, apps and devices away from on-premises and over to cloud-first alternatives. Optimize user provisioning by taking advantage of [cloud-first provisioning capabilities](../governance/what-is-provisioning.md) that integrate with Azure AD.
The transformation changes how users accomplish tasks and how support teams provide end-user support. Initiatives or projects should be designed and implemented in a manner that minimizes the impact on user productivity. As part of the transformation, self-service IAM capabilities are introduced. Some portions of the workforce more easily adapt to the self-service user environment prevalent in cloud-based businesses.
active-directory What Is Identity Lifecycle Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/what-is-identity-lifecycle-management.md
The typical process for establishing identity lifecycle management in an organiz
2. Connect those systems of record with one or more directories and databases used by applications, and resolve any inconsistencies between the directories and the systems of record. For example, a directory may have obsolete data, such as an account for a former employee, that is no longer needed.
-3. Determine what processes can be used to supply authoritative information in the absence of a system of record. For example, if there are digital identities but visitors, but the organization has no database for visitors, then it may be necessary to find an alternate way to determine when an digital identity for a visitor is no longer needed.
+3. Determine what processes can be used to supply authoritative information in the absence of a system of record. For example, if there are digital identities for visitors, but the organization has no database for visitors, then it may be necessary to find an alternate way to determine when a digital identity for a visitor is no longer needed.
-4. Configure that changes from the system of record or other processes are replicated to each of the directories or databases that require an update.
+4. Ensure that changes from the system of record or other processes are replicated to each of the directories or databases that require an update.
## Identity lifecycle management for representing employees and other individuals with an organizational relationship

When planning identity lifecycle management for employees, or other individuals with an organizational relationship such as a contractor or student, many organizations model the "join, move, and leave" process. These are:

  - Join - when an individual comes into scope of needing access, an identity is needed by those applications, so a new digital identity may need to be created if one is not already available
- - Move - when an individual moves between boundaries, that require additional access authorizations to be added or removed to their digital identity
- - Leave- when an individual leaves the scope of needing access, access may need to be removed, and subsequently the identity may no longer by required by applications other than for audit or forensics purposes
+ - Move - when an individual moves between boundaries that require additional access authorizations to be added or removed to their digital identity
+ - Leave- when an individual leaves the scope of needing access, access may need to be removed, and subsequently the identity may no longer be required by applications other than for audit or forensics purposes
-So for example, if a new employee joins your organization, who has never been affiliated with your organization before, that employee will require a new digital identity, represented as a user account in Azure AD. The creation of this account would fall into a "Joiner" process, which could be automated if there was a system of record such as Workday that could indicate when the new employee starts work. Later, if your organization has an employee move from say, Sales to Marketing, they would fall into a "Mover" process. This would require removing the access rights they had in the Sales organization which they no longer require, and granting them rights in the Marketing organization that they new require.
+So for example, if a new employee joins your organization and that employee has never been affiliated with your organization before, that employee will require a new digital identity, represented as a user account in Azure AD. The creation of this account would fall into a "Joiner" process, which could be automated if there was a system of record such as Workday that could indicate when the new employee starts work. Later, if your organization has an employee move from, say, Sales to Marketing, they would fall into a "Mover" process. This would require removing the access rights they had in the Sales organization which they no longer require, and granting them rights in the Marketing organization that they now require.
## Identity lifecycle management for guests
active-directory How To Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-password-hash-synchronization.md
If your server has been locked down according to Federal Information Processing
3. Go to the configuration/runtime node at the end of the file.
4. Add the following node: `<enforceFIPSPolicy enabled="false"/>`
5. Save your changes.
+6. Reboot for the changes to take effect.
For reference, this snippet is what it should look like:
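A minimal sketch of how the edited section could look, assuming the standard .NET configuration file layout described in the steps above (the surrounding elements are reconstructed, not copied from the article):

```xml
<configuration>
  <runtime>
    <!-- Disables FIPS policy enforcement for this process, per the steps above -->
    <enforceFIPSPolicy enabled="false"/>
  </runtime>
</configuration>
```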
active-directory Reference Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adconnectivitytools.md
# Azure AD Connect: ADConnectivityTools PowerShell Reference
-The following documentation provides reference information for the ADConnectivityTools.psm1 PowerShell Module that is included with Azure AD Connect.
+The following documentation provides reference information for the ADConnectivityTools PowerShell Module that is included with Azure AD Connect in `C:\Program Files\Microsoft Azure Active Directory Connect\Tools\ADConnectivityTool.psm1`.
## Confirm-DnsConnectivity
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
Users in this role can read settings and administrative information across Micro
>- [Azure Information Protection](/azure/information-protection/what-is-information-protection) - Global Reader is supported [for central reporting](/azure/information-protection/reports-aip) only, and when your Azure AD organization isn't on the [unified labeling platform](/azure/information-protection/faqs#how-can-i-determine-if-my-tenant-is-on-the-unified-labeling-platform).
> - [SharePoint](https://admin.microsoft.com/sharepoint) - Global Reader currently can't access SharePoint using PowerShell.
> - [Power Platform admin center](https://admin.powerplatform.microsoft.com) - Global Reader is not yet supported in the Power Platform admin center.
+> - Microsoft Purview doesn't support the Global Reader role.
>
> These features are currently in development.
>
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
az aks create -n myAKSCluster2 -g myResourceGroup --enable-addons azure-keyvault
Or update an existing cluster with the add-on enabled:

```azurecli-interactive
-az aks update -g myResourceGroup -n myAKSCluster2 --enable-secret-rotation
+az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --enable-secret-rotation
```

To specify a custom rotation interval, use the `rotation-poll-interval` flag:

```azurecli-interactive
-az aks update -g myResourceGroup -n myAKSCluster2 --enable-secret-rotation --rotation-poll-interval 5m
+az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 5m
```

To disable autorotation, use the flag `disable-secret-rotation`:

```azurecli-interactive
-az aks update -g myResourceGroup -n myAKSCluster2 --disable-secret-rotation
+az aks addon update -g myResourceGroup -n myAKSCluster2 -a azure-keyvault-secrets-provider --disable-secret-rotation
```

### Sync mounted content with a Kubernetes secret
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
In the following example, the rate limit of 10 calls per 60 seconds is keyed by
| Name | Description | Required | Default |
| - | -- | -- | - |
| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. Policy expression is allowed. | Yes | N/A |
-| counter-key | The key to use for the rate limit policy. | Yes | N/A |
+| counter-key | The key to use for the rate limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
| increment-condition | The boolean expression specifying if the request should be counted towards the rate (`true`). | No | N/A |
| increment-count | The number by which the counter is increased per request. | No | 1 |
| renewal-period | The length in seconds of the sliding window during which the number of allowed requests should not exceed the value specified in `calls`. Policy expression is allowed. Maximum allowed value: 300 seconds. | Yes | N/A |
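A sketch of what such a `rate-limit-by-key` policy element could look like, assuming the counter is keyed by the caller IP address (the article's own example may use a different key):

```xml
<!-- Limit each caller IP address to 10 calls per 60-second sliding window -->
<rate-limit-by-key calls="10"
                   renewal-period="60"
                   counter-key="@(context.Request.IpAddress)" />
```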
In the following example, the quota is keyed by the caller IP address.
| Name | Description | Required | Default |
| - | -- | -- | - |
| bandwidth | The maximum total number of kilobytes allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
| calls | The maximum total number of calls allowed during the time interval specified in the `renewal-period`. | Either `calls`, `bandwidth`, or both together must be specified. | N/A |
-| counter-key | The key to use for the quota policy. | Yes | N/A |
+| counter-key | The key to use for the quota policy. For each key value, a single counter is used for all scopes at which the policy is configured. | Yes | N/A |
| increment-condition | The boolean expression specifying if the request should be counted towards the quota (`true`) | No | N/A |
| renewal-period | The time period in seconds after which the quota resets. When it's set to `0` the period is set to infinite. | Yes | N/A |
| first-period-start | The starting date and time for quota renewal periods, in the following format: `yyyy-MM-ddTHH:mm:ssZ` as specified by the ISO 8601 standard. | No | `0001-01-01T00:00:00Z` |
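For illustration, a sketch of a corresponding `quota-by-key` policy keyed by the caller IP address; the numeric values are placeholders, not values from the article:

```xml
<!-- Allow each caller IP address 10,000 calls and 40,000 KB per hour -->
<quota-by-key calls="10000"
              bandwidth="40000"
              renewal-period="3600"
              counter-key="@(context.Request.IpAddress)" />
```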
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
az network public-ip create \
## Create the backend servers
-A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install IIS on the virtual machines to test the application gateway.
+A backend can have NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant back-ends like Azure App Service. In this example, you create two virtual machines to use as backend servers for the application gateway. You also install NGINX on the virtual machines to test the application gateway.
#### Create two virtual machines
applied-ai-services Compose Custom Models Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/compose-custom-models-preview.md
To get started, you'll need the following resources:
:::image border="true" type="content" source="media/containers/keys-and-endpoint.png" alt-text="Still photo showing how to access resource key and endpoint URL."::: > [!TIP]
- > For more information, see* [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
+ > For more information, see [**create a Form Recognizer resource**](create-a-form-recognizer-resource.md).
* **An Azure storage account.** If you don't know how to create an Azure storage account, follow the [Azure Storage quickstart for Azure portal](../../storage/blobs/storage-quickstart-blobs-portal.md). You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Previously updated : 06/06/2022 Last updated : 07/11/2022 recommendations: false
Custom neural models currently only support key-value pairs and selection marks,
With the release of API version **2022-06-30-preview**, custom neural models will support tabular fields (tables):
-* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
-* Documents analyzed with custom neural models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
+* Models trained with API version 2022-06-30-preview or later will accept tabular field labels.
+* Documents analyzed with custom neural models using API version 2022-06-30-preview or later will produce tabular fields aggregated across the tables.
* The results can be found in the ```analyzeResult``` object's ```documents``` array that is returned following an analysis operation. Tabular fields support **cross page tables** by default:
Tabular fields are also useful when extracting repeating information within a do
## Supported regions
-For the **2022-06-30-preview**, custom neural models can only be trained in the following Azure regions:
-
-* AustraliaEast
-* BrazilSouth
-* CanadaCentral
-* CentralIndia
-* CentralUS
-* EastUS
-* EastUS2
-* FranceCentral
-* JapanEast
-* JioIndiaWest
-* KoreaCentral
-* NorthEurope
-* SouthCentralUS
-* SoutheastAsia
-* UKSouth
-* WestEurope
-* WestUS
-* WestUS2
-* WestUS3
+As of August 1, 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
+
+* Brazil South
+* Canada Central
+* Central India
+* Japan East
+* North Europe
+* South Central US
+* Southeast Asia
> [!TIP]
-> You can copy a model trained in one of the select regions listed above to **any other region** and use it accordingly.
+> You can [copy a model](disaster-recovery.md) trained in one of the select regions listed above to **any other region** and use it accordingly.
## Best practices
-Custom neural models differ from custom template models in a few different ways. The custom template or model relies on a consistent visual template to extract the labeled data. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. When you're choosing between the two model types, start with a neural model and test to determine if it supports your functional needs.
+Custom neural models differ from custom template models in a few different ways. The custom template or model relies on a consistent visual template to extract the labeled data. Custom neural models support structured, semi-structured, and unstructured documents to extract fields. When you're choosing between the two model types, start with a neural model, and test to determine if it supports your functional needs.
### Dealing with variations
Value tokens/words of one field must be either
Values in training cases should be diverse and representative. For example, if a field is named "date", values for this field should be a date. A synthetic value like a random string can affect model performance.

## Current Limitations

* The model doesn't recognize values split across page boundaries.
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
}
}
```

## Next steps

* Train a custom model:
https://{endpoint}/formrecognizer/documentModels:build?api-version=2022-06-30
* View the REST API: > [!div class="nextstepaction"]
- > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
+ > [Form Recognizer API v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v3-0-preview-2/operations/AnalyzeDocument)
applied-ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-model-overview.md
A composed model is created by taking a collection of custom models and assignin
| **Model ID** | **Text extraction** | **Language detection** | **Selection Marks** | **Tables** | **Paragraphs** | **Paragraph roles** | **Key-Value pairs** | **Fields** | |:--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |🆕 [prebuilt-read](concept-read.md#data-extraction) | ✓ | ✓ | | | ✓ | | | |
-|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
-|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
- | [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | | |
+|🆕 [prebuilt-tax.us.w2](concept-w2.md#field-extraction) | ✓ | | ✓ | | ✓ | | | ✓ |
+|🆕 [prebuilt-document](concept-general-document.md#data-extraction)| ✓ | | ✓ | ✓ | ✓ | | ✓ | |
+| [prebuilt-layout](concept-layout.md#data-extraction) | ✓ | | ✓ | ✓ | ✓ | ✓ | | |
| [prebuilt-invoice](concept-invoice.md#field-extraction) | ✓ | | ✓ | ✓ | ✓ | | ✓ | ✓ |
| [prebuilt-receipt](concept-receipt.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
| [prebuilt-idDocument](concept-id-document.md#field-extraction) | ✓ | | | | ✓ | | | ✓ |
applied-ai-services Deploy Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/deploy-label-tool.md
>
> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.

> [!NOTE]
applied-ai-services Build Custom Model V3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/how-to-guides/build-custom-model-v3.md
The Form Recognizer Studio provides and orchestrates all the API calls required
1. On the next step in the workflow, choose or create a Form Recognizer resource before you select continue. > [!IMPORTANT]
- > Custom neural models models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md).
+   > Custom neural models are only available in a few regions. If you plan on training a neural model, please select or create a resource in one of [these supported regions](../concept-custom-neural.md#supported-regions).
:::image type="content" source="../media/how-to/studio-select-resource.png" alt-text="Screenshot: Select the Form Recognizer resource.":::
applied-ai-services Label Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/label-tool.md
keywords: document processing
>
> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.

In this article, you'll use the Form Recognizer REST API with the Sample Labeling tool to train a custom model with manually labeled data.
applied-ai-services Try V3 Form Recognizer Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/quickstarts/try-v3-form-recognizer-studio.md
* An active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/). * A [**Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**Cognitive Services multi-service**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource.
+> [!TIP]
+> Create a Cognitive Services resource if you plan to access multiple cognitive services under a single endpoint/key. For Form Recognizer access only, create a Form Recognizer resource. Please note that you'll need a single-service resource if you intend to use [Azure Active Directory authentication](../../../active-directory/authentication/overview-authentication.md).
+ ## Prebuilt models Prebuilt models help you add Form Recognizer features to your apps without having to build, train, and publish your own models. You can choose from several prebuilt models, each of which has its own set of supported data fields. The choice of model to use for the analyze operation depends on the type of document to be analyzed. The following prebuilt models are currently supported by Form Recognizer:
applied-ai-services Supervised Table Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/supervised-table-tags.md
Previously updated : 07/23/2021 Last updated : 07/11/2022 #Customer intent: As a user of the Form Recognizer custom model service, I want to ensure I'm training my model in the best way.
>
> * For an enhanced experience and advanced model quality, try the [Form Recognizer v3.0 Studio (preview)](https://formrecognizer.appliedai.azure.com/studio).
> * The v3.0 Studio supports any model trained with v2.1 labeled data.
-> * You can refer to the API migration guide for detailed information about migrating from v2.1 to v3.0.
+> * You can refer to the [API migration guide](v3-migration-guide.md) for detailed information about migrating from v2.1 to v3.0.
> * *See* our [**REST API**](quickstarts/try-v3-rest-api.md) or [**C#**](quickstarts/try-v3-csharp-sdk.md), [**Java**](quickstarts/try-v3-java-sdk.md), [**JavaScript**](quickstarts/try-v3-javascript-sdk.md), or [Python](quickstarts/try-v3-python-sdk.md) SDK quickstarts to get started with the V3.0 preview.

In this article, you'll learn how to train your custom template model with table tags (labels). Some scenarios require more complex labeling than simply aligning key-value pairs. Such scenarios include extracting information from forms with complex hierarchical structures or encountering items that aren't automatically detected and extracted by the service. In these cases, you can use table tags to train your custom template model.
Here are some examples of when using table tags would be appropriate:
## Label your table tag data
-* If your project has a table tag, you can open the labeling panel and populate the tag as you would label key-value fields.
+* If your project has a table tag, you can open the labeling panel, and populate the tag as you would label key-value fields.
:::image type="content" source="media/table-labeling.png" alt-text="Label with table tags"::: ## Next steps
applied-ai-services How To Cache Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/immersive-reader/how-to-cache-token.md
This article demonstrates how to cache the authentication token in order to impr
## Using ASP.NET
-Import the **Microsoft.IdentityModel.Clients.ActiveDirectory** NuGet package, which is used to acquire a token. Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
+Import the **Microsoft.Identity.Client** NuGet package, which is used to acquire a token.
+
+Create a confidential client application property.
+
+```csharp
+private IConfidentialClientApplication _confidentialClientApplication;
+private IConfidentialClientApplication ConfidentialClientApplication
+{
+ get {
+ if (_confidentialClientApplication == null) {
+ _confidentialClientApplication = ConfidentialClientApplicationBuilder.Create(ClientId)
+ .WithClientSecret(ClientSecret)
+ .WithAuthority($"https://login.windows.net/{TenantId}")
+ .Build();
+ }
+
+ return _confidentialClientApplication;
+ }
+}
+```
+
+Next, use the following code to acquire an `AuthenticationResult`, using the authentication values you got when you [created the Immersive Reader resource](./how-to-create-immersive-reader.md).
> [!IMPORTANT] > The [Microsoft.IdentityModel.Clients.ActiveDirectory](https://www.nuget.org/packages/Microsoft.IdentityModel.Clients.ActiveDirectory) NuGet package and Azure AD Authentication Library (ADAL) have been deprecated. No new features have been added since June 30, 2020. We strongly encourage you to upgrade, see the [migration guide](../../active-directory/develop/msal-migration.md) for more details. ```csharp
-private async Task<AuthenticationResult> GetTokenAsync()
+public async Task<string> GetTokenAsync()
{
- AuthenticationContext authContext = new AuthenticationContext($"https://login.windows.net/{TENANT_ID}");
- ClientCredential clientCredential = new ClientCredential(CLIENT_ID, CLIENT_SECRET);
- AuthenticationResult authResult = await authContext.AcquireTokenAsync("https://cognitiveservices.azure.com/", clientCredential);
- return authResult;
+ const string resource = "https://cognitiveservices.azure.com/";
+
+ var authResult = await ConfidentialClientApplication.AcquireTokenForClient(
+ new[] { $"{resource}/.default" })
+ .ExecuteAsync()
+ .ConfigureAwait(false);
+
+ return authResult.AccessToken;
}
```
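To complete the caching picture the article describes, here's a minimal sketch (not the article's exact code) that reuses the acquired token until shortly before it expires, based on `AuthenticationResult.ExpiresOn`; note that MSAL's `AcquireTokenForClient` also maintains its own in-memory application token cache:

```csharp
private string _cachedToken;
private DateTimeOffset _cachedTokenExpiry = DateTimeOffset.MinValue;

public async Task<string> GetCachedTokenAsync()
{
    // Reuse the cached token until five minutes before it expires.
    if (_cachedToken != null && DateTimeOffset.UtcNow < _cachedTokenExpiry.AddMinutes(-5))
    {
        return _cachedToken;
    }

    var authResult = await ConfidentialClientApplication.AcquireTokenForClient(
        new[] { "https://cognitiveservices.azure.com/.default" })
        .ExecuteAsync()
        .ConfigureAwait(false);

    _cachedToken = authResult.AccessToken;
    _cachedTokenExpiry = authResult.ExpiresOn;
    return _cachedToken;
}
```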
automation Automation Runbook Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md
Python 3 runbooks are supported in the following Azure global infrastructures:
* For Python 2, Windows Hybrid Runbook Workers are supported with [python 2.7](https://www.python.org/downloads/release/latest/python2) installed. * For Python 3 Cloud Jobs, Python 3.8 version is supported. Scripts and packages from any 3.x version might work if the code is compatible across different versions. * For Python 3 Hybrid jobs on Windows machines, you can choose to install any 3.x version you may want to use.
-* For Python 3 Hybrid jobs on Linux machines, we depend on the Python 3 version installed on the machine to run DSC OMSConfig and the Linux Hybrid Worker. We recommend installing 3.6 on Linux machines. However, different versions should also work if there are no breaking changes in method signatures or contracts between versions of Python 3.
+* For Python 3 Hybrid jobs on Linux machines, we depend on the Python 3 version installed on the machine to run DSC OMSConfig and the Linux Hybrid Worker. Different versions should work if there are no breaking changes in method signatures or contracts between versions of Python 3.
### Limitations
azure-app-configuration Concept Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/concept-github-action.md
Input parameters specify data used by the action during runtime. The following
|-|-|-|
| configurationFile | Yes | Relative path to the configuration file in the repository. Glob patterns are supported and can include multiple files. |
| format | Yes | File format of the configuration file. Valid formats are: JSON, YAML, properties. |
-| connectionString | Yes | Connection string for the App Configuration instance. The connection string should be stored as a secret in the GitHub repository, and only the secret name should be used in the workflow. |
+| connectionString | Yes | Read-write connection string for the App Configuration instance. The connection string should be stored as a secret in the GitHub repository, and only the secret name should be used in the workflow. |
| separator | Yes | Separator used when flattening the configuration file to key-value pairs. Valid values are: . , ; : - _ __ / |
| prefix | No | Prefix to be added to the start of keys. |
| label | No | Label used when setting key-value pairs. If unspecified, a null label is used. |
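For illustration, a sketch of a workflow step wiring up these inputs; the action reference, file path, and secret name are assumptions to adapt to your setup:

```yaml
# Sketch only: action version, file path, and secret name are placeholders.
- uses: azure/appconfiguration-sync@v1
  with:
    configurationFile: 'config/appsettings.json'
    format: 'json'
    connectionString: ${{ secrets.APP_CONFIG_CONNECTION_STRING }}
    separator: ':'
    prefix: 'App:'
    label: 'Production'
```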
azure-arc Clean Up Past Installation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/clean-up-past-installation.md
+
+ Title: Clean up past installations
+description: Describes how to remove Azure Arc-enabled data controller and associated resources from past installations.
++++++ Last updated : 07/11/2022+++
+# Clean up from past installations
+
+If you installed the data controller in the past and later deleted the data controller, there may be some cluster level objects that would still need to be deleted.
+
+This article describes how to delete these cluster level objects.
+
+## Replace values in sample script
+
+For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace, that is, the name of the namespace the data controller was deployed in. If you're not sure of the namespace, you can infer it from the cluster role binding names returned by `kubectl get clusterrolebinding` (they're prefixed with the namespace), or check the `mutatingwebhookconfiguration` name returned by `kubectl get mutatingwebhookconfiguration`.
+
+## Run script to remove artifacts
+
+Run the following commands to delete the data controller cluster level objects:
+
+> [!NOTE]
+> Not all of these objects will exist in your environment. The objects in your environment depend on which version of the Arc data controller was installed
+
+```console
+# Clean up azure arc data service artifacts
+
+# Custom resource definitions (CRD)
+kubectl delete crd datacontrollers.arcdata.microsoft.com
+kubectl delete crd postgresqls.arcdata.microsoft.com
+kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com
+kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com
+kubectl delete crd dags.sql.arcdata.microsoft.com
+kubectl delete crd exporttasks.tasks.arcdata.microsoft.com
+kubectl delete crd monitors.arcdata.microsoft.com
+kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com
+
+# Substitute the name of the namespace the data controller was deployed in into {namespace}.
+
+# Cluster roles and role bindings
+kubectl delete clusterrole arcdataservices-extension
+kubectl delete clusterrole arc:cr-arc-metricsdc-reader
+kubectl delete clusterrole arc:cr-arc-dc-watch
+kubectl delete clusterrole cr-arc-webhook-job
+kubectl delete clusterrole {namespace}:cr-upgrade-worker
+kubectl delete clusterrole {namespace}:cr-deployer
+kubectl delete clusterrolebinding {namespace}:crb-arc-metricsdc-reader
+kubectl delete clusterrolebinding {namespace}:crb-arc-dc-watch
+kubectl delete clusterrolebinding crb-arc-webhook-job
+kubectl delete clusterrolebinding {namespace}:crb-upgrade-worker
+kubectl delete clusterrolebinding {namespace}:crb-deployer
+
+# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get clusterrolebinding'
+
+# API services
+# Up to May 2021 release
+kubectl delete apiservice v1alpha1.arcdata.microsoft.com
+kubectl delete apiservice v1alpha1.sql.arcdata.microsoft.com
+
+# June 2021 release
+kubectl delete apiservice v1beta1.arcdata.microsoft.com
+kubectl delete apiservice v1beta1.sql.arcdata.microsoft.com
+
+# GA/July 2021 release
+kubectl delete apiservice v1.arcdata.microsoft.com
+kubectl delete apiservice v1.sql.arcdata.microsoft.com
+
+# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get mutatingwebhookconfiguration'
+kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{namespace}
+```
+
+## Next steps
+
+[Start by creating a Data Controller](create-data-controller-indirect-cli.md)
+
+Already created a Data Controller? [Create an Azure Arc-enabled SQL Managed Instance](create-sql-managed-instance.md)
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
Title: Create a Data Controller using Kubernetes tools
-description: Create a Data Controller using Kubernetes tools
+ Title: Create a data controller using Kubernetes tools
+description: Create a data controller using Kubernetes tools
Last updated 11/03/2021
-# Create Azure Arc data controller using Kubernetes tools
+# Create Azure Arc-enabled data controller using Kubernetes tools
+A data controller manages Azure Arc-enabled data services for a Kubernetes cluster. This article describes how to use Kubernetes tools to create a data controller.
-## Prerequisites
-
-Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information.
+Creating the data controller has the following high-level steps:
-To create the Azure Arc data controller using Kubernetes tools you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
-
-[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
+1. Create the namespace and bootstrapper service
+1. Create the data controller
> [!NOTE]
-> Some of the steps to create the Azure Arc data controller that are indicated below require Kubernetes cluster administrator permissions. If you are not a Kubernetes cluster administrator, you will need to have the Kubernetes cluster administrator perform these steps on your behalf.
-
-### Cleanup from past installations
-
-If you installed the Azure Arc data controller in the past on the same cluster and deleted the Azure Arc data controller, there may be some cluster level objects that would still need to be deleted.
+> For simplicity, the steps below assume that you are a Kubernetes cluster administrator. For production deployments or more secure environments, it is recommended to follow the security best practices of "least privilege" when deploying the data controller by granting only specific permissions to users and service accounts involved in the deployment process.
-For some of the tasks, you'll need to replace `{namespace}` with the value for your namespace. Substitute the name of the namespace the data controller was deployed in into `{namespace}`. If unsure, get the name of the `mutatingwebhookconfiguration` using `kubectl get clusterrolebinding`.
-Run the following commands to delete the Azure Arc data controller cluster level objects:
-
-```console
-# Cleanup azure arc data service artifacts
-
-# Note: not all of these objects will exist in your environment depending on which version of the Arc data controller was installed
-
-# Custom resource definitions (CRD)
-kubectl delete crd datacontrollers.arcdata.microsoft.com
-kubectl delete crd postgresqls.arcdata.microsoft.com
-kubectl delete crd sqlmanagedinstances.sql.arcdata.microsoft.com
-kubectl delete crd sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com
-kubectl delete crd dags.sql.arcdata.microsoft.com
-kubectl delete crd exporttasks.tasks.arcdata.microsoft.com
-kubectl delete crd monitors.arcdata.microsoft.com
-kubectl delete crd activedirectoryconnectors.arcdata.microsoft.com
-
-# Substitute the name of the namespace the data controller was deployed in into {namespace}.
-
-# Cluster roles and role bindings
-kubectl delete clusterrole arcdataservices-extension
-kubectl delete clusterrole arc:cr-arc-metricsdc-reader
-kubectl delete clusterrole arc:cr-arc-dc-watch
-kubectl delete clusterrole cr-arc-webhook-job
-kubectl delete clusterrole {namespace}:cr-upgrade-worker
-kubectl delete clusterrolebinding {namespace}:crb-arc-metricsdc-reader
-kubectl delete clusterrolebinding {namespace}:crb-arc-dc-watch
-kubectl delete clusterrolebinding crb-arc-webhook-job
-kubectl delete clusterrolebinding {namespace}:crb-upgrade-worker
-
-# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get clusterrolebinding'
-
-# API services
-# Up to May 2021 release
-kubectl delete apiservice v1alpha1.arcdata.microsoft.com
-kubectl delete apiservice v1alpha1.sql.arcdata.microsoft.com
-
-# June 2021 release
-kubectl delete apiservice v1beta1.arcdata.microsoft.com
-kubectl delete apiservice v1beta1.sql.arcdata.microsoft.com
-
-# GA/July 2021 release
-kubectl delete apiservice v1.arcdata.microsoft.com
-kubectl delete apiservice v1.sql.arcdata.microsoft.com
-
-# Substitute the name of the namespace the data controller was deployed in into {namespace}. If unsure, get the name of the mutatingwebhookconfiguration using 'kubectl get mutatingwebhookconfiguration'
-kubectl delete mutatingwebhookconfiguration arcdata.microsoft.com-webhook-{namespace}
-
-```
+## Prerequisites
-## Overview
+Review the topic [Plan an Azure Arc-enabled data services deployment](plan-azure-arc-data-services.md) for overview information.
-Creating the Azure Arc data controller has the following high level steps:
+To create the data controller using Kubernetes tools, you will need to have the Kubernetes tools installed. The examples in this article will use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or `helm` if you are familiar with those tools and Kubernetes yaml/json.
- > [!IMPORTANT]
- > Some of the steps below require Kubernetes cluster administrator permissions.
+[Install the kubectl tool](https://kubernetes.io/docs/tasks/tools/install-kubectl/)
-1. Create the custom resource definitions for the Arc data controller, Azure SQL managed instance, and PostgreSQL Hyperscale.
-1. Create a namespace in which the data controller will be created.
-1. Create the bootstrapper service including the replica set, service account, role, and role binding.
-1. Create a secret for the data controller administrator username and password.
-1. Create the webhook deployment job, cluster role and cluster role binding.
-1. Create the data controller.
+## Create the namespace and bootstrapper service
+The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller.
-## Create the custom resource definitions
+Save a copy of [bootstrapper-unified.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper-unified.yaml), and replace the placeholder `{{NAMESPACE}}` in *all the places* in the file with the desired namespace name, for example: `arc`.
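+
+If you prefer the command line to a text editor, one possible way to download the template and substitute the placeholder is shown below; this is only a sketch that assumes `curl` and GNU `sed` are available and that you chose the example namespace `arc`.
+
+```console
+# Download the template, then replace every {{NAMESPACE}} placeholder with the chosen namespace.
+curl -o bootstrapper-unified.yaml https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper-unified.yaml
+sed -i 's/{{NAMESPACE}}/arc/g' bootstrapper-unified.yaml
+```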
-Run the following command to create the custom resource definitions.
+> [!IMPORTANT]
+> The bootstrapper-unified.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following:
+- Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md).
+- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) named `arc-private-registry` for your private container registry.
+- Change the image URL for the bootstrapper image in the bootstrap.yaml file.
+- Replace `arc-private-registry` in the bootstrap.yaml file if a different name was used for the image pull secret.
- > [!IMPORTANT]
- > Requires Kubernetes cluster administrator permissions.
+Run the following command to create the namespace and bootstrapper service with the edited file.
```console
-kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
+kubectl apply --namespace arc -f bootstrapper-unified.yaml
```
-## Create a namespace in which the data controller will be created
-
-Run a command similar to the following to create a new, dedicated namespace in which the data controller will be created. In this example and the remainder of the examples in this article, a namespace name of `arc` will be used. If you choose to use a different name, then use the same name throughout.
+Verify that the bootstrapper pod is running using the following command.
```console
-kubectl create namespace arc
+kubectl get pod --namespace arc -l app=bootstrapper
```
-If you are using OpenShift, you will need to edit the `openshift.io/sa.scc.supplemental-groups` and `openshift.io/sa.scc.uid-range` annotations on the namespace using `kubectl edit namespace <name of namespace>`. Change these existing annotations to match these _specific_ UID and fsGroup IDs/ranges.
-
-```console
-openshift.io/sa.scc.supplemental-groups: 1000700001/10000
-openshift.io/sa.scc.uid-range: 1000700001/10000
-```
-
-If other people will be using this namespace that are not cluster administrators, we recommend creating a namespace admin role and granting that role to those users through a role binding. The namespace admin should have full permissions on the namespace. More granular roles and example role bindings can be found on the [Azure Arc GitHub repository](https://github.com/microsoft/azure_arc/tree/main/arc_data_services/deploy/yaml/rbac).
-
-## Create the bootstrapper service
-The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller, SQL managed instances, or PostgreSQL Hyperscale server groups.
+If the status is not _Running_, run the command a few times until the status is _Running_.
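+
+Alternatively, rather than re-running the command, you can block until the pod is ready; a small sketch that assumes the `app=bootstrapper` label used in the command above:
+
+```console
+# Wait up to five minutes for the bootstrapper pod to report Ready.
+kubectl wait --namespace arc --for=condition=Ready pod -l app=bootstrapper --timeout=300s
+```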
-Run the following command to create a bootstrapper service, a service account for the bootstrapper service, and a role and role binding for the bootstrapper service account.
-
-```console
-kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper.yaml
-```
-
-Verify that the bootstrapper pod is running using the following command. You may need to run it a few times until the status changes to `Running`.
+## Create the data controller
-```console
-kubectl get pod --namespace arc
-```
+Now you are ready to create the data controller itself.
-The bootstrapper.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment does not have access directly to the Microsoft Container Registry, you can do the following:
-- Follow the steps to [pull the container images from the Microsoft Container Registry and push them to a private container registry](offline-deployment.md).-- [Create an image pull secret](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-lin) for your private container registry.-- Add an image pull secret to the bootstrapper container. See example below.-- Change the image location for the bootstrapper image. See example below.-
-The example below assumes that you created a image pull secret name `arc-private-registry`.
-
-```yaml
-#Just showing only the relevant part of the bootstrapper.yaml template file here
- spec:
- serviceAccountName: sa-bootstrapper
- nodeSelector:
- kubernetes.io/os: linux
- imagePullSecrets:
- - name: arc-private-registry #Create this image pull secret if you are using a private container registry
- containers:
- - name: bootstrapper
- image: mcr.microsoft.com/arcdata/arc-bootstrapper:v1.1.0_2021-11-02 #Change this registry location if you are using a private container registry.
- imagePullPolicy: Always
-```
+First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
-## Create secrets for the metrics and logs dashboards
+### Create the metrics and logs dashboards user names and passwords
-You can specify a user name and password that is used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it with only those that need to have these privileges.
+At the top of the file, you can specify a user name and password that is used to authenticate to the metrics and logs dashboards as an administrator. Choose a secure password and share it with only those that need to have these privileges.
A Kubernetes secret is stored as a base64 encoded string - one for the username and one for the password.
echo -n '<your string to encode here>' | base64
# echo -n 'example' | base64 ```
-Once you have encoded the usernames and passwords you can create a file based on the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/controller-login-secret.yaml) and replace the usernames and passwords with your own.
-
-Then run the following command to create the secret.
-
-```console
-kubectl create --namespace arc -f <path to your data controller secret file>
-
-#Example
-kubectl create --namespace arc -f C:\arc-data-services\controller-login-secret.yaml
-```
-
-## Create certificates for logs and metrics dashboards
-
-Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify during Kubernetes native tools deployment](monitor-certificates.md).
-
-## Create the webhook deployment job, cluster role and cluster role binding
-
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/web-hook.yaml) locally on your computer so that you can modify some of the settings.
+### Create certificates for logs and metrics dashboards
-Edit the file and replace `{{namespace}}` in all places with the name of the namespace you created in the previous step. **Save the file.**
+Optionally, you can create SSL/TLS certificates for the logs and metrics dashboards. Follow the instructions at [Specify SSL/TLS certificates during Kubernetes native tools deployment](monitor-certificates.md).
-Run the following command to create the cluster role and cluster role bindings.
+### Edit the data controller configuration
- > [!IMPORTANT]
- > Requires Kubernetes cluster administrator permissions.
-
-```console
-kubectl create -n arc -f <path to the edited template file on your computer>
-```
--
-## Create the data controller
-
-Now you are ready to create the data controller itself.
-
-First, create a copy of the [template file](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/data-controller.yaml) locally on your computer so that you can modify some of the settings.
-
-Edit the following as needed:
+Edit the data controller configuration as needed:
**REQUIRED** - **location**: Change this to be the Azure location where the _metadata_ about the data controller will be stored. Review the [list of available regions](overview.md#supported-regions).
Edit the following as needed:
- **name**: The default name of the data controller is `arc`, but you can change it if you want. - **displayName**: Set this to the same value as the name attribute at the top of the file. - **registry**: The Microsoft Container Registry is the default. If you are pulling the images from the Microsoft Container Registry and [pushing them to a private container registry](offline-deployment.md), enter the IP address or DNS name of your registry here.-- **dockerRegistry**: The image pull secret to use to pull the images from a private container registry if required.
+- **dockerRegistry**: The secret to use to pull the images from a private container registry if required.
- **repository**: The default repository on the Microsoft Container Registry is `arcdata`. If you are using a private container registry, enter the path the folder/repository containing the Azure Arc-enabled data services container images. - **imageTag**: The current latest version tag is defaulted in the template, but you can change it if you want to use an older version. - **logsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the logs UI certificate. - **metricsui-certificate-secret**: The name of the secret created on the Kubernetes cluster for the metrics UI certificate.
-The following example shows a completed data controller yaml file. Update the example for your environment, based on your requirements, and the information above.
+The following example shows a completed data controller yaml.
:::code language="yaml" source="~/azure_arc_sample/arc_data_services/deploy/yaml/data-controller.yaml":::
Save the edited file on your local computer and run the following command to cre
kubectl create --namespace arc -f <path to your data controller file> #Example
-kubectl create --namespace arc -f C:\arc-data-services\data-controller.yaml
+kubectl create --namespace arc -f data-controller.yaml
``` ## Monitoring the creation status Creating the controller will take a few minutes to complete. You can monitor the progress in another terminal window with the following commands:
-> [!NOTE]
-> The example commands below assume that you created a data controller and Kubernetes namespace with the name `arc`. If you used a different namespace/data controller name, you can replace `arc` with your name.
- ```console
-kubectl get datacontroller/arc --namespace arc
+kubectl get datacontroller --namespace arc
``` ```console kubectl get pods --namespace arc ```
-You can also check on the creation status of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.
+You can also check on the creation status or logs of any particular pod by running a command like below. This is especially useful for troubleshooting any issues.
```console kubectl describe pod/<pod name> --namespace arc
+kubectl logs <pod name> --namespace arc
#Example: #kubectl describe pod/control-2g7bl --namespace arc
+#kubectl logs control-2g7bl --namespace arc
``` ## Troubleshooting creation problems
If you encounter any troubles with creation, please see the [troubleshooting gui
## Next steps - [Create a SQL managed instance using Kubernetes-native tools](./create-sql-managed-instance-using-kubernetes-native-tools.md)-- [Create a PostgreSQL Hyperscale server group using Kubernetes-native tools](./create-postgresql-hyperscale-server-group-kubernetes-native-tools.md)
+- [Create a PostgreSQL Hyperscale server group using Kubernetes-native tools](./create-postgresql-hyperscale-server-group-kubernetes-native-tools.md)
azure-arc Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/point-in-time-restore.md
The backups are stored under `/var/opt/mssql/backups/archived/<dbname>/<datetime
Point-in-time restore to Azure Arc-enabled SQL Managed Instance has the following limitations: -- Point-in-time restore of a whole Azure Arc-enabled SQL Managed Instance is not possible. -- An Azure Arc-enabled SQL managed instance that is deployed with high availability does not currently support point-in-time restore.-- You can only restore to the same Azure Arc-enabled SQL managed instance.-- Dropping and creating different databases with same names isn't handled properly at this time.-- Providing a future date when executing the restore operation using ```--dry-run``` will result in an error---
+- Point-in-time restore is a database-level feature, not an instance-level feature. You cannot restore the entire instance with point-in-time restore.
+- You can only restore to the same Azure Arc-enabled SQL managed instance from which the backup was taken.
## Next steps
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## July 12, 2022
+
+This release is published July 12, 2022.
+
+### Image tag
+
+`v1.9.0_2022-07-12`
+
+For complete release version information, see [Version log](version-log.md#july-12-2022).
+
+### Miscellaneous
+
+- Extended the disk metrics reported in monitoring dashboards to include more queue length stats and more counters for IOPS. All disks whose names start with `vd` or `sd` are now in scope for data collection.
+
+### Arc-enabled SQL Managed Instance
+
+- Added buffer cache hit ratio to `collectd` and surfaced it in monitoring dashboards.
+- Improved the formatting of the legends on some dashboards.
+- Added process level CPU and memory metrics to the monitoring dashboards for the SQL managed instance process.
+- The `syncSecondaryToCommit` property can now be viewed and edited in the Azure portal and Azure Data Studio.
+- Added ability to set the DNS name for the readableSecondaries service in Azure CLI and Azure portal.
+- Now collecting `agent.log`, `security.log`, and `sqlagentstartup.log` for Arc-enabled SQL Managed Instance into Elasticsearch so they're searchable via Kibana, and optionally uploading them to Azure Log Analytics.
+- Added more notifications for cases where provisioning new SQL managed instances is blocked because billing data isn't being exported and uploaded to Azure.
+
+### Data controller
+
+- Permissions required to deploy the Arc data controller have been reduced to a least-privilege level.
+- When deployed via the Azure CLI, the Arc data controller is now installed via a K8s job that uses a helm chart to do the installation. There's no change to the user experience.
+ ## June 14, 2022 This release is published June 14, 2022.
Both custom resource definitions (CRD) for PostgreSQL have been consolidated int
|February 2021 and prior| postgresql-11s.arcdata.microsoft.com<br/>postgresql-12s.arcdata.microsoft.com | |Beginning March 2021 | postgresqls.arcdata.microsoft.com
-You will delete the previous CRDs as you cleanup past installations. See [Cleanup from past installations](create-data-controller-using-kubernetes-native-tools.md#cleanup-from-past-installations).
- ### Azure Arc-enabled SQL Managed Instance - You can now create a SQL Managed Instance from the Azure portal in the direct connected mode.
azure-arc Upgrade Data Controller Indirect Kubernetes Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-data-controller-indirect-kubernetes-tools.md
Title: Upgrade indirectly connected Azure Arc data controller using Kubernetes tools
-description: Article describes how to upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
+ Title: Upgrade indirectly connected data controller for Azure Arc using Kubernetes tools
+description: Article describes how to upgrade an indirectly connected data controller for Azure Arc using Kubernetes tools
Last updated 07/07/2022
-# Upgrade an indirectly connected Azure Arc data controller using Kubernetes tools
+# Upgrade an indirectly connected Azure Arc-enabled data controller using Kubernetes tools
This article explains how to upgrade an indirectly connected Azure Arc-enabled data controller with Kubernetes tools.
During a data controller upgrade, portions of the data control plane such as Cus
In this article, you'll apply a .yaml file to:
-1. Specify a service account.
-1. Set the cluster roles.
-1. Set the cluster role bindings.
-1. Set the job.
+1. Create the service account for running the upgrade.
+1. Upgrade the bootstrapper.
+1. Upgrade the data controller.
> [!NOTE] > Some of the data services tiers and modes are generally available and some are in preview.
In this article, you'll apply a .yaml file to:
## Prerequisites
-Prior to beginning the upgrade of the Azure Arc data controller, you'll need:
+Prior to beginning the upgrade of the data controller, you'll need:
- To connect and authenticate to a Kubernetes cluster - An existing Kubernetes context selected
You need an indirectly connected data controller with the `imageTag: v1.0.0_2021
## Install tools
-To upgrade the Azure Arc data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
+To upgrade the data controller using Kubernetes tools, you need to have the Kubernetes tools installed.
The examples in this article use `kubectl`, but similar approaches could be used with other Kubernetes tools such as the Kubernetes dashboard, `oc`, or helm if you're familiar with those tools and Kubernetes yaml/json.
Found 2 valid versions. The current datacontroller version is <current-version>
... ```
-## Create or download .yaml file
-
-To upgrade the data controller, you'll apply a yaml file to the Kubernetes cluster. The example file for the upgrade is available in GitHub at <https://github.com/microsoft/azure_arc/blob/main/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml>.
-
-You can download the file - and other Azure Arc related demonstration files - by cloning the repository. For example:
-
-```azurecli
-git clone https://github.com/microsoft/azure-arc
-```
-
-For more information, see [Cloning a repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository) in the GitHub docs.
-
-The following steps use files from the repository.
-
-In the yaml file, you'll replace ```{{namespace}}``` with your namespace.
- ## Upgrade data controller This section shows how to upgrade an indirectly connected data controller.
This section shows how to upgrade an indirectly connected data controller.
### Upgrade
-You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the Azure Arc data controller.
-
-### Specify the service account
-
-The upgrade requires an elevated service account for the upgrade job.
-
-To specify the service account:
-
-1. Describe the service account in a .yaml file. The following example sets a name for `ServiceAccount` as `sa-arc-upgrade-worker`:
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="2-4":::
-
-1. Edit the file as needed.
+You'll need to connect and authenticate to a Kubernetes cluster and have an existing Kubernetes context selected prior to beginning the upgrade of the data controller.
-### Set the cluster roles
-A cluster role (`ClusterRole`) grants the service account permission to perform the upgrade.
+### Create the service account for running the upgrade
-1. Describe the cluster role and rules in a .yaml file. The following example defines a cluster role for `arc:cr-upgrade-worker` and allows all API groups, resources, and verbs.
+ > [!IMPORTANT]
+ > Requires Kubernetes permissions to create a service account, role binding, cluster role, and cluster role binding, as well as all of the RBAC permissions being granted to the service account.
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="7-9":::
+Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/arcdata-deployer.yaml), and replace the placeholder `{{NAMESPACE}}` in the file with the namespace of the data controller, for example: `arc`. Run the following command to create the deployer service account with the edited file.
-1. Edit the file as needed.
-
-### Set the cluster role binding
-
-A cluster role binding (`ClusterRoleBinding`) links the service account and the cluster role.
-
-1. Describe the cluster role binding in a .yaml file. The following example describes a cluster role binding for the service account.
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="20-21":::
-
-1. Edit the file as needed.
+```console
+kubectl apply --namespace arc -f arcdata-deployer.yaml
+```
-### Specify the job
-A job creates a pod to execute the upgrade.
+### Upgrade the bootstrapper
-1. Describe the job in a .yaml file. The following example creates a job called `arc-bootstrapper-upgrade-job`.
+The following command creates a job for upgrading the bootstrapper and related Kubernetes objects.
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="31-48":::
+ > [!IMPORTANT]
+ > The yaml file in the following command defaults to mcr.microsoft.com/arcdata. If necessary, save a copy of the yaml file and update it to use a different registry/repository.
-1. Edit the file for your environment.
+```console
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/upgrade/yaml/bootstrapper-upgrade-job.yaml
+```
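+
+Before moving on, you can confirm that the upgrade job finished; a minimal check, noting that the exact job name comes from the template and may differ between releases (older templates used `arc-bootstrapper-upgrade-job`):
+
+```console
+# List jobs in the data controller namespace and, if needed, inspect the upgrade job's logs.
+kubectl get jobs --namespace arc
+kubectl logs job/arc-bootstrapper-upgrade-job --namespace arc
+```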
### Upgrade the data controller
-Specify the image tag to upgrade the data controller to.
-
- :::code language="yaml" source="~/azure_arc_sample/arc_data_services/upgrade/yaml/upgrade-indirect-k8s.yaml" range="50-56":::
-
-### Apply the resources
+The following command patches the image tag to upgrade the data controller.
-Run the following kubectl command to apply the resources to your cluster.
-
-``` bash
-kubectl apply -n <namespace> -f upgrade-indirect-k8s.yaml
+```console
+kubectl apply --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/upgrade/yaml/data-controller-upgrade.yaml
``` + ## Monitor the upgrade status You can monitor the progress of the upgrade with kubectl.
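For example, a typical check (a sketch, assuming the `arc` namespace used throughout this article):

```console
# Watch the data controller resource and pods while the upgrade runs.
kubectl get datacontroller --namespace arc -w
kubectl get pods --namespace arc
```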
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## July 12, 2022
+
+|Component|Value|
+|--|--|
+|Container images tag |`v1.9.0_2022-07-12`|
+|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2<br/>|
+|Azure Resource Manager (ARM) API version|2022-03-01-preview (No change)|
+|`arcdata` Azure CLI extension version|1.4.3 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|Arc enabled Kubernetes helm chart extension version|1.2.20031002|
+|Arc Data extension for Azure Data Studio<br/>`arc`<br/>`azcli`|<br/>1.3.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/arc-1.3.0.vsix))</br>1.3.0 ([Download](https://azuredatastudioarcext.blob.core.windows.net/stage/azcli-1.3.0.vsix))|
+ ## June 14, 2022 |Component|Value|
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
Example connection string values are truncated for readability.
> [!NOTE] > You can use application settings to override host.json setting values without having to change the host.json file itself. This is helpful for scenarios where you need to configure or modify specific host.json settings for a specific environment. This also lets you change host.json settings without having to republish your project. To learn more, see the [host.json reference article](functions-host-json.md#override-hostjson-values). Changes to function app settings require your function app to be restarted.
+> [!IMPORTANT]
+> Do not use an [instrumentation key](../azure-monitor/app/separate-resources.md#about-resources-and-instrumentation-keys) and a [connection string](../azure-monitor/app/sdk-connection-string.md#overview) simultaneously. Whichever was set last will take precedence.
+ ## APPINSIGHTS_INSTRUMENTATIONKEY The instrumentation key for Application Insights. Only use one of `APPINSIGHTS_INSTRUMENTATIONKEY` or `APPLICATIONINSIGHTS_CONNECTION_STRING`. When Application Insights runs in a sovereign cloud, use `APPLICATIONINSIGHTS_CONNECTION_STRING`. For more information, see [How to configure monitoring for Azure Functions](configure-monitoring.md).
The instrumentation key for Application Insights. Only use one of `APPINSIGHTS_I
||| |APPINSIGHTS_INSTRUMENTATIONKEY|`55555555-af77-484b-9032-64f83bb83bb`| + ## APPLICATIONINSIGHTS_CONNECTION_STRING The connection string for Application Insights. Use `APPLICATIONINSIGHTS_CONNECTION_STRING` instead of `APPINSIGHTS_INSTRUMENTATIONKEY` in the following cases:
azure-functions Functions Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-best-practices.md
Premium plan is the recommended plan for reducing colds starts while maintaining
## Monitor effectively
-Azure Functions offers built-in integration with Azure Application Insights to monitor your function execution and traces written from your code. To learn more, see [Monitor Azure Functions](functions-monitoring.md). Azure Monitor also provides facilities for monitoring the health of the function app itself. To learn more, see [Using Azure Monitor Metric with Azure Functions](monitor-metrics.md).
+Azure Functions offers built-in integration with Azure Application Insights to monitor your function execution and traces written from your code. To learn more, see [Monitor Azure Functions](functions-monitoring.md). Azure Monitor also provides facilities for monitoring the health of the function app itself. To learn more, see [Monitoring with Azure Monitor](monitor-functions.md).
You should be aware of the following considerations when using Application Insights integration to monitor your functions:
azure-functions Functions Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-monitoring.md
Title: Monitor Azure Functions
-description: Learn how to use Azure Application Insights with Azure Functions to monitor function execution.
+ Title: Monitor executions in Azure Functions
+description: Learn how to use Azure Application Insights with Azure Functions to monitor function executions.
ms.assetid: 501722c3-f2f7-4224-a220-6d59da08a320 Previously updated : 10/14/2020 Last updated : 07/05/2022 # Customer intent: As a developer, I want to understand what facilities are provided to help me monitor my functions so I can know if they're running correctly.
-# Monitor Azure Functions
+# Monitor executions in Azure Functions
-[Azure Functions](functions-overview.md) offers built-in integration with [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) to monitor functions. This article provides an overview of the monitoring capabilities provided by Azure for monitoring Azure Functions.
+[Azure Functions](functions-overview.md) offers built-in integration with [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) to monitor function executions. This article provides an overview of the monitoring capabilities provided by Azure for monitoring Azure Functions.
Application Insights collects log, performance, and error data. By automatically detecting performance anomalies and featuring powerful analytics tools, you can more easily diagnose issues and better understand how your functions are used. These tools are designed to help you continuously improve performance and usability of your functions. You can even use Application Insights during local function app project development. For more information, see [What is Application Insights?](../azure-monitor/app/app-insights-overview.md). As Application Insights instrumentation is built into Azure Functions, you need a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key is added to your application settings as you create your function app resource in Azure. If your function app doesn't already have this key, you can [set it manually](configure-monitoring.md#enable-application-insights-integration).
+You can also monitor the function app itself by using Azure Monitor. To learn more, see [Monitoring Azure Functions with Azure Monitor](monitor-functions.md).
+ ## Application Insights pricing and limits You can try out Application Insights integration with Azure Functions for free, with a daily limit on how much data is processed at no charge.
Log streams can be viewed both in the portal and in most local development envir
## Diagnostic logs
-_This feature is in preview._
- Application Insights lets you export telemetry data to long-term storage or other analysis services. Because Functions also integrates with Azure Monitor, you can also use diagnostic settings to send telemetry data to various destinations, including Azure Monitor logs. To learn more, see [Monitoring Azure Functions with Azure Monitor Logs](functions-monitor-log-analytics.md). ## Scale controller logs
-_This feature is in preview._
- The [Azure Functions scale controller](./event-driven-scaling.md#runtime-scaling) monitors instances of the Azure Functions host on which your app runs. This controller makes decisions about when to add or remove instances based on current performance. You can have the scale controller emit logs to Application Insights to better understand the decisions the scale controller is making for your function app. You can also store the generated logs in Blob storage for analysis by another service. To enable this feature, you add an application setting named `SCALE_CONTROLLER_LOGGING_ENABLED` to your function app settings. To learn how, see [Configure scale controller logs](configure-monitoring.md#configure-scale-controller-logs). ## Azure Monitor metrics
-In addition to log-based telemetry data collected by Application Insights, you can also get data about how the function app is running from [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). To learn more, see [Using Azure Monitor Metric with Azure Functions](monitor-metrics.md).
+In addition to log-based telemetry data collected by Application Insights, you can also get data about how the function app is running from [Azure Monitor Metrics](../azure-monitor/essentials/data-platform-metrics.md). To learn more, see [Monitoring with Azure Monitor](monitor-functions.md).
## Report issues
azure-functions Functions Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-powershell.md
The following table shows the PowerShell versions available to each major versio
| Functions version | PowerShell version | .NET version | |-|--||
-| 4.x (recommended) | PowerShell 7.2 (preview)<br/>PowerShell 7 (recommended) | .NET 6 |
+| 4.x (recommended) | PowerShell 7.2<br/>PowerShell 7 (recommended) | .NET 6 |
| 3.x | PowerShell 7<br/>PowerShell Core 6 | .NET Core 3.1<br/>.NET Core 2.1 | | 2.x | PowerShell Core 6 | .NET Core 2.2 |
azure-functions Monitor Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions-reference.md
+
+ Title: Monitoring Azure Functions data reference
+description: Important reference material needed when you monitor Azure Functions
+++ Last updated : 07/05/2022++
+# Monitoring Azure Functions data reference
+
+This reference applies to the use of Azure Monitor for monitoring function apps hosted in Azure Functions. See [Monitoring function apps with Azure Monitor](monitor-functions.md) for details on using Azure Monitor to collect and analyze monitoring data from your function apps.
+
+See [Monitor Azure Functions](functions-monitoring.md) for details on using Application Insights to collect and analyze log data from individual functions in your function app.
+
+## Metrics
+
+This section lists all the platform metrics collected automatically for Azure Functions.
+
+### Azure Functions specific metrics
+
+There are two metrics specific to Functions that are of interest:
+
+| Metric | Description |
+| - | - |
+| **FunctionExecutionCount** | Function execution count indicates the number of times your function app has executed. This value correlates to the number of times a function runs in your app. |
+| **FunctionExecutionUnits** | Function execution units are a combination of execution time and your memory usage. Memory data isn't a metric currently available through Azure Monitor. However, if you want to optimize the memory usage of your app, you can use the performance counter data collected by Application Insights. This metric isn't currently supported for Premium and Dedicated (App Service) plans running on Linux.|
+
+These metrics are used specifically when [estimating Consumption plan costs](functions-consumption-costs.md).
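+
+Outside the portal, one way to retrieve these metrics is the Azure CLI. The following is a sketch only; the resource ID is a placeholder for your own function app.
+
+```azurecli
+# Total executions and execution units over the last day, in one-hour buckets.
+az monitor metrics list \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app-name>" \
+  --metric FunctionExecutionCount FunctionExecutionUnits \
+  --aggregation Total \
+  --interval PT1H \
+  --offset 1d
+```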
+
+### General App Service metrics
+
+Aside from Azure Functions specific metrics, the App Service platform implements more metrics, which you can use to monitor function apps. For the complete list, see [metrics available to App Service apps](../app-service/web-sites-monitor.md#understand-metrics) and [Monitoring App Service data reference](../app-service/monitor-app-service-reference.md#metrics).
+
+## Metric dimensions
+
+For more information on what metric dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
+
+Azure Functions doesn't have any metrics that contain dimensions.
+
+## Resource logs
+
+This section lists the types of resource logs you can collect for your function apps.
+
+| Log type | Description |
+|-|-|
+| FunctionAppLogs | Function app logs |
+
+For more information, see [Monitoring App Service data reference](../app-service/monitor-app-service-reference.md#resource-logs).
+
+For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
+
+## Azure Monitor Logs tables
+
+Azure Functions uses Kusto tables from Azure Monitor Logs. You can query the [FunctionAppLogs table](/azure/azure-monitor/reference/tables/functionapplogs) with Log Analytics. For more information, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype#app-services).
+
+## Activity log
+
+The following table lists the operations related to Azure Functions that may be created in the Activity log.
+
+| Operation | Description |
+|:|:|
+|Microsoft.web/sites/functions/listkeys/action | Return the [keys for the function](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+|Microsoft.Web/sites/host/listkeys/action | Return the [host keys for the function app](functions-bindings-http-webhook-trigger.md#authorization-keys).|
+|Microsoft.Web/sites/host/sync/action | [Sync triggers](functions-deployment-technologies.md#trigger-syncing) operation.|
+|Microsoft.Web/sites/start/action| Function app started. |
+|Microsoft.Web/sites/stop/action| Function app stopped.|
+|Microsoft.Web/sites/write| Change a function app setting, such as runtime version or enable remote debugging.|
+
+You may also find logged operations that relate to the underlying App Service behaviors. For a more complete list, see [Resource Provider Operations](/azure/role-based-access-control/resource-provider-operations#microsoftweb).
+
+For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
+
+## See also
+
+* See [Monitoring Azure Functions](monitor-functions.md) for a description of monitoring Azure Functions.
+* See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
azure-functions Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-functions.md
+
+ Title: Monitoring Azure Functions
+description: Start here to learn how to monitor function apps running in Azure Functions using Azure Monitor
+++ Last updated : 07/05/2022++
+# Monitoring Azure Functions
+
+When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+
+This article describes the monitoring data generated by apps hosted in Azure Functions. Azure Functions uses [Azure Monitor](../azure-monitor/overview.md) to monitor the health of your function apps. If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, see [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+
+Azure Functions uses Application Insights to collect and analyze log data from individual function executions in your function app. For more information, see [Monitor functions in Azure](functions-monitoring.md).
+
+## Monitoring data
+
+Azure Functions collects the same kinds of monitoring data as other Azure resources that are described in [Azure Monitor data collection](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
+
+See [Monitoring Azure Functions data reference](monitor-functions-reference.md) for detailed information on the metrics and logs created by Azure Functions.
+
+## Collection and routing
+
+Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
+
+See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for *Azure Functions* are listed in [Azure Functions monitoring data reference](monitor-functions-reference.md#resource-logs).
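+
+As an illustration, the following Azure CLI sketch routes the `FunctionAppLogs` category to a Log Analytics workspace; the resource and workspace IDs are placeholders for your own values.
+
+```azurecli
+# Create a diagnostic setting that sends function app resource logs to a Log Analytics workspace.
+az monitor diagnostic-settings create \
+  --name function-app-logs \
+  --resource "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app-name>" \
+  --workspace "<log-analytics-workspace-resource-id>" \
+  --logs '[{"category": "FunctionAppLogs", "enabled": true}]'
+```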
+
+The metrics and logs you can collect are discussed in the following sections.
+
+## Analyzing metrics
+
+You can analyze metrics for *Azure Functions* with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md) for details on using this tool.
+
+For a list of the platform metrics collected for Azure Functions, see [Monitoring *Azure Functions* data reference metrics](monitor-functions-reference.md#metrics).
+
+For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+
+The following examples use Monitor Metrics to help estimate the cost of running your function app on a Consumption plan. To learn more about estimating Consumption plan costs, see [Estimating Consumption plan costs](functions-consumption-costs.md).
++
+## Analyzing logs
+
+Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
+
+All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md).
+
+The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+
+For a list of the types of resource logs collected for Azure Functions, see [Monitoring Azure Functions data reference](monitor-functions-reference.md#resource-logs).
+
+For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Functions data reference](monitor-functions-reference.md#azure-monitor-logs-tables).
+
+### Sample Kusto queries
+
+> [!IMPORTANT]
+> When you select **Logs** from the Azure Functions menu, Log Analytics is opened with the query scope set to the current resource. This means that log queries will only include data from that resource. If you want to run a query that includes data from other resources or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
+
+The following queries can help you monitor your function app.
+
+The following sample query can help you monitor all your function app logs:
+
+```Kusto
+FunctionAppLogs
+| project TimeGenerated, HostInstanceId, Message, _ResourceId
+| order by TimeGenerated desc
+```
+
+The following sample query can help you monitor the logs of a specific function:
+
+```Kusto
+FunctionAppLogs
+| where FunctionName == "<Function name>"
+| order by TimeGenerated desc
+```
+
+The following sample query can help you monitor exceptions in the logs of a specific function:
+
+```Kusto
+FunctionAppLogs
+| where ExceptionDetails != ""
+| where FunctionName == "<Function name>"
+| order by TimeGenerated desc
+```
+
+## Alerts
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
+
+If you're creating or running an application that runs on Functions, [Azure Monitor Application Insights](../azure-monitor/overview.md#application-insights) may offer other types of alerts.
+
+The following table lists common and recommended alert rules for Functions.
+
+| Alert type | Condition | Examples |
+|:|:|:|
+| Metric | Average connections| When number of connections exceed a set value|
+| Metric | HTTP 404| When HTTP 404 responses exceed a set value|
+| Metric | HTTP Server Errors| When HTTP 5xx errors exceed a set value|
+| Activity Log | Create or Update Web App | When app is created or updated|
+| Activity Log | Delete Web App | When app is deleted|
+| Activity Log | Restart Web App| When app is restarted|
+| Activity Log | Stop Web App| When app is stopped|
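+
+As an example of the metric alerts above, the following Azure CLI sketch creates an alert on HTTP server errors; the names, thresholds, and IDs are placeholders rather than recommended values.
+
+```azurecli
+# Fire when the function app returns more than 10 HTTP 5xx responses within a 5-minute window.
+az monitor metrics alert create \
+  --name http-5xx-errors \
+  --resource-group <resource-group> \
+  --scopes "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/<function-app-name>" \
+  --condition "total Http5xx > 10" \
+  --window-size 5m \
+  --evaluation-frequency 1m \
+  --description "HTTP server errors exceeded the threshold"
+```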
+
+## Next steps
+
+For more information about monitoring Azure Functions, see the following articles:
+
+* [Monitor Azure Functions](functions-monitoring.md) - details how to monitor a function app.
+* [Monitoring Azure Functions data reference](monitor-functions-reference.md) - reference of the metrics, logs, and other important values created by your function app.
+* [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) - details monitoring Azure resources.
+* [Analyze Azure Functions telemetry in Application Insights](analyze-telemetry-data.md) - details how to view and query the data being collected from a function app.
azure-functions Monitor Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/monitor-metrics.md
- Title: Using Monitor Metrics with Azure Functions
-description: Learn how to use Azure Monitor Metrics to view and query for Azure Functions telemetry data collected by and stored in Azure Application Insights.
- Previously updated : 07/4/2021
-# Customer intent: As a developer, I want to view and query the data being collected from my function app so I can know if it's running correctly and to make improvements.
--
-# Using Azure Monitor Metric with Azure Functions
-
-Azure Functions integrates with Azure Monitor Metrics to let you analyze the metrics generated by your function app during execution. To learn more, see the [Azure Monitor Metrics overview](../azure-monitor/essentials/data-platform-metrics.md). These metrics indicate how your function app is running on the App Service platform. You can review resource consumption data used to estimate consumption plan costs. To investigate detailed telemetry from your function executions, including log data, you should also use [Application Insights](functions-monitoring.md) in Azure Monitor.
-
-## Available metrics
-
-Azure Monitor collects numeric data from a set of monitored resources, which are entered into a time series database. Azure Monitor collects metrics specific to both Functions and the underlying App Service resources.
-
-### Functions-specific metrics
-
-There are two metrics specific to Functions that are of interest:
-
-| Metric | Description |
-| - | - |
-| **FunctionExecutionCount** | Function execution count indicates the number of times your function app has executed. This value correlates to the number of times a function runs in your app. |
-| **FunctionExecutionUnits** | Function execution units are a combination of execution time and your memory usage. Memory data isn't a metric currently available through Azure Monitor. However, if you want to optimize the memory usage of your app, can use the performance counter data collected by Application Insights. This metric isn't currently supported for Premium and Dedicated (App Service) plans running on Linux.|
-
-These metrics are used specifically when [estimating Consumption plan costs](functions-consumption-costs.md).
-
-### General App Service metrics
-
-Aside from function-specific metrics, the App Service platform implements more metrics, which you can use to monitor function apps. For the complete list, see [metrics available to App Service apps](../app-service/web-sites-monitor.md#understand-metrics).
-
-## Accessing metrics
-
-You can use either [Azure Monitor metrics explorer](../azure-monitor/essentials/metrics-getting-started.md) in the [Azure portal](https://portal.azure.com) or REST APIs to get Monitor Metrics data.
-
-The following examples use Monitor Metrics to help estimate the cost of running your function app on a Consumption plan. To learn more about estimating Consumption plan costs, see [Estimating Consumption plan costs](functions-consumption-costs.md).
--
-To learn more about using Monitor Explorer to view metrics, see [Getting started with Azure Metrics Explorer](../azure-monitor/essentials/metrics-getting-started.md).
-
-## Next steps
-
-Learn more about monitoring Azure Functions:
-
-+ [Monitor Azure Functions](functions-monitoring.md)
-+ [Analyze Azure Functions telemetry in Application Insights](analyze-telemetry-data.md)
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
Azure Monitor Agent (AMA) replaces the Log Analytics Agent (MMA/OMS) for Windows
![Flow diagram that shows the steps involved in agent migration and how the migration tools help in generating DCRs and tracking the entire migration process.](media/azure-monitor-agent-migration/mma-to-ama-migration-steps.png) > [!IMPORTANT]
-> Do not remove the legacy agents if being used by other [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features) (like Microsoft Defender for Cloud, Sentinel, VM Insights, etc). You can use the migration helper to discover which solutions and services you use today that depend on the legacy agents.
+> Do not remove the legacy agents if they're being used by other [Azure solutions or services](./azure-monitor-agent-overview.md#supported-services-and-features). Use the migration helper to discover which solutions and services you use today.
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]
-## Installing and using AMA Migration Helper (preview)
+## Using AMA Migration Helper (preview)
AMA Migration Helper is a workbook-based Azure Monitor solution that helps you **discover what to migrate** and **track progress** as you move from Log Analytics Agent to Azure Monitor Agent. Use this single pane of glass view to expedite and track the status of your agent migration journey.
-To set up the AMA Migration Helper workbook in the Azure portal:
+You can access the workbook [here](https://ms.portal.azure.com/#view/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Monitor/ConfigurationId/community-Workbooks%2FAzure%20Monitor%20-%20Agents%2FAgent%20Migration%20Tracker/Type/workbook/WorkbookTemplateName/AMA%20Migration%20Helper), or find it on the [Azure portal (preview)](https://portal.azure.com/?feature.includePreviewTemplates=true) under **Monitor** > **Workbooks** > **Public Templates** > **Azure Monitor essentials** > **AMA Migration Helper**.
-1. From the **Monitor** menu, select **Workbooks** > **+ New** > **Advanced Editor** (**</>**).
-1. Copy and paste the content from the [AMA Migration Helper file in the Azure Monitor Community GitHub repository](https://github.com/microsoft/AzureMonitorCommunity/tree/master/Azure%20Services/Azure%20Monitor/Agents/Migration%20Tools/Migration%20Helper%20Workbook) into the editor.
-1. Select **Apply** to load the workbook.
-1. Select **Done Editing**.
-
- You're now ready to use the workbook.
-
-1. Select the **Subscriptions** and **Workspaces** dropdowns to view relevant information.
-
- :::image type="content" source="media/azure-monitor-migration-tools/ama-migration-helper.png" lightbox="media/azure-monitor-migration-tools/ama-migration-helper.png" alt-text="Screenshot of the Azure Monitor Agent Migration Helper workbook. The screenshot highlights the Subscription and Workspace dropdowns and shows the Azure Virtual Machines tab, on which you can track which agent is deployed on each virtual machine.":::
## Installing and using DCR Config Generator (preview) Azure Monitor Agent relies only on [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) for configuration, whereas Log Analytics Agent inherits its configuration from Log Analytics workspaces.
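For orientation, a minimal DCR that sends one performance counter to a Log Analytics workspace has roughly the following shape (a sketch with placeholder names and IDs, not the generator's output):

```python
# Rough sketch of a minimal data collection rule body: one Windows performance
# counter flowing to a Log Analytics workspace. All names and IDs are placeholders.
import json

minimal_dcr = {
    "location": "eastus",
    "properties": {
        "dataSources": {
            "performanceCounters": [
                {
                    "name": "perfCounters",
                    "streams": ["Microsoft-Perf"],
                    "samplingFrequencyInSeconds": 60,
                    "counterSpecifiers": ["\\Processor(_Total)\\% Processor Time"],
                }
            ]
        },
        "destinations": {
            "logAnalytics": [
                {
                    "name": "centralWorkspace",
                    "workspaceResourceId": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
                                           "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>",
                }
            ]
        },
        "dataFlows": [
            {"streams": ["Microsoft-Perf"], "destinations": ["centralWorkspace"]}
        ],
    },
}

print(json.dumps(minimal_dcr, indent=2))
```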
azure-monitor Alerts Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-overview.md
For stateful alerts, the alert is considered resolved when:
|Alert type |The alert is resolved when |
|---|---|
|Metric alerts|The alert condition isn't met for three consecutive checks.|
-|Log alerts|The alert condition isn't met for 30 minutes for a specific evaluation period (to account for log ingestion delay), and <br>the alert condition isn't met for three consecutive checks.|
+|Log alerts| In log alerts, the alert is resolved at different rates based on the frequency of the alert:<ul> <li>**1 minute**: The alert condition isn't met for 10 minutes.</li> <li>**5-15 minutes**: The alert condition isn't met for three frequency periods.</li> <li>**15 minutes - 11 hours**: The alert condition isn't met for two frequency periods.</li> <li>**11 to 12 hours**: The alert condition isn't met for one frequency period.</li></ul>|
When the alert is considered resolved, the alert rule sends out a resolved notification using webhooks or email and the monitor state in the Azure portal is set to resolved.
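For illustration, a minimal webhook receiver can tell fired and resolved notifications apart by inspecting `data.essentials.monitorCondition`, assuming the action group is configured to use the common alert schema:

```python
# Minimal sketch of a webhook receiver for alert notifications, assuming the
# common alert schema (data.essentials.monitorCondition is "Fired" or "Resolved").
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class AlertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        essentials = payload.get("data", {}).get("essentials", {})
        state = essentials.get("monitorCondition", "Unknown")
        print(f"{state}: {essentials.get('alertRule')}")
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), AlertHandler).serve_forever()
```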
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
provider.register();
// Create an exporter instance. const exporter = new AzureMonitorTraceExporter({
- instrumentationKey: "<Your Connection String>"
+ connectionString: "<Your Connection String>"
}); // Add the exporter to the provider.
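The same connection-string pattern carries over to other languages. As a sketch, assuming the `azure-monitor-opentelemetry-exporter` Python package, the exporter can be created from a connection string and attached to a tracer provider:

```python
# Sketch: configure the Python Azure Monitor trace exporter with a connection
# string and attach it to an OpenTelemetry tracer provider.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

trace.set_tracer_provider(TracerProvider())
exporter = AzureMonitorTraceExporter.from_connection_string("<Your Connection String>")
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(exporter))

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("sample-operation"):
    pass  # application work happens here
```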
azure-monitor Change Analysis Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-enable.md
ms.contributor: cawa Previously updated : 07/05/2022 Last updated : 07/11/2022
The Change Analysis service:
- Identify relevant changes in the troubleshooting or monitoring context. Register the `Microsoft.ChangeAnalysis` resource provider with an Azure Resource Manager subscription to make the tracked properties and proxied settings change data available. The `Microsoft.ChangeAnalysis` resource is automatically registered as you either: -- Enter the Web App **Diagnose and Solve Problems** tool, or
+- Enter any UI entry point, like the Web App **Diagnose and Solve Problems** tool, or
- Bring up the Change Analysis standalone tab. In this guide, you'll learn the two ways to enable Change Analysis for web app in-guest changes: - For one or a few web apps, enable Change Analysis via the UI. - For a large number of web apps (for example, 50+ web apps), enable Change Analysis using the provided PowerShell script.
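Registration normally happens automatically through those entry points. As a sketch, the provider can also be registered directly with the Azure SDK for Python, assuming the `azure-mgmt-resource` package and a placeholder subscription ID:

```python
# Sketch: register the Microsoft.ChangeAnalysis resource provider on a
# subscription with the azure-mgmt-resource package. The subscription ID is a
# placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<subscription-id>"
client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

provider = client.providers.register("Microsoft.ChangeAnalysis")
print(provider.namespace, provider.registration_state)
```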
-## Enable Change Analysis via the Azure portal UI
+## Enable web app in-guest change collection via the Azure portal
For web app in-guest changes, separate enablement is required for scanning code files within a web app. For more information, see [Change Analysis in the Diagnose and solve problems tool](change-analysis-visualizations.md#diagnose-and-solve-problems-tool) section.
For web app in-guest changes, separate enablement is required for scanning code
:::image type="content" source="./media/change-analysis/change-analysis-on-2.png" alt-text="Screenshot of the Enable Change Analysis user interface expanded.":::
-You can also view change data via the **Web App Down** and **Application Crashes** detectors. The graph summarizes:
-- The change types over time.-- Details on those changes. -
-By default, the graph displays changes from within the past 24 hours help with immediate problems.
-- ## Enable Change Analysis at scale using PowerShell If your subscription includes several web apps, run the following script to enable *all web apps* in your subscription.
azure-monitor Change Analysis Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/change/change-analysis-visualizations.md
ms.contributor: cawa Previously updated : 04/18/2022 Last updated : 07/11/2022
The UI supports selecting multiple subscriptions to view resource changes. Use t
## Diagnose and solve problems tool
+From your resource's overview page in the Azure portal, select **Diagnose and solve problems** in the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered.
+
+### Diagnose and solve problems tool for Web App
+ Azure Monitor's Change Analysis is: - A standalone detector in the Web App **Diagnose and solve problems** tool. - Aggregated in **Application Crashes** and **Web App Down detectors**.
-From your resource's overview page in Azure portal, select **Diagnose and solve problems** the left menu. As you enter the Diagnose and Solve Problems tool, the **Microsoft.ChangeAnalysis** resource provider will automatically be registered.
+You can view change data via the **Web App Down** and **Application Crashes** detectors. The graph summarizes:
+- The change types over time.
+- Details on those changes.
+
+By default, the graph displays changes from within the past 24 hours to help with immediate problems.
+ ### Diagnose and solve problems tool for Virtual Machines
azure-monitor Container Insights Enable Aks Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks-policy.md
Monitoring Addon Custom Policy can be assigned at either the subscription or res
## Next steps - Learn more about [Azure Policy](../../governance/policy/overview.md).-- Learn how [remediation security works](../../governance/policy/how-to/remediate-resources.md#how-remediation-security-works).
+- Learn how [remediation access control works](../../governance/policy/how-to/remediate-resources.md#how-remediation-access-control-works).
- Learn more about [Container insights](./container-insights-overview.md).-- Install the [Azure CLI](/cli/azure/install-azure-cli).
+- Install the [Azure CLI](/cli/azure/install-azure-cli).
azure-monitor Profiler Servicefabric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/profiler/profiler-servicefabric.md
Title: Profile live Azure Service Fabric apps with Application Insights
-description: Enable Profiler for a Service Fabric application
+ Title: Enable Profiler for Azure Service Fabric applications
+description: Profile live Azure Service Fabric apps with Application Insights
Previously updated : 08/06/2018- Last updated : 06/23/2022
-# Profile live Azure Service Fabric applications with Application Insights
+# Enable Profiler for Azure Service Fabric applications
-You can also deploy Application Insights Profiler on these
-* [Azure App Service](profiler.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Cloud Services](profiler-cloudservice.md?toc=/azure/azure-monitor/toc.json)
-* [Azure Virtual Machines](profiler-vm.md?toc=/azure/azure-monitor/toc.json)
+Application Insights Profiler is included with Azure Diagnostics. You can install the Azure Diagnostics extension by using an Azure Resource Manager template for your Service Fabric cluster. Get a [template that installs Azure Diagnostics on a Service Fabric Cluster](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json).
-## Set up the environment deployment definition
+In this article, you will:
-Application Insights Profiler is included with Azure Diagnostics. You can install the Azure Diagnostics extension by using an Azure Resource Manager template for your Service Fabric cluster. Get a [template that installs Azure Diagnostics on a Service Fabric Cluster](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json).
+- Add the Application Insights Profiler property to your Azure Resource Manager template.
+- Deploy your Service Fabric cluster with the Application Insights Profiler instrumentation key.
+- Enable Application Insights on your Service Fabric application.
+- Redeploy your Service Fabric cluster to enable Profiler.
+
+## Prerequisites
+
+- Profiler supports .NET Framework and .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer applications.
+ - Verify you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or later.
+ - Confirm that the deployed OS is `Windows Server 2012 R2` or later.
+- [An Azure Service Fabric managed cluster](../../service-fabric/quickstart-managed-cluster-portal.md).
-To set up your environment, take the following actions:
+## Create deployment template
-1. Profiler supports .NET Framework and .Net Core. If you're using .NET Framework, make sure you're using [.NET Framework 4.6.1](/dotnet/framework/migration-guide/how-to-determine-which-versions-are-installed) or later. It's sufficient to confirm that the deployed OS is `Windows Server 2012 R2` or later. Profiler supports .NET Core [LTS](https://dotnet.microsoft.com/platform/support/policy/dotnet-core) and newer applications.
+1. In your Service Fabric managed cluster, navigate to where you've implemented the [Azure Resource Manager template](https://github.com/Azure/azure-docs-json-samples/blob/master/application-insights/ServiceFabricCluster.json).
-1. Search for the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) extension in the deployment template file.
+1. Locate the `WadCfg` tags in the [Azure Diagnostics](../agents/diagnostics-extension-overview.md) extension in the deployment template file.
1. Add the following `SinksConfig` section as a child element of `WadCfg`. Replace the `ApplicationInsightsProfiler` property value with your own Application Insights instrumentation key:
- ```json
- "SinksConfig": {
- "Sink": [
- {
- "name": "MyApplicationInsightsProfilerSink",
- "ApplicationInsightsProfiler": "00000000-0000-0000-0000-000000000000"
- }
- ]
- }
- ```
+ ```json
+ "settings": {
+ "WadCfg": {
+ "SinksConfig": {
+ "Sink": [
+ {
+ "name": "MyApplicationInsightsProfilerSinkVMSS",
+ "ApplicationInsightsProfiler": "YOUR_APPLICATION_INSIGHTS_INSTRUMENTATION_KEY"
+ }
+ ]
+       }
+     }
+   }
+ ```
+
+ For information about adding the Diagnostics extension to your deployment template, see [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../../virtual-machines/extensions/diagnostics-template.md).
+
+## Deploy your Service Fabric cluster
+
+After updating the `WadCfg` with your instrumentation key, deploy your Service Fabric cluster.
+
+Application Insights Profiler will be installed and enabled when the Azure Diagnostics extension is installed.
+
+## Enable Application Insights on your Service Fabric application
- For information about adding the Diagnostics extension to your deployment template, see [Use monitoring and diagnostics with a Windows VM and Azure Resource Manager templates](../../virtual-machines/extensions/diagnostics-template.md?toc=/azure/virtual-machines/windows/toc.json).
+For Profiler to collect profiles for your requests, your application must be tracking operations with Application Insights.
-1. Deploy your Service Fabric cluster by using your Azure Resource Manager template.
- If your settings are correct, Application Insights Profiler will be installed and enabled when the Azure Diagnostics extension is installed.
+- **For stateless APIs**, you can refer to instructions for [tracking requests for profiling](./profiler-trackrequests.md).
+- **For tracking custom operations in other kinds of apps**, see [track custom operations with Application Insights .NET SDK](../app/custom-operations-tracking.md).
-1. Add Application Insights to your Service Fabric application.
- For Profiler to collect profiles for your requests, your application must be tracking operations with Application Insights. For stateless APIs, you can refer to instructions for [tracking Requests for profiling](profiler-trackrequests.md?toc=/azure/azure-monitor/toc.json). For more information about tracking custom operations in other kinds of apps, see [track custom operations with Application Insights .NET SDK](../app/custom-operations-tracking.md).
+Redeploy your application once you've enabled Application Insights.
-1. Redeploy your application.
+## Generate traffic and view Profiler traces
+1. Launch an [availability test](../app/monitor-web-app-availability.md) to generate traffic to your application.
+1. Wait 10 to 15 minutes for traces to be sent to the Application Insights instance.
+1. View the [Profiler traces](./profiler-overview.md) via the Application Insights instance in the Azure portal.
## Next steps
-* Generate traffic to your application (for example, launch an [availability test](../app/monitor-web-app-availability.md)). Then, wait 10 to 15 minutes for traces to start to be sent to the Application Insights instance.
-* See [Profiler traces](profiler-overview.md?toc=/azure/azure-monitor/toc.json) in the Azure portal.
-* For help with troubleshooting Profiler issues, see [Profiler troubleshooting](profiler-troubleshooting.md?toc=/azure/azure-monitor/toc.json).
+For help with troubleshooting Profiler issues, see [Profiler troubleshooting](./profiler-troubleshooting.md).
azure-monitor Workbooks Composite Bar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-composite-bar.md
Title: Azure Monitor workbook composite bar renderer
-description: Learn about all the Azure Monitor workbook Composite Bar Renderer visualization.
+ Title: Azure Workbooks composite bar renderer
+description: Learn about all the Azure Workbooks composite bar renderer visualizations.
ibiza
Last updated 07/05/2022
# Composite bar renderer
-Workbook allows rendering data using composite bar, a bar made up of multiple bars.
+With Azure Workbooks, data can be rendered by using the composite bar. This bar is made up of multiple bars.
-The image below shows the composite bar for database status representing how many servers are online, offline, and in recovering state.
+The following image shows the composite bar for database status. It shows how many servers are online, offline, and in a recovering state.
-![Screenshot of composite bar for database status.](./media/workbooks-composite-bar/database-status.png)
+![Screenshot that shows the composite bar for database status.](./media/workbooks-composite-bar/database-status.png)
-Composite bar renderer is supported for grids, tiles, and graphs visualizations.
+The composite bar renderer is supported for grid, tile, and graph visualizations.
-## Adding composite bar renderer
+## Add the composite bar renderer
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Select **Add** and then **Add query**.
-3. Set *Data source* to "JSON" and *Visualization* to "Grid".
-4. Add the following JSON data.
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Select **Add** > **Add query**.
+1. Set **Data source** to `JSON` and set **Visualization** to `Grid`.
+1. Add the following JSON data:
-```json
-[
- {"sub":"X", "server": "A", "online": 20, "recovering": 3, "offline": 4, "total": 27},
- {"sub":"X", "server": "B", "online": 15, "recovering": 8, "offline": 5, "total": 28},
- {"sub":"Y", "server": "C", "online": 25, "recovering": 4, "offline": 5, "total": 34},
- {"sub":"Y", "server": "D", "online": 18, "recovering": 6, "offline": 9, "total": 33}
-]
-```
+ ```json
+ [
+ {"sub":"X", "server": "A", "online": 20, "recovering": 3, "offline": 4, "total": 27},
+ {"sub":"X", "server": "B", "online": 15, "recovering": 8, "offline": 5, "total": 28},
+ {"sub":"Y", "server": "C", "online": 25, "recovering": 4, "offline": 5, "total": 34},
+ {"sub":"Y", "server": "D", "online": 18, "recovering": 6, "offline": 9, "total": 33}
+ ]
+ ```
-5. Run query.
-6. Select **Column Settings** to open the settings.
-7. Select "total" from *Columns* and choose "Composite Bar" for *Column renderer*.
-8. Set the following settings under *Composite Bar Settings*.
+1. Run the query.
+1. Select **Column Settings** to open the settings pane.
+1. Under **Columns**, select `total`. For **Column renderer**, select `Composite Bar`.
+1. Under **Composite Bar Settings**, set the following settings:
-| Column Name | Color |
-|-|--|
-| online | Green |
-| recovering | Yellow |
-| offline | Red (Bright) |
+ | Column Name | Color |
+ |-|--|
+ | online | Green |
+ | recovering | Yellow |
+ | offline | Red (Bright) |
-9. Add Label:`["online"] of ["total"] are healthy`
-10. In the column settings for online, offline, and recovering you can set column renderer to "Hidden" (Optional).
-11. Select *Labels* at the top and update label for the total column as "Database Status" (Optional).
-12. Select on **Apply**
+1. For **Label**, enter `["online"] of ["total"] are healthy`.
+1. In the column settings for **online**, **offline**, and **recovering**, you can set **Column renderer** to `Hidden` (optional).
+1. Select the **Labels** tab and update the label for the total column as `Database Status` (optional).
+1. Select **Apply**.
-The composite bar settings will look like the screenshot below:
+The composite bar settings will look like the following screenshot:
-![Screenshot of composite bar column settings with settings described above.](./media/workbooks-composite-bar/composite-bar-settings.png)
+![Screenshot that shows composite bar column settings with the preceding settings.](./media/workbooks-composite-bar/composite-bar-settings.png)
-The composite bar with the settings above:
+The composite bar with the preceding settings:
-![Screenshot of composite bar.](./media/workbooks-composite-bar/composite-bar.png)
+![Screenshot that shows the composite bar.](./media/workbooks-composite-bar/composite-bar.png)
## Composite bar settings
-Select column name and corresponding color to render that column in that color as a part of composite bar. You can insert, delete, and move rows up and down.
+Select the column name and corresponding color to render the column in that color as part of a composite bar. You can insert, delete, and move rows up and down.
### Label
-Composite bar label is displayed at the top of the composite bar. You can use a mix of static text, columns, and parameter. If Label is empty, the value of the current columns is displayed as the label. In the previous example if we left the label field black the value of total columns would be displayed.
+The composite bar label is displayed at the top of the composite bar. You can use a mix of static text, columns, and parameters. If **Label** is empty, the value of the current column is displayed as the label. In the previous example, if we left the **Label** field blank, the value of the total column would be displayed.
Refer to columns with `["columnName"]`. Refer to parameters with `{paramName}`.
-Both column name and parameter name are case sensitive. You can also make labels a link by selecting "Make this item as a link" and then add link settings.
+Both the column name and parameter name are case sensitive. You can also make labels a link by selecting **Make this item a link** and then adding link settings.
### Aggregation
-Aggregations are useful for Tree/Group By visualizations. The data for a column for the group row is decided by the aggregation set for that column. There are three types of aggregations applicable for composite bars: None, Sum, and Inherit.
+Aggregations are useful for Tree/Group By visualizations. The data for a column for the group row is decided by the aggregation set for that column. Three types of aggregations are applicable for composite bars: None, Sum, and Inherit.
To add Group By settings: 1. In column settings, go to the column you want to add settings to.
-2. In *Tree/Group By Settings* under *Tree type*, select **Group By**
-3. Select the field you would like to group by.
+1. In **Tree/Group By Settings**, under **Tree type**, select **Group By**.
+1. Select the field you want to group by.
-![Screenshot of group by settings.](./media/workbooks-composite-bar/group-by-settings.png)
+ ![Screenshot that shows Group By settings.](./media/workbooks-composite-bar/group-by-settings.png)
#### None
-None aggregation means display no results for that column for the group rows.
+The setting of **None** for aggregation means that no results are displayed for that column for the group rows.
-![Screenshot of composite bar with none aggregation.](./media/workbooks-composite-bar/none.png)
+![Screenshot that shows the composite bar with the None setting for aggregation.](./media/workbooks-composite-bar/none.png)
#### Sum
-If aggregation is set as Sum, then the column in the group row will show the composite bar by using the sum of the columns used to render it. The label will also use the sum of the columns referred in it.
+If aggregation is set as **Sum**, the column in the group row shows the composite bar by using the sum of the columns used to render it. The label will also use the sum of the columns referred to in it.
-In the example below the online, offline, and recovering all have max aggregation set to them and the aggregation for the total column is sum.
+In the following example, **online**, **offline**, and **recovering** are all set to the max aggregation, and the aggregation for the **total** column is **Sum**.
-![Screenshot of composite bar with sum aggregation.](./media/workbooks-composite-bar/sum.png)
+![Screenshot that shows the composite bar with the Sum setting for aggregation.](./media/workbooks-composite-bar/sum.png)
#### Inherit
-If aggregation is set as inherit, then the column in the group row will show the composite bar by using the aggregation set by users for the columns used to render it. The columns used in label also use the aggregation set by the user. If the current column renderer is composite bar and is refereed in the label (like "total" in the example above), then sum is used as the aggregation for that column.
+If aggregation is set as **Inherit**, the column in the group row shows the composite bar by using the aggregation set by users for the columns used to render it. The columns used in **Label** also use the aggregation set by the user. If the current column renderer is **Composite Bar** and is referred to in the label (like **total** in the preceding example), then **Sum** is used as the aggregation for that column.
-In the example below, the online, offline, and recovering all have max aggregation set to them and the aggregation for total column is inherit.
+In the following example, **online**, **offline**, and **recovering** are all set to the max aggregation, and the aggregation for the **total** column is **Inherit**.
-![Screenshot of composite bar with inherit aggregation.](./media/workbooks-composite-bar/inherit.png)
+![Screenshot that shows the composite bar with the inherit setting for aggregation.](./media/workbooks-composite-bar/inherit.png)
## Sorting
-For grid visualizations, the sorting of the rows for the column with the composite bar renderer works based on the value that is the sum of the columns used to render the composite bar computer dynamically. In the previous examples, the value used for sorting is the sum of the online, recovering, and the offline columns for that particular row.
-
-## Tiles visualization
-
-1. Select **Add** and *add query*.
-2. Change the data source to JSON enter the data from the [previous example](#adding-composite-bar-renderer).
-3. Change visualization to *Tiles*.
-4. Run query.
-5. Select **Tile Settings**.
-6. Select *Left* in Tile fields.
-7. Enter the settings below under *Field Settings*.
- 1. Use column: "server".
- 2. Column renderer: "Text".
-8. Select *Bottom* in Tile fields.
-9. Enter the settings below under *Field Settings*.
- 1. Use column: "total".
- 2. Column renderer: "Composite Bar".
- 3. Enter Set the following settings under "Composite Bar Settings".
-
- | Column Name | Color |
- |-|--|
- | online | Green |
- | recovering | Yellow |
- | offline | Red (Bright) |
-
- 4. Add Label:`["online"] of ["total"] are healthy`.
-10. Select **Apply**.
+For grid visualizations, rows in a column that uses the composite bar renderer are sorted by a dynamically computed value: the sum of the columns used to render the composite bar. In the previous examples, the value used for sorting is the sum of the **online**, **recovering**, and **offline** columns for that particular row.
+
+## Tile visualizations
+
+To make a composite bar renderer for a tile visualization:
+
+1. Select **Add** > **Add query**.
+1. Change the data source to `JSON`. Enter the data from the [previous example](#add-the-composite-bar-renderer).
+1. Change **Visualization** to `Tiles`.
+1. Run the query.
+1. Select **Tile Settings**.
+1. Under **Tile fields**, select **Left**.
+1. Under **Field settings**, set the following settings:
+ 1. **Use column**: `server`
+ 1. **Column renderer**: `Text`
+1. Under **Tile fields**, select **Bottom**.
+1. Under **Field settings**, set the following settings:
+ 1. **Use column**: `total`
+ 1. **Column renderer**: `Composite Bar`
+ 1. Under **Composite Bar Settings**, set the following settings:
+
+ | Column Name | Color |
+ |-|--|
+ | online | Green |
+ | recovering | Yellow |
+ | offline | Red (Bright) |
+
+ 1. For **Label**, enter `["online"] of ["total"] are healthy`.
+1. Select **Apply**.
Composite bar settings for tiles:
-![Screenshot of composite bar tile settings with settings described above.](./media/workbooks-composite-bar/tiles-settings.png)
-
-The Composite bar view for Tiles with the above settings will look like this:
-
-![Screenshot of composite bar tiles.](./media/workbooks-composite-bar/composite-bar-tiles.png)
-
-## Graphs visualization
-
-To make a composite bar renderer for Graphs visualization (type Hive Clusters), follow the instructions below.
-
-1. Select **Add** and *add query*.
-2. Change the data source to JSON enter the data from the [previous example](#adding-composite-bar-renderer).
-3. Change visualization to *Graphs*.
-4. Run query.
-5. Select **Graph Settings**.
-6. Select *Center Content* in Node Format Settings.
-7. Enter the settings below under *Field Settings*.
- 1. Use column: "total".
- 2. Column renderer: "Composite Bar".
- 3. Enter the following settings under *Composite Bar Settings*.
-
- |Column Name | Color |
- |-|--|
- | online | Green |
- | recovering | Yellow |
- | offline | Red (Bright) |
-
- 4. Add Label:`["online"] of ["total"] are healthy`.
-9. Enter the settings below under *Layout Settings*.
- 1. Graph Type: **Hive Clusters**.
- 2. Node ID select: "server".
- 3. Group By Field: "None".
- 4. Node Size: 100.
- 5. Margin between hexagons: 5.
- 6. Coloring Type type: **None**.
+![Screenshot that shows composite bar tile settings with the preceding settings.](./media/workbooks-composite-bar/tiles-settings.png)
+
+The composite bar view for tiles with the preceding settings will look like this example:
+
+![Screenshot that shows composite bar tiles.](./media/workbooks-composite-bar/composite-bar-tiles.png)
+
+## Graph visualizations
+
+To make a composite bar renderer for a graph visualization (type Hive Clusters):
+
+1. Select **Add** > **Add query**.
+2. Change **Data source** to `JSON`. Enter the data from the [previous example](#add-the-composite-bar-renderer).
+1. Change **Visualization** to `Graphs`.
+1. Run the query.
+1. Select **Graph Settings**.
+1. Under **Node Format Settings**, select **Center Content**.
+1. Under **Field settings**, set the following settings:
+ 1. **Use column**: `total`
+ 1. **Column renderer**: `Composite Bar`
+ 1. Under **Composite Bar Settings**, set the following settings:
+
+ |Column Name | Color |
+ |-|--|
+ | online | Green |
+ | recovering | Yellow |
+ | offline | Red (Bright) |
+
+ 1. For **Label**, enter `["online"] of ["total"] are healthy`.
+1. Under **Layout Settings**, set the following settings:
+ 1. **Graph Type**: `Hive Clusters`
+ 1. **Node ID**: `server`
+ 1. **Group By Field**: `None`
+ 1. **Node Size**: `100`
+ 1. **Margin between hexagons**: `5`
+ 1. **Coloring Type**: `None`
1. Select **Apply**.
-
+ Composite bar settings for graphs:
-![Screenshot of composite bar graph settings with settings described above.](./media/workbooks-composite-bar/graphs-settings.png)
+![Screenshot that shows composite bar graph settings with the preceding settings.](./media/workbooks-composite-bar/graphs-settings.png)
-The Composite bar view for Graph with the above settings will look like this:
+The composite bar view for a graph with the preceding settings will look like this example:
-![Screenshot of composite bar graphs with hive clusters.](./media/workbooks-composite-bar/composite-bar-graphs.png)
+![Screenshot that shows composite bar graphs with hive clusters.](./media/workbooks-composite-bar/composite-bar-graphs.png)
## Next steps
+[Get started with Azure Workbooks](workbooks-getting-started.md)
azure-monitor Workbooks Graph Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md
Title: Azure Monitor workbook graph visualizations
-description: Learn about all the Azure Monitor workbook graph visualizations.
+ Title: Azure Workbooks graph visualizations
+description: Learn about all the Azure Workbooks graph visualizations.
ibiza
Last updated 07/05/2022
# Graph visualizations
-Workbooks support visualizing arbitrary graphs based on data from logs to show the relationships between monitoring entities.
+Azure Workbooks graph visualizations render arbitrary graphs based on data from logs to show the relationships between monitoring entities.
-The graph below show data flowing in/out of a computer via various ports to/from external computers. It is colored by type (computer vs. port vs. external IP) and the edge sizes correspond to the amount of data flowing in-between. The underlying data comes from KQL query targeting VM connections.
+The following graph shows data flowing in and out of a computer via various ports to and from external computers. It's colored by type, for example, computer vs. port vs. external IP. The edge sizes correspond to the amount of data flowing between them. The underlying data comes from a KQL query targeting VM connections.
-[![Screenshot of tile summary view](./media/workbooks-graph-visualizations/graph.png)](./media/workbooks-graph-visualizations/graph.png#lightbox)
+[![Screenshot that shows a tile summary view.](./media/workbooks-graph-visualizations/graph.png)](./media/workbooks-graph-visualizations/graph.png#lightbox)
-## Adding a graph
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-2. Use the **Add query** link to add a log query control to the workbook.
-3. Select the query type as **Log**, resource type (for example, Application Insights) and the resources to target.
-4. Use the Query editor to enter the KQL for your analysis.
+## Add a graph
+
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Use the **Add query** link to add a log query control to the workbook.
+1. For **Query type**, select **Logs**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the KQL for your analysis.
```kusto let data = dependencies
The graph below show data flowing in/out of a computer via various ports to/from
| union (links) ```
-5. Set the visualization to **Graph**
-6. Select the **Graph Settings** button to open the settings pane
-7. In _Layout Fields_ at the bottom, set:
- * Node Id: `Id`
- * Source Id: `SourceId`
- * Target Id: `TargetId`
- * Edge Label: `None`
- * Edge Size: `Calls`
- * Node Size: `None`
- * Coloring Type: `Categorical`
- * Node Color Field: `Kind`
- * Color palette: `Pastel`
-8. In _Node Format Settings_ at the top, set:
- * _Top Content_- Use Column: `Name`, Column Renderer: `Text`
- * _Center Content_- Use Column: `Calls`, Column Renderer: `Big Number`, Color Palette: `None`
- * _Bottom Content_- Use Column: `Kind`, Column Renderer: `Text`
-9. Select the _Save and Close_ button at the bottom of the pane.
-
-[![Screenshot of tile summary view with the above query and settings.](./media/workbooks-graph-visualizations/graph-settings.png)](./media/workbooks-graph-visualizations/graph-settings.png#lightbox)
+1. Set **Visualization** to **Graph**.
+1. Select **Graph Settings** to open the **Graph Settings** pane.
+1. In **Node Format Settings** at the top, set:
+ * **Top Content**
+ - **Use column**: `Name`
+ * **Column renderer**: `Text`
+ * **Center Content**
+ - **Use column**: `Calls`
+ * **Column renderer**: `Big Number`
+ * **Color palette**: `None`
+ * **Bottom Content**
+ - **Use column**: `Kind`
+ * **Column renderer**: `Text`
+1. In **Layout Settings** at the bottom, set:
+ * **Node ID**: `Id`
+ * **Source ID**: `SourceId`
+ * **Target ID**: `TargetId`
+ * **Edge Label**: `None`
+ * **Edge Size**: `Calls`
+ * **Node Size**: `None`
+ * **Coloring Type**: `Categorical`
+ * **Node Color Field**: `Kind`
+ * **Color palette**: `Pastel`
+1. Select **Save and Close** at the bottom of the pane.
+
+[![Screenshot that shows a tile summary view with the preceding query and settings.](./media/workbooks-graph-visualizations/graph-settings.png)](./media/workbooks-graph-visualizations/graph-settings.png#lightbox)
## Graph settings
-| Setting | Explanation |
+| Setting | Description |
|:-|:-|
-| `Node Id` | Selects a column that provides the unique ID of nodes on the graph. Value of the column can be string or a number. |
-| `Source Id` | Selects a column that provides the IDs of source nodes for edges on the graph. Values must map to a value in the _Node Id_ column. |
-| `Target Id` | Selects a column that provides the IDs of target nodes for edges on the graph. Values must map to a value in the _Node Id_ column. |
+| `Node ID` | Selects a column that provides the unique ID of nodes on the graph. The value of the column can be a string or a number. |
+| `Source ID` | Selects a column that provides the IDs of source nodes for edges on the graph. Values must map to a value in the `Node Id` column. |
+| `Target ID` | Selects a column that provides the IDs of target nodes for edges on the graph. Values must map to a value in the `Node Id` column. |
| `Edge Label` | Selects a column that provides edge labels on the graph. |
-| `Edge Size` | Selects a column that provides the metric on which the edge widths will be based on. |
-| `Node Size` | Selects a column that provides the metric on which the node areas will be based on. |
+| `Edge Size` | Selects a column that provides the metric on which the edge widths will be based. |
+| `Node Size` | Selects a column that provides the metric on which the node areas will be based. |
| `Coloring Type` | Used to choose the node coloring scheme. | ## Node coloring types
-| Coloring Type | Explanation |
+| Coloring type | Description |
|:- |:| | `None` | All nodes have the same color. |
-| `Categorical` | Nodes are assigned colors based on the value or category from a column in the result set. In the example above, the coloring is based on the column _Kind_ of the result set. Supported palettes are `Default`, `Pastel`, and `Cool tone`. |
+| `Categorical` | Nodes are assigned colors based on the value or category from a column in the result set. In the preceding example, the coloring is based on the column `Kind` of the result set. Supported palettes are `Default`, `Pastel`, and `Cool tone`. |
| `Field Based` | In this type, a column provides specific RGB values to use for the node. Provides the most flexibility but usually requires more work to enable. | ## Node format settings
-Graphs authors can specify what content goes to the different parts of a node: top, left, center, right, and bottom. Graphs can use any of renderers workbooks supports (text, big number, spark lines, icon, etc.).
+You can specify what content goes to the different parts of a node: top, left, center, right, and bottom. Graphs can use any of the renderers that workbooks support, like text, big numbers, spark lines, and icons.
-## Field based node coloring
+## Field-based node coloring
-1. Switch the workbook to edit mode by clicking on the _Edit_ toolbar item.
-2. Use the **Add query** link to add a log query control to the workbook.
-3. Select the query type as **Log**, resource type (for example, Application Insights), and the resources to target.
-4. Use the Query editor to enter the KQL for your analysis.
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Use the **Add query** link to add a log query control to the workbook.
+1. For **Query type**, select **Logs**. For **Resource type**, select, for example, **Application Insights**, and select the resources to target.
+1. Use the query editor to enter the KQL for your analysis.
```kusto let data = dependencies
Graphs authors can specify what content goes to the different parts of a node: t
nodes | union (links) ```
-5. Set the visualization to *Graph*
-6. Select the **Graph Settings** button to open the settings pane.
-7. In *Layout Fields* at the bottom, set:
- * Node Id:`Id`
- * Source Id: `SourceId`
- * Target Id: `TargetId`
- * Edge Label: `None`
- * Edge Size: `Calls`
- * Node Size: `Node`
- * Coloring Type: `Field Based`
- * Node Color Field: `Color`
-8. In *Node Format Settings* at the top, enter the following.
- * In *Top Content*, set:
- * Use Column: `Name`.
- * Column renderer: `Text`.
- * In *Center Content*, set:
- * Use column: `Calls`
- * Column Renderer: `Big Number`
- * Color palette: `None`
- * In *Bottom Content*, set:
- * Use column: `Kind`
- * Column renderer: `Text`.
-9. Select the **Save and Close button** at the bottom of the pane.
-
-[![Screenshot showing the creation of a graph visualization with field base node coloring.](./media/workbooks-graph-visualizations/graph-field-based.png)](./media/workbooks-graph-visualizations/graph-field-based.png#lightbox)
+
+1. Set **Visualization** to `Graph`.
+1. Select **Graph Settings** to open the **Graph Settings** pane.
+1. In **Node Format Settings** at the top, set:
+ * **Top Content**:
+ * **Use column**: `Name`
+ * **Column renderer**: `Text`
+ * **Center Content**:
+ * **Use column**: `Calls`
+ * **Column renderer**: `Big Number`
+ * **Color palette**: `None`
+ * **Bottom Content**:
+ * **Use column**: `Kind`
+ * **Column renderer**: `Text`
+1. In **Layout Settings** at the bottom, set:
+ * **Node ID**:`Id`
+ * **Source ID**: `SourceId`
+ * **Target ID**: `TargetId`
+ * **Edge Label**: `None`
+ * **Edge Size**: `Calls`
+ * **Node Size**: `Node`
+ * **Coloring Type**: `Field Based`
+ * **Node Color Field**: `Color`
+1. Select **Save and Close** at the bottom of the pane.
+
+[![Screenshot that shows the creation of a graph visualization with field-based node coloring.](./media/workbooks-graph-visualizations/graph-field-based.png)](./media/workbooks-graph-visualizations/graph-field-based.png#lightbox)
## Next steps
-* Graphs also support Composite bar renderer. To learn more visit the [Composite Bar documentation](workbooks-composite-bar.md).
+* Graphs also support the composite bar renderer. To learn more, see [Composite bar renderer](workbooks-composite-bar.md).
* Learn more about the [data sources](workbooks-data-sources.md) you can use in workbooks.
azure-monitor Workbooks Honey Comb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-honey-comb.md
Title: Azure Monitor workbook honey comb visualizations
-description: Learn about Azure Monitor workbook honey comb visualizations.
+ Title: Azure Workbooks honeycomb visualizations
+description: Learn about Azure Workbooks honeycomb visualizations.
ibiza
Last updated 07/05/2022
-# Honey comb visualizations
-
-Honey combs allow high density views of metrics or categories that can optionally be grouped as clusters. They are useful in visually identifying hotspots and drilling in further.
-
-The image below shows the CPU utilization of virtual machines across two subscriptions. Each cell represents a virtual machine and the color/label represents its average CPU utilization (reds are hot machines). The virtual machines are clustered by subscription.
-
-[![Screenshot shows the CPU utilization of virtual machines across two subscriptions](.\media\workbooks-honey-comb\cpu-example.png)](.\media\workbooks-honey-comb\cpu-example.png#lightbox)
-
-## Adding a honey comb
-
-1. Switch the workbook to edit mode by clicking on the Edit toolbar item.
-2. Use **Add** at the bottom then *Add query* to add a log query control to the workbook.
-3. Select the "Logs" as the *Data source*, "Log Analytics" as the *Resource type*, and for *Resource* point to a workspace that has virtual machine performance log.
-4. Use the query editor to enter the KQL for your analysis.
-
-```kusto
- Perf
-| where CounterName == 'Available MBytes'
-| summarize CounterValue = avg(CounterValue) by Computer, _ResourceId
-| extend ResourceGroup = extract(@'/subscriptions/.+/resourcegroups/(.+)/providers/microsoft.compute/virtualmachines/.+', 1, _ResourceId)
-| extend ResourceGroup = iff(ResourceGroup == '', 'On-premises computers', ResourceGroup), Id = strcat(_ResourceId, '::', Computer)
-```
-
-5. Run query.
-6. Set the *visualization* to "Graph".
-7. Select **Graph Settings**.
- 1. In *Layout Fields* at the bottom, set:
- 1. Graph type: **Hive Clusters**.
- 2. Node Id:`Id`.
- 3. Group by: `None`.
- 4. Node Size: 100.
- 5. Margin between hexagons: `5`.
- 6. Coloring type: **Heatmap**.
- 7. Node Color Field: `CouterValue`.
- 8. Color palette: `Red to Green`.
- 9. Minimum value: `100`.
- 10. Maximum value: `2000`.
- 2. In *Node Format Settings* at the top, set:
- 1. Top Content:
- 1. Use Column: `Computer`.
- 2. Column Renderer" `Text`.
- 9. Center Content:
- 1. Use Column: `CounterValue`.
- 2. Column Renderer: `Big Number`.
- 3. Color Palette: `None`.
- 4. Check the *Custom number formatting* box.
- 5. Units: `Megabytes`.
- 6. Maximum fractional digits: `1`.
-8. Select **Save and Close** button at the bottom of the pane.
-
-[![Screenshot of query control, graph settings, and honey comb with the above query and settings](.\media\workbooks-honey-comb\available-memory.png)](.\media\workbooks-honey-comb\available-memory.png#lightbox)
-
-## Honey comb layout settings
-
-| Setting | Explanation |
+# Honeycomb visualizations
+
+Azure Workbooks honeycomb visualizations provide high-density views of metrics or categories that can optionally be grouped as clusters. They're useful for visually identifying hotspots and drilling in further.
+
+The following image shows the CPU utilization of virtual machines across two subscriptions. Each cell represents a virtual machine. The color/label represents its average CPU utilization. Red cells are hot machines. The virtual machines are clustered by subscription.
+
+[![Screenshot that shows the CPU utilization of virtual machines across two subscriptions.](.\media\workbooks-honey-comb\cpu-example.png)](.\media\workbooks-honey-comb\cpu-example.png#lightbox)
+
+## Add a honeycomb
+
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Select **Add** > **Add query** to add a log query control to the workbook.
+1. For **Data source**, select **Logs**. For **Resource type**, select **Log Analytics**. For **Resource**, point to a workspace that has a virtual machine performance log.
+1. Use the query editor to enter the KQL for your analysis.
+
+ ```kusto
+ Perf
+ | where CounterName == 'Available MBytes'
+ | summarize CounterValue = avg(CounterValue) by Computer, _ResourceId
+ | extend ResourceGroup = extract(@'/subscriptions/.+/resourcegroups/(.+)/providers/microsoft.compute/virtualmachines/.+', 1, _ResourceId)
+ | extend ResourceGroup = iff(ResourceGroup == '', 'On-premises computers', ResourceGroup), Id = strcat(_ResourceId, '::', Computer)
+ ```
+
+1. Run the query.
+1. Set **Visualization** to `Graph`.
+1. Select **Graph Settings**.
+ 1. In **Node Format Settings** at the top, set:
+ 1. **Top Content**
+ - **Use column**: `Computer`
+ - **Column renderer**: `Text`
+ 1. **Center Content**
+ - **Use column**: `CounterValue`
+ - **Column renderer**: `Big Number`
+ - **Color palette**: `None`
+ - Select the **Custom number formatting** checkbox.
+ - **Units**: `Megabytes`
+ - **Maximum fractional digits**: `1`
+ 1. In **Layout Settings** at the bottom, set:
+ - **Graph Type**: `Hive Clusters`
+ - **Node ID**: `Id`
+ - **Group By Field**: `None`
+ - **Node Size**: 100
+ - **Margin between hexagons**: `5`
+ - **Coloring Type**: `Heatmap`
+ - **Node Color Field**: `CounterValue`
+ - **Color palette**: `Red to Green`
+ - **Minimum value**: `100`
+ - **Maximum value**: `2000`
+
+1. Select **Save and Close** at the bottom of the pane.
+
+[![Screenshot that shows query control, graph settings, and honeycomb with the preceding query and settings.](.\media\workbooks-honey-comb\available-memory.png)](.\media\workbooks-honey-comb\available-memory.png#lightbox)
+
+## Honeycomb layout settings
+
+| Setting | Description |
|:- |:-|
-| `Node Id` | Selects a column that provides the unique ID of nodes. Value of the column can be string or a number. |
+| `Node ID` | Selects a column that provides the unique ID of nodes. The value of the column can be a string or a number. |
| `Group By Field` | Select the column to cluster the nodes on. |
-| `Node Size` | Sets the size of the hexagonal cells. Use with the `Margin between hexagons` property to customize the look of the honey comb chart. |
-| `Margin between hexagons` | Sets the gap between the hexagonal cells. Use with the `Node size` property to customize the look of the honey comb chart. |
-| `Coloring type` | Selects the scheme to use to color the node. |
-| `Node Color Field` | Selects a column that provides the metric on which the node areas will be based on. |
+| `Node Size` | Sets the size of the hexagonal cells. Use with the `Margin between hexagons` property to customize the look of the honeycomb chart. |
+| `Margin between hexagons` | Sets the gap between the hexagonal cells. Use with the `Node size` property to customize the look of the honeycomb chart. |
+| `Coloring Type` | Selects the scheme to use to color the node. |
+| `Node Color Field` | Selects a column that provides the metric on which the node areas will be based. |
## Node coloring types
-| Coloring Type | Explanation |
+| Coloring type | Description |
|:- |:-| | `None` | All nodes have the same color. |
-| `Categorical` | Nodes are assigned colors based on the value or category from a column in the result set. In the example above, the coloring is based on the column _Kind_ of the result set. Supported palettes are `Default`, `Pastel`, and `Cool tone`. |
-| `Heatmap` | In this type, the cells are colored based on a metric column and a color palette. This provides a simple way to highlight metrics spreads across cells. |
-| `Thresholds` | In this type, cell colors are set by threshold rules (for example, _CPU > 90% => Red, 60% > CPU > 90% => Yellow, CPU < 60% => Green_) |
+| `Categorical` | Nodes are assigned colors based on the value or category from a column in the result set. In the preceding example, the coloring is based on the column `Kind` of the result set. Supported palettes are `Default`, `Pastel`, and `Cool tone`. |
+| `Heatmap` | In this type, the cells are colored based on a metric column and a color palette. Color coding provides a simple way to highlight metrics spread across cells. |
+| `Thresholds` | In this type, cell colors are set by threshold rules, for example, _CPU > 90% => Red, 60% > CPU > 90% => Yellow, CPU < 60% => Green_. |
| `Field Based` | In this type, a column provides specific RGB values to use for the node. Provides the most flexibility but usually requires more work to enable. | ## Node format settings
-Honey comb authors can specify what content goes to the different parts of a node: top, left, center, right, and bottom. Authors are free to use any of the renderers workbooks supports (text, big number, spark lines, icon, etc.).
+You can specify what content goes to the different parts of a node: top, left, center, right, and bottom. You're free to use any of the renderers supported by workbooks, like text, big numbers, spark lines, and icons.
## Next steps -- Learn how to create a [Composite bar renderer in workbooks](workbooks-composite-bar.md).
+- Learn how to create a [composite bar renderer in workbooks](workbooks-composite-bar.md).
- Learn how to [import Azure Monitor log data into Power BI](../logs/log-powerbi.md).
azure-monitor Workbooks Map Visualizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-map-visualizations.md
Title: Azure Monitor workbook map visualizations
-description: Learn about Azure Monitor workbook map visualizations.
+ Title: Azure Workbooks map visualizations
+description: Learn about Azure Workbooks map visualizations.
ibiza
Last updated 07/05/2022
# Map visualization
-The map visualization aids in pin-pointing issues in specific regions and showing high level aggregated views of the monitoring data by providing capability to aggregate all the data mapped to each location/country/region.
+Azure Workbooks map visualizations aid in pinpointing issues in specific regions and showing high-level aggregated views of the monitoring data. Maps aggregate all the data mapped to each location, country, or region.
-The screenshot below shows the total transactions and end-to-end latency for different storage accounts. Here the size is determined by the total number of transactions and the color metrics below the map show the end-to-end latency. Upon first observation, the number of transactions in the **West US** region are small compared to the **East US** region, but the end-to-end latency for the **West US** region is higher than the **East US** region. This provides initial insight that something is amiss for **West US**.
+The following screenshot shows the total transactions and end-to-end latency for different storage accounts. Here the size is determined by the total number of transactions. The color metrics below the map show the end-to-end latency.
-![Screenshot of Azure location map](./media/workbooks-map-visualizations/map-performance-example.png)
+At first glance, the number of transactions in the **West US** region is small compared to the **East US** region. But the end-to-end latency for the **West US** region is higher than the **East US** region. This information provides initial insight that something is amiss for **West US**.
-## Adding a map
+![Screenshot that shows an Azure location map.](./media/workbooks-map-visualizations/map-performance-example.png)
-Map can be visualized if the underlying data/metrics has Latitude/Longitude information, Azure resource information, Azure location information or country/region, name, or country/region code.
+## Add a map
-### Using Azure location
+A map can be visualized if the underlying data or metrics have:
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Select **Add** then *Add query*.
-3. Change the *Data Source* to `Azure Resource Graph` then pick any subscription that has storage account.
-4. Enter the query below for your analysis and the select **Run Query**.
+- Latitude/longitude information.
+- Azure resource information.
+- Azure location information.
+- Country/region, name, or country/region code.
+
+### Use an Azure location
+
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Select **Add** > **Add query**.
+1. Change **Data Source** to **Azure Resource Graph**. Then select any subscription that has a storage account.
+1. Enter the following query for your analysis and then select **Run Query**.
```kusto where type =~ 'microsoft.storage/storageaccounts' | summarize count() by location ```
-5. Set *Size* to `Large`.
-6. Set the *Visualization* to `Map`.
-7. All the settings will be autopopulated. For custom settings, select the **Map Settings** button to open the settings pane.
-8. Below is a screenshot of the map visualization that shows storage accounts for each Azure region for the selected subscription.
-
-![Screenshot of Azure location map with the above query](./media/workbooks-map-visualizations/map-azure-location-example.png)
-
-### Using Azure resource
-
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Select **Add** then *Add Metric*.
-3. Use a subscription that has storage accounts.
-4. Change *Resource Type* to `storage account` and in *Resource* select multiple storage accounts.
-5. Select **Add Metric** and add a Transaction metric.
- 1. Namespace: `Account`
- 2. Metric: `Transactions`
- 3. Aggregation: `Sum`
-
- ![Screenshot of transaction metric](./media/workbooks-map-visualizations/map-transaction-metric.png)
-1. Select **Add Metric** and add Success E2E Latency metric.
- 1. Namespace: `Account`
- 1. Metric: `Success E2E Latency`
- 1. Aggregation: `Average`
-
- ![Screenshot of success end-to-end latency metric](./media/workbooks-map-visualizations/map-e2e-latency-metric.png)
-1. Set *Size* to `Large`.
-1. Set *Visualization* size to `Map`.
-1. In **Map Settings** set the following settings:
- 1. Location info using: `Azure Resource`
- 1. Azure resource field: `Name`
- 1. Size by: `microsoft.storage/storageaccounts-Transaction-Transactions`
- 1. Aggregation for location: `Sum of values`
- 1. Coloring Type: `Heatmap`
- 1. Color by: `microsoft.storage/storageaccounts-Transaction-SuccessE2ELatency`
- 1. Aggregation for color: `Sum of values`
- 1. Color palette: `Green to Red`
- 1. Minimum value: `0`
- 1. Metric value: `microsoft.storage/storageaccounts-Transaction-SuccessE2ELatency`
- 1. Aggregate other metrics by: `Sum of values`
- 1. Select the **custom formatting** box
- 1. Unit: `Milliseconds`
- 1. Style: `Decimal`
- 1. Maximum fractional digits: `2`
-
-### Using country/region
-
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Select **Add*, then *Add query*.
-3. Change the *Data source* to `Log`.
-4. Select *Resource type* as `Application Insights`, then pick any Application Insights resource that has pageViews data.
-5. Use the query editor to enter the KQL for your analysis and select **Run Query**.
+1. Set **Size** to `Large`.
+1. Set **Visualization** to `Map`.
+1. All the settings will be autopopulated. For custom settings, select **Map Settings** to open the settings pane.
+1. The following screenshot of the map visualization shows storage accounts for each Azure region for the selected subscription.
+
+![Screenshot that shows an Azure location map with the preceding query.](./media/workbooks-map-visualizations/map-azure-location-example.png)
+
+### Use an Azure resource
+
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Select **Add** > **Add Metric**.
+1. Use a subscription that has storage accounts.
+1. Change **Resource Type** to `storage account`. In **Resource**, select multiple storage accounts.
+1. Select **Add Metric** and add a transaction metric:
+ 1. **Namespace**: `Account`
+ 1. **Metric**: `Transactions`
+ 1. **Aggregation**: `Sum`
+
+ ![Screenshot that shows a transaction metric.](./media/workbooks-map-visualizations/map-transaction-metric.png)
+1. Select **Add Metric** and add the **Success E2E Latency** metric.
+ 1. **Namespace**: `Account`
+ 1. **Metric**: `Success E2E Latency`
+ 1. **Aggregation**: `Average`
+
+ ![Screenshot that shows a success end-to-end latency metric.](./media/workbooks-map-visualizations/map-e2e-latency-metric.png)
+1. Set **Size** to `Large`.
+1. Set **Visualization** to `Map`.
+1. In **Map Settings**, set:
+ 1. **Location info using**: `Azure Resource`
+ 1. **Azure resource field**: `Name`
+ 1. **Size by**: `microsoft.storage/storageaccounts-Transaction-Transactions`
+ 1. **Aggregation for location**: `Sum of values`
+ 1. **Coloring type**: `Heatmap`
+ 1. **Color by**: `microsoft.storage/storageaccounts-Transaction-SuccessE2ELatency`
+ 1. **Aggregation for color**: `Sum of values`
+ 1. **Color palette**: `Green to Red`
+ 1. **Minimum value**: `0`
+ 1. **Metric value**: `microsoft.storage/storageaccounts-Transaction-SuccessE2ELatency`
+ 1. **Aggregate other metrics by**: `Sum of values`
+ 1. Select the **Custom formatting** checkbox.
+ 1. **Unit**: `Milliseconds`
+ 1. **Style**: `Decimal`
+ 1. **Maximum fractional digits**: `2`
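If you want to confirm that your storage accounts actually emit the two metrics the map uses before you build the visualization, you can query them directly. The following Azure PowerShell sketch isn't part of the original walkthrough; it assumes the Az.Storage and Az.Monitor modules are installed, and the resource group and account names are placeholders.

```powershell
# Placeholder names: replace with your own resource group and storage account.
$storageId = (Get-AzStorageAccount -ResourceGroupName "my-rg" -Name "mystorageaccount").Id

# Sum of transactions over the default (recent) time range: the metric used for "Size by".
Get-AzMetric -ResourceId $storageId -MetricName "Transactions" -TimeGrain 00:01:00 -AggregationType Total

# Average end-to-end latency: the metric used for "Color by".
Get-AzMetric -ResourceId $storageId -MetricName "SuccessE2ELatency" -TimeGrain 00:01:00 -AggregationType Average
```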
+
+### Use country/region
+
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Select **Add** > **Add query**.
+1. Change **Data source** to `Log`.
+1. Select **Resource type** as `Application Insights`. Then select any Application Insights resource that has `pageViews` data.
+1. Use the query editor to enter the KQL for your analysis and select **Run Query**.
```kusto
pageViews
| limit 20
```

Map can be visualized if the underlying data/metrics has Latitude/Longitude info.
-6. Set the size values to `Large`.
-7. Set the visualization to `Map`.
-8. All the settings will be autopopulated. For custom settings, select **Map Settings**.
+1. Set **Size** to `Large`.
+1. Set **Visualization** to `Map`.
+1. All the settings will be autopopulated. For custom settings, select **Map Settings**.
-### Using latitude/location
+### Use latitude/location
-1. Switch the workbook to edit mode by selecting **Edit** in the toolbar.
-2. Select **Add*, then *Add query*.
-3. Change the *Data source* to `JSON`.
-1. Enter the JSON data in below in the query editor and select **Run Query**.
-1. Set the *Size* values to `Large`.
-1. Set the *Visualization* to `Map`.
-1. In **Map Settings** under "Metric Settings", set *Metric Label* to `displayName` then select **Save and Close**.
+1. Switch the workbook to edit mode by selecting **Edit**.
+1. Select **Add** > **Add query**.
+1. Change **Data source** to `JSON`.
+1. Enter the JSON data in the query editor and select **Run Query**.
+1. Set **Size** values to `Large`.
+1. Set **Visualization** to `Map`.
+1. In **Map Settings** under **Metric Settings**, set **Metric Label** to `displayName`. Then select **Save and Close**.
-The map visualization below shows users for each latitude and longitude location with the selected label for metrics.
+The following map visualization shows users for each latitude and longitude location with the selected label for metrics.
-![Screenshot of a map visualization that shows users for each latitude and longitude location with the selected label for metrics](./media/workbooks-map-visualizations/map-latitude-longitude-example.png)
+![Screenshot that shows a map visualization that shows users for each latitude and longitude location with the selected label for metrics.](./media/workbooks-map-visualizations/map-latitude-longitude-example.png)
## Map settings
+Map settings include layout, color, and metrics.
+ ### Layout settings
-| Setting | Explanation |
+| Setting | Description |
|:-|:-|
-| `Location info using` | Select a way to get the location of items shown on the map. <br> **Latitude/Longitude**: Select this option if there are columns with latitude and longitude information. Each row with latitude and longitude data will be shown as distinct item on the map. <br> **Azure location**: Select this option if there is a column that has Azure Location (eastus, westeurope, centralindia, etc.) information. Specify that column and it will fetch the corresponding latitude and longitude for each Azure location, group same location rows(based on Aggregation specified) together to show the locations on the map. <br> **Azure resource**: Select this option if there is a column that has Azure resource information (storage account, cosmosdb account, etc.). Specify that column and it will fetch the corresponding latitude and longitude for each Azure resource, group same location (Azure location) rows (based on Aggregation specified) together to show the locations on the map. <br> **Country/Region**: Select this option if there is a column that has country/region name/code (US, United States, IN, India, CN, China) information. Specify that column and it will fetch the corresponding latitude and longitude for each Country/Region/Code and group the rows together with same Country-Region Code/Country-Region name to show the locations on the map. Country Name and Country code won't be grouped together as a single entity on the map.
-| `Latitude/Longitude` | These two options will be visible if Location Info field value is: Latitude/Longitude. Select the column that has latitude in the latitude field and longitude in the longitude field respectively. |
-| `Azure location field` | This option will be visible if Location Info field value is: Azure location. Select the column that the Azure location information. |
-| `Azure resource field` | This option will be visible if Location Info field value is: Azure resource. Select the column that the Azure resource information. |
-| `Country/Region field` | This option will be visible if Location Info field value is: Country or region. Select the column that the Country/Region information. |
-| `Size by` | This option controls the size of the items shown on the map. Size depends on value in the column specified by the user. Currently, radius of the circle is directly proportional to the square root of the column's value. If 'None...' is selected, all the circles will show the default region size.|
-| `Aggregation for location` | This field specifies how to aggregate the **size by** columns that has same Azure Location/Azure Resource/Country-Region. |
-| `Minimum region size` | This field specifies what is the minimum radius of the item shown on the map. This is used when there is a significant difference between the size by column's values, therefore smaller items are hardly visible on the map. |
-| `Maximum region size` | This field specifies what is the maximum radius of the item shown on the map. This is used when the size by column's values are extremely large and they are covering huge area of the map.|
-| `Default region size` | This field specifies what is the default radius of the item shown on the map. The default radius is used when the Size By column is 'None...' or the value is 0.|
+| `Location info using` | Select a way to get the location of items shown on the map. <br> **Latitude/Longitude**: Select this option if there are columns with latitude and longitude information. Each row with latitude and longitude data will be shown as a distinct item on the map. <br> **Azure location**: Select this option if there's a column that has Azure location (eastus, westeurope, centralindia) information. Specify that column and it will fetch the corresponding latitude and longitude for each Azure location. It will group the same location rows together based on the aggregation specified to show the locations on the map. <br> **Azure resource**: Select this option if there's a column that has Azure resource information such as an Azure Storage account and an Azure Cosmos DB account. Specify that column and it will fetch the corresponding latitude and longitude for each Azure resource. It will group the same location (Azure location) rows together based on the aggregation specified to show the locations on the map. <br> **Country/Region**: Select this option if there's a column that has country/region name/code (US, United States, IN, India, CN, China) information. Specify that column and it will fetch the corresponding latitude and longitude for each country/region/code. It will group rows together with the same Country-Region Code/Country-Region Name to show the locations on the map. Country Name and Country Code won't be grouped together as a single entity on the map.
+| `Latitude/Longitude` | These two options will be visible if the `Location Info` field value is **Latitude/Longitude**. Select the column that has latitude in the `Latitude` field and longitude in the `Longitude` field, respectively. |
+| `Azure location field` | This option will be visible if the `Location Info` field value is **Azure location**. Select the column that has the Azure location information. |
+| `Azure resource field` | This option will be visible if the `Location Info` field value is **Azure resource**. Select the column that has the Azure resource information. |
+| `Country/Region field` | This option will be visible if the `Location Info` field value is **Country/Region**. Select the column that has the country/region information. |
+| `Size by` | This option controls the size of the items shown on the map. Size depends on the value in the column specified by the user. Currently, the radius of the circle is directly proportional to the square root of the column's value. If **None** is selected, all the circles will show the default region size.|
+| `Aggregation for location` | This field specifies how to aggregate the `Size by` columns that have the same Azure Location/Azure Resource/Country-Region. |
+| `Minimum region size` | This field specifies the minimum radius of the item shown on the map. It's used when there's a significant difference between the `Size by` column's values, which causes smaller items to be hardly visible on the map. |
+| `Maximum region size` | This field specifies the maximum radius of the item shown on the map. It's used when the `Size by` column's values are extremely large and they're covering a huge area of the map.|
+| `Default region size` | This field specifies the default radius of the item shown on the map. The default radius is used when the `Size by` column is **None** or the value is 0.|
| `Minimum value` | The minimum value used to compute region size. If not specified, the minimum value will be the smallest value after aggregation. |
| `Maximum value` | The maximum value used to compute region size. If not specified, the maximum value will be the largest value after aggregation.|
-| `Opacity of items on Map` | This field specifies how transparent are the items shown on the map. Opacity of 1 means, no transparency, where opacity of 0 means, items won't be visible on the map. If there are too many items on the map, opacity can be set to low value so that all the overlapping items are visible.|
+| `Opacity of items on Map` | This field specifies the transparency of items shown on the map. Opacity of 1 means no transparency. Opacity of 0 means that items won't be visible on the map. If there are too many items on the map, opacity can be set to a low value so that all the overlapping items are visible.|
### Color settings
-| Coloring Type | Explanation |
+| Coloring type | Description |
|:- |:-|
| `None` | All nodes have the same color. |
-| `Thresholds` | In this type, cell colors are set by threshold rules (for example, _CPU > 90% => Red, 60% > CPU > 90% => Yellow, CPU < 60% => Green_). <ul><li> **Color by**: Value of this column will be used by Thresholds/Heatmap logic.</li> <li>**Aggregation for color**: This field specifies how to aggregate the **color by** columns that has the same Azure Location/Azure Resource/Country-Region. </li> <ul> |
-| `Heatmap` | In this type, the cells are colored based on the color palette and Color by field. This will also have same **Color by** and **Aggregation for color** options as thresholds. |
+| `Thresholds` | In this type, cell colors are set by threshold rules, for example, _CPU > 90% => Red, 60% > CPU > 90% => Yellow, CPU < 60% => Green_. <ul><li> `Color by`: The value of this column will be used by Thresholds/Heatmap logic.</li> <li>`Aggregation for color`: This field specifies how to aggregate the `Color by` columns that have the same Azure Location/Azure Resource/Country-Region. </li> <ul> |
+| `Heatmap` | In this type, the cells are colored based on the color palette and `Color by` field. This type will also have the same `Color by` and `Aggregation for color` options as thresholds. |
### Metric settings
-| Setting | Explanation |
+
+| Setting | Description |
|:- |:-|
-| `Metric Label` | This option will be visible if Location Info field value is: Latitude/Longitude. Using this feature, user can pick the label to show for metrics shown below the map. |
-| `Metric Value` | This field specifies metric value to be shown below the map. |
+| `Metric Label` | This option will be visible if the `Location Info` field value is **Latitude/Longitude**. With this feature, you can pick the label to show for the metrics shown below the map. |
+| `Metric Value` | This field specifies a metric value to be shown below the map. |
| `Create 'Others' group after` | This field specifies the limit before an "Others" group is created. |
-| `Aggregate 'Others' metrics by` | This field specifies the aggregation used for "Others" group if it is shown. |
-| `Custom formatting` | Use this field to set units, style, and formatting options for number values. This is same as [grid's custom formatting](../visualize/workbooks-grid-visualizations.md#custom-formatting).|
+| `Aggregate 'Others' metrics by` | This field specifies the aggregation used for an "Others" group if it's shown. |
+| `Custom formatting` | Use this field to set units, style, and formatting options for number values. This setting is the same as a [grid's custom formatting](../visualize/workbooks-grid-visualizations.md#custom-formatting).|
## Next steps
-- Learn how to create [honey comb visualizations in workbooks](../visualize/workbooks-honey-comb.md).
+Learn how to create [honeycomb visualizations in workbooks](../visualize/workbooks-honey-comb.md).
azure-monitor Workbooks Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-templates.md
Title: Azure Workbooks templates
-description: Learn how to use workbooks templates.
+description: Learn how to use Azure Workbooks templates.
Last updated 07/05/2022
-# Azure Workbook templates
+# Azure Workbooks templates
-Workbook templates are curated reports designed for flexible reuse by multiple users and teams. Opening a template creates a transient workbook populated with the content of the template. Workbooks are visible in green and Workbook templates are visible in purple.
+Azure Workbooks templates are curated reports designed for flexible reuse by multiple users and teams. When you open a template, a transient workbook is created and populated with the content of the template. Workbooks are visible in green. Workbook templates are visible in purple.
-You can adjust the template-based workbook parameters and perform analysis without fear of breaking the future reporting experience for colleagues. If you open a template, make some adjustments, and then select the save icon, you will be saving the template as a workbook, which would then show in green, leaving the original template untouched.
+You can adjust the template-based workbook parameters and perform analysis without fear of breaking the future reporting experience for colleagues. If you open a template, make some adjustments, and save it, the template is saved as a workbook. This workbook appears in green. The original template is left untouched.
-The design and architecture of templates is also different from saved workbooks. Saving a workbook creates an associated Azure Resource Manager resource, whereas the transient workbook created when opening a template doesn't have a unique resource associated with it. The resources associated with a workbook affect who has access to that workbook. Learn more about [Azure workbooks access control](workbooks-overview.md#access-control).
+The design and architecture of templates is also different from saved workbooks. Saving a workbook creates an associated Azure Resource Manager resource. But the transient workbook that's created when you open a template doesn't have a unique resource associated with it. The resources associated with a workbook affect who has access to that workbook. Learn more about [Azure Workbooks access control](workbooks-overview.md#access-control).
## Explore a workbook template Select **Application Failure Analysis** to see one of the default application workbook templates.
- :::image type="content" source="./media/workbooks-overview/failure-analysis.png" alt-text="Screenshot of application failure analysis template." border="false" lightbox="./media/workbooks-overview/failure-analysis.png":::
+ :::image type="content" source="./media/workbooks-overview/failure-analysis.png" alt-text="Screenshot that shows the Application Failure Analysis template." border="false" lightbox="./media/workbooks-overview/failure-analysis.png":::
-Opening the template creates a temporary workbook for you to be able to interact with. By default, the workbook opens in reading mode, which displays only the information for the intended analysis experience created by the original template author.
+When you open the template, a temporary workbook is created that you can interact with. By default, the workbook opens in read mode. Read mode displays only the information for the intended analysis experience that was created by the original template author.
-You can adjust the subscription, targeted apps, and the time range of the data you want to display. Once you have made those selections, the grid of HTTP Requests is also interactive, and selecting an individual row changes the data rendered in the two charts at the bottom of the report.
+You can adjust the subscription, targeted apps, and the time range of the data you want to display. After you make those selections, the grid of HTTP Requests is also interactive. Selecting an individual row changes the data rendered in the two charts at the bottom of the report.
-## Editing a template
+## Edit a template
-To understand how this workbook template is put together, you need to swap to editing mode by selecting **Edit**.
+To understand how this workbook template is put together, switch to edit mode by selecting **Edit**.
- :::image type="content" source="./media/workbooks-overview/edit.png" alt-text="Screenshot of edit button in workbooks." border="false" :::
+ :::image type="content" source="./media/workbooks-overview/edit.png" alt-text="Screenshot that shows the Edit button." border="false" :::
-Once you have switched to editing mode, you will notice **Edit** boxes to the right, corresponding with each individual aspect of your workbook.
+**Edit** buttons on the right correspond with each individual aspect of your workbook.
- :::image type="content" source="./media/workbooks-overview/edit-mode.png" alt-text="Screenshot of Edit button." border="false" lightbox="./media/workbooks-overview/edit-mode.png":::
+ :::image type="content" source="./media/workbooks-overview/edit-mode.png" alt-text="Screenshot that shows Edit buttons." border="false" lightbox="./media/workbooks-overview/edit-mode.png":::
-If we select the edit button immediately under the grid of request data, we can see that this part of our workbook consists of a Kusto query against data from an Application Insights resource.
+If you select the **Edit** button immediately under the grid of requested data, you can see that this part of the workbook consists of a Kusto query against data from an Application Insights resource.
- :::image type="content" source="./media/workbooks-overview/kusto.png" alt-text="Screenshot of underlying Kusto query." border="false" lightbox="./media/workbooks-overview/kusto.png":::
+ :::image type="content" source="./media/workbooks-overview/kusto.png" alt-text="Screenshot that shows the underlying Kusto query." border="false" lightbox="./media/workbooks-overview/kusto.png":::
-Selecting the other **Edit** buttons on the right will reveal some of the core components that make up workbooks like markdown-based [text boxes](../visualize/workbooks-text-visualizations.md), [parameter selection](../visualize/workbooks-parameters.md) UI elements, and other [chart/visualization types](workbooks-visualizations.md).
+Select the other **Edit** buttons on the right to see some of the core components that make up workbooks, like:
-Exploring the pre-built templates in edit-mode and then modifying them to fit your needs and save your own custom workbook is an excellent way to start to learn about what is possible with Azure Monitor workbooks.
+- Markdown-based [text boxes](../visualize/workbooks-text-visualizations.md).
+- [Parameter selection](../visualize/workbooks-parameters.md) UI elements.
+- Other [chart/visualization types](workbooks-visualizations.md).
+
+Exploring the prebuilt templates in edit mode, modifying them to fit your needs, and saving your own custom workbook is a good way to start to learn about what's possible with Azure Workbooks.
azure-resource-manager Bicep Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-10-15/notebook-workspaces/list-connection-info) |
| Microsoft.DomainRegistration | [listDomainRecommendations](/rest/api/appservice/domains/listrecommendations) |
| Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) |
-| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/domains/list-shared-access-keys) |
-| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/topics/list-shared-access-keys) |
+| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) |
+| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/topics/list-shared-access-keys) |
| Microsoft.EventHub/namespaces/authorizationRules | [listkeys](/rest/api/eventhub) |
| Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listkeys](/rest/api/eventhub) |
| Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listkeys](/rest/api/eventhub) |
azure-resource-manager Scope Extension Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/scope-extension-resources.md
Title: Scope on extension resource types (Bicep) description: Describes how to use the scope property when deploying extension resource types with Bicep. Previously updated : 02/11/2022 Last updated : 07/12/2022 # Set scope for extension resources in Bicep
The same requirements apply to extension resources as other resource when target
* [Management group deployments](deploy-to-management-group.md) * [Tenant deployments](deploy-to-tenant.md)
+The resourceGroup and subscription properties are only allowed on modules. These properties are not allowed on individual resources. Use modules if you want to deploy an extension resource with the scope set to a resource in a different resource group.
+
+The following example shows how to apply a lock on a storage account that resides in a different resource group.
+
+* **main.bicep:**
+
+ ```bicep
+ param resourceGroup2Name string
+ param storageAccountName string
+
+ module applyStoreLock './storageLock.bicep' = {
+ name: 'addStorageLock'
+ scope: resourceGroup(resourceGroup2Name)
+ params: {
+ storageAccountName: storageAccountName
+ }
+ }
+ ```
+
+* **storageLock.bicep:**
+
+ ```bicep
+ param storageAccountName string
+
+ resource storage 'Microsoft.Storage/storageAccounts@2021-09-01' existing = {
+ name: storageAccountName
+ }
+
+ resource storeLock 'Microsoft.Authorization/locks@2017-04-01' = {
+ scope: storage
+ name: 'storeLock'
+ properties: {
+ level: 'CanNotDelete'
+ notes: 'Storage account should not be deleted.'
+ }
+ }
+ ```
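To try the example end to end, you could deploy *main.bicep* at the resource group scope with Azure PowerShell. This is a minimal sketch rather than part of the original article: the resource group and storage account names are placeholders, and it assumes Azure PowerShell can invoke the Bicep CLI to build the file.

```powershell
# rg-main hosts the deployment; rg-storage contains the existing storage account to lock (placeholder names).
New-AzResourceGroupDeployment `
  -ResourceGroupName "rg-main" `
  -TemplateFile ./main.bicep `
  -resourceGroup2Name "rg-storage" `
  -storageAccountName "mystorageaccount"
```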
+ ## Next steps For a full list of extension resource types, see [Resource types that extend capabilities of other resources](../management/extension-resource-types.md).
azure-resource-manager Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/best-practices.md
When your template works as expected, we recommend you continue using the same A
Don't use a parameter for the API version. Resource properties and values can vary by API version. IntelliSense in a code editor can't determine the correct schema when the API version is set to a parameter. If you pass in an API version that doesn't match the properties in your template, the deployment will fail.
-Don't use variables for the API version.
+Don't use variables for the API version.
## Resource dependencies
The following information can be helpful when you work with [resources](./syntax
For more information about connecting to virtual machines, see: * [What is Azure Bastion?](../../bastion/bastion-overview.md)
- * [How to connect and sign on to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-logon.md)
- * [Setting up WinRM access for Virtual Machines in Azure Resource Manager](../../virtual-machines/windows/winrm.md)
+ * [How to connect and sign on to an Azure virtual machine running Windows](../../virtual-machines/windows/connect-rdp.md)
+ * [Setting up WinRM access for Virtual Machines in Azure Resource Manager](../../virtual-machines/windows/connect-winrm.md)
* [Connect to a Linux VM](../../virtual-machines/linux-vm-connect.md) * The `domainNameLabel` property for public IP addresses must be unique. The `domainNameLabel` value must be between 3 and 63 characters long, and follow the rules specified by this regular expression: `^[a-z][a-z0-9-]{1,61}[a-z0-9]$`. Because the `uniqueString` function generates a string that is 13 characters long, the `dnsPrefixString` parameter is limited to 50 characters.
After you've completed your template, run the test toolkit to see if there are w
## Next steps * For information about the structure of the template file, see [Understand the structure and syntax of ARM templates](./syntax.md).
-* For recommendations about how to build templates that work in all Azure cloud environments, see [Develop ARM templates for cloud consistency](./template-cloud-consistency.md).
+* For recommendations about how to build templates that work in all Azure cloud environments, see [Develop ARM templates for cloud consistency](./template-cloud-consistency.md).
azure-resource-manager Scope Extension Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/scope-extension-resources.md
Title: Scope on extension resource types description: Describes how to use the scope property when deploying extension resource types. Previously updated : 01/13/2021 Last updated : 07/11/2022
The following example creates a storage account and applies a role to it.
:::code language="json" source="~/resourcemanager-templates/azure-resource-manager/scope/storageandrole.json" highlight="56":::
+The resourceGroup and subscription properties are only allowed on nested or linked deployments. These properties are not allowed on individual resources. Use nested or linked deployments if you want to deploy an extension resource with the scope set to a resource in a different resource group.
+ ## Next steps * To understand how to define parameters in your template, see [Understand the structure and syntax of ARM templates](./syntax.md).
azure-resource-manager Template Functions Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md
The possible uses of `list*` are shown in the following table.
| Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-10-15/database-accounts/list-keys) |
| Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2021-10-15/notebook-workspaces/list-connection-info) |
| Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) |
-| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/domains/list-shared-access-keys) |
-| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2021-12-01/topics/list-shared-access-keys) |
+| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) |
+| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/topics/list-shared-access-keys) |
| Microsoft.EventHub/namespaces/authorizationRules | [listKeys](/rest/api/eventhub) |
| Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listKeys](/rest/api/eventhub) |
| Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listKeys](/rest/api/eventhub) |
azure-video-indexer Deploy With Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/deploy-with-arm-template.md
Title: Deploy Azure Video Indexer with ARM template
-description: Learn how to create an Azure Video Indexer account by using Azure Resource Manager (ARM) template.
+ Title: Deploy Azure Video Indexer by using an ARM template
+description: Learn how to create an Azure Video Indexer account by using an Azure Resource Manager (ARM) template.
Last updated 05/23/2022
-# Tutorial: deploy Azure Video Indexer with ARM template
+# Tutorial: Deploy Azure Video Indexer by using an ARM template
-## Overview
-
-In this tutorial, you will create an Azure Video Indexer account by using Azure Resource Manager (ARM) template (preview).
-The resource will be deployed to your subscription and will create the Azure Video Indexer resource based on parameters defined in the avam.template file.
+In this tutorial, you'll create an Azure Video Indexer account by using the Azure Resource Manager template (ARM template, which is in preview). The resource will be deployed to your subscription and will create the Azure Video Indexer resource based on parameters defined in the *avam.template* file.
> [!NOTE]
-> This sample is *not* for connecting an existing Azure Video Indexer classic account to an ARM-based Azure Video Indexer account.
-> For full documentation on Azure Video Indexer API, visit the [Developer portal](https://aka.ms/avam-dev-portal) page.
-> For the latest API version for Microsoft.VideoIndexer, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
+> This sample is *not* for connecting an existing Azure Video Indexer classic account to a Resource Manager-based Azure Video Indexer account.
+>
+> For full documentation on the Azure Video Indexer API, visit the [developer portal](https://aka.ms/avam-dev-portal). For the latest API version for *Microsoft.VideoIndexer*, see the [template reference](/azure/templates/microsoft.videoindexer/accounts?tabs=bicep).
## Prerequisites
-* An Azure Media Services (AMS) account. You can create one for free through the [Create AMS Account](/azure/media-services/latest/account-create-how-to).
+You need an Azure Media Services account. You can create one for free through [Create a Media Services account](/azure/media-services/latest/account-create-how-to).
## Deploy the sample -
-### Option 1: Click the "Deploy To Azure Button", and fill in the missing parameters
+### Option 1: Select the button for deploying to Azure, and fill in the missing parameters
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fmedia-services-video-indexer%2Fmaster%2FARM-Quick-Start%2Favam.template.json) -
-### Option 2 : Deploy using PowerShell Script
+### Option 2: Deploy by using a PowerShell script
-1. Open The [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Quick-Start/avam.template.json) file and inspect its content.
-2. Fill in the required parameters (see below)
-3. Run the Following PowerShell commands:
+1. Open the [template file](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/ARM-Quick-Start/avam.template.json) and inspect its contents.
+2. Fill in the required parameters.
+3. Run the following PowerShell commands:
- * Create a new Resource group on the same location as your Azure Video Indexer account, using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.
+ * Create a new resource group on the same location as your Azure Video Indexer account by using the [New-AzResourceGroup](/powershell/module/az.resources/new-azresourcegroup) cmdlet.
- ```powershell
- New-AzResourceGroup -Name myResourceGroup -Location eastus
- ```
+ ```powershell
+ New-AzResourceGroup -Name myResourceGroup -Location eastus
+ ```
- * Deploy the template to the resource group using the [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) cmdlet.
+ * Deploy the template to the resource group by using the [New-AzResourceGroupDeployment](/powershell/module/az.resources/new-azresourcegroupdeployment) cmdlet.
- ```powershell
- New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./avam.template.json
- ```
+ ```powershell
+ New-AzResourceGroupDeployment -ResourceGroupName myResourceGroup -TemplateFile ./avam.template.json
+ ```
> [!NOTE]
-> If you would like to work with bicep format, see [Deploy by using Bicep](./deploy-with-bicep.md).
+> If you want to work with Bicep format, see [Deploy by using Bicep](./deploy-with-bicep.md).
## Parameters ### name * Type: string
-* Description: Specifies the name of the new Azure Video Indexer account.
-* required: true
+* Description: The name of the new Azure Video Indexer account.
+* Required: true
### location * Type: string
-* Description: Specifies the Azure location where the Azure Video Indexer account should be created.
+* Description: The Azure location where the Azure Video Indexer account should be created.
* Required: false > [!NOTE]
-> You need to deploy Your Azure Video Indexer account in the same location (region) as the associated Azure Media Services(AMS) resource exists.
+> You need to deploy your Azure Video Indexer account in the same location (region) as the associated Azure Media Services resource.
### mediaServiceAccountResourceId * Type: string
-* Description: The Resource ID of the Azure Media Services(AMS) resource.
+* Description: The resource ID of the Azure Media Services resource.
* Required: true ### managedIdentityId * Type: string
-* Description: The Resource ID of the Managed Identity used to grant access between Azure Media Services(AMS) resource and the Azure Video Indexer account.
+* Description: The resource ID of the managed identity that's used to grant access between the Azure Media Services resource and the Azure Video Indexer account.
* Required: true ### tags * Type: object
-* Description: Array of objects that represents custom user tags on the Azure Video Indexer account
-
- Required: false
+* Description: The array of objects that represents custom user tags on the Azure Video Indexer account.
+* Required: false
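To see how these parameters fit together, here's a hedged Azure PowerShell sketch that passes them to the template from the PowerShell option above. Every value is a placeholder; substitute your own subscription ID, resource names, and identity.

```powershell
# Placeholder values only: supply your own resource IDs, names, and tags.
$parameters = @{
    name                          = "myVideoIndexerAccount"
    location                      = "eastus"
    mediaServiceAccountResourceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.Media/mediaservices/myMediaServicesAccount"
    managedIdentityId             = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity"
    tags                          = @{ environment = "test" }
}

New-AzResourceGroupDeployment `
  -ResourceGroupName myResourceGroup `
  -TemplateFile ./avam.template.json `
  -TemplateParameterObject $parameters
```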
## Reference documentation If you're new to Azure Video Indexer, see:
-* [Azure Video Indexer Documentation](./index.yml)
-* [Azure Video Indexer Developer Portal](https://api-portal.videoindexer.ai/)
-* After completing this tutorial, head to other Azure Video Indexer samples, described on [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md)
+* [Azure Video Indexer documentation](./index.yml)
+* [Azure Video Indexer developer portal](https://api-portal.videoindexer.ai/)
+
+After you complete this tutorial, head to other Azure Video Indexer samples described in [README.md](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/README.md).
If you're new to template deployment, see: * [Azure Resource Manager documentation](../azure-resource-manager/index.yml)
-* [Deploy Resources with ARM Template](../azure-resource-manager/templates/deploy-powershell.md)
-* [Deploy Resources with Bicep and Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
+* [Deploy resources with ARM templates](../azure-resource-manager/templates/deploy-powershell.md)
+* [Deploy resources with Bicep and the Azure CLI](../azure-resource-manager/bicep/deploy-cli.md)
## Next steps
-[Connect an existing classic paid Azure Video Indexer account to ARM-based account](connect-classic-account-to-arm.md)
+Connect a [classic paid Azure Video Indexer account to a Resource Manager-based account](connect-classic-account-to-arm.md).
azure-web-pubsub Howto Scale Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-scale-autoscale.md
+
+ Title: Auto scale Azure Web PubSub service
+description: Learn how to autoscale Azure WebPubSub service.
+++ Last updated : 07/12/2022+++
+# Automatically scale units of an Azure Web PubSub service
+
+> [!IMPORTANT]
+> Autoscaling is only available in the Azure Web PubSub service Premium tier.
+
+The Azure Web PubSub service Premium tier supports an *autoscale* feature, which is an implementation of [Azure Monitor autoscale](../azure-monitor/autoscale/autoscale-overview.md). Autoscale lets you automatically scale the unit count of your Web PubSub service to match the actual load on the service, which helps you optimize performance and cost for your application.
+
+Azure Web PubSub adds its own [service metrics](concept-metrics.md), but most of the user interface is shared with other [Azure services that support autoscaling](../azure-monitor/autoscale/autoscale-overview.md#supported-services-for-autoscale). If you're new to Azure Monitor metrics, review [Azure Monitor Metrics aggregation and display explained](../azure-monitor/essentials/metrics-aggregation-explained.md) before digging into Web PubSub service metrics.
+
+## Understanding autoscale in the Web PubSub service
+
+Autoscale lets you set conditions that dynamically change the units allocated to the Web PubSub service while the service is running. Autoscale conditions are based on metrics, such as **Server Load**. You can also configure autoscale to run on a schedule, such as every day between certain hours.
+
+For example, you can implement the following scaling scenarios using autoscale.
+
+- Increase units when **Connection Quota Utilization** is above 70%.
+- Decrease units when the **Server Load** is below 20%.
+- Create a schedule to add more units during peak hours and reduce units during off hours.
+
+Multiple factors affect the performance of the Web PubSub service, and no single metric provides a complete view of system performance. For example, if you're sending a large number of messages, you might need to scale out even though connection quota utilization is relatively low. The combination of **Connection Quota Utilization** and **Server Load** gives an indication of overall system load. The following guidelines apply:
+
+- Scale out if **Connection Quota Utilization** is over 80-90%. Scaling out before the connection quota is exhausted ensures that you'll have sufficient buffer to accept new connections before scale-out takes effect.
+- Scale out if the **Server Load** is over 80-90%. Scaling early ensures that the service has enough capacity to maintain performance during the scale-out operation.
+
+The autoscale operation usually takes effect 3-5 minutes after it's triggered. It's important not to change the units too often. A good rule of thumb is to allow 30 minutes from the previous autoscale before performing another autoscale operation. In some cases, you might need to experiment to find the optimal autoscale interval.
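Before you pick thresholds, it can help to see which metrics your Web PubSub resource exposes and which aggregations they support. The following Azure PowerShell sketch isn't part of the original article; the resource ID is a placeholder.

```powershell
# Placeholder resource ID: replace with your own Web PubSub resource.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.SignalRService/webPubSub/myWebPubSub"

# List the metric definitions (metric names, units, supported aggregations) exposed by the resource.
Get-AzMetricDefinition -ResourceId $resourceId
```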
+
+## Custom autoscale settings
+
+Open the autoscale settings page:
+
+1. Go to the [Azure portal](https://portal.azure.com).
+1. Open the **WebPubSub** service page.
+1. From the menu on the left, under **Settings** choose **Scale out**.
+1. Select the **Configure** tab. If you have a Premium tier WebPubSub instance, you'll see two options for **Choose how to scale your resource**:
+ - **Manual scale**, which lets you manually change the number of units.
+ - **Custom autoscale**, which lets you create autoscale conditions based on metrics and/or a time schedule.
+
+1. Choose **Custom autoscale**. Use this page to manage the autoscale conditions for your Azure WebPubSub service.
+
+### Default scale condition
+
+When you open custom autoscale settings for the first time, you'll see the **Default** scale condition already created for you. This scale condition is executed when none of the other scale conditions match the criteria set for them. You can't delete the **Default** condition, but you can rename it, change the rules, and change the action taken by autoscale.
+
+You can't set the default condition to autoscale on specific days or a date range. The default condition only supports scaling to a unit range. To scale according to a schedule, you'll need to add a new scale condition.
+
+Autoscale doesn't take effect until you save the default condition for the first time after selecting **Custom autoscale**.
+
+## Add or change a scale condition
+
+There are two options for how to scale your Azure WebPubSub resource:
+
+- **Scale based on a metric** - Scale within unit limits based on a dynamic metric. One or more scale rules are defined to set the criteria used to evaluate the metric.
+- **Scale to specific units** - Scale to a specific number of units based on a date range or recurring schedule.
+
+### Scale based on a metric
+
+The following procedure shows you how to add a condition to increase units (scale out) when the Connection Quota Utilization is greater than 70% and decrease units (scale in) when the Connection Quota Utilization is less than 20%. Increments or decrements are done between available units.
+
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Scale based on a metric** for **Scale mode**.
+1. Select **+ Add a rule**.
+ :::image type="content" source="./media/howto-scale-autoscale/default-autoscale.png" alt-text="Screenshot of custom rule based on a metric.":::
+
+1. On the **Scale rule** page, follow these steps:
+ 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
+ 1. Select an operator and threshold values. In this example, they're **Greater than** and **70** for **Metric threshold to trigger scale action**.
+ 1. Select an **operation** in the **Action** section. In this example, it's set to **Increase**.
+ 1. Then, select **Add**
+ :::image type="content" source="./media/howto-scale-autoscale/default-scale-out.png" alt-text="Screenshot of default autoscale rule screen.":::
+
+1. Select **+ Add a rule** again, and follow these steps on the **Scale rule** page:
+ 1. Select a metric from the **Metric name** drop-down list. In this example, it's **Connection Quota Utilization**.
+ 1. Select an operator and threshold values. In this example, they're **Less than** and **20** for **Metric threshold to trigger scale action**.
+ 1. Select an **operation** in the **Action** section. In this example, it's set to **Decrease**.
+ 1. Then, select **Add**
+ :::image type="content" source="./media/howto-scale-autoscale/default-scale-in.png" alt-text="Screenshot Connection Quota Utilization scale rule.":::
+
+1. Set the **minimum**, **maximum**, and **default** number of units.
+1. Select **Save** on the toolbar to save the autoscale setting.
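If you prefer to script the same condition, the older Az.Monitor cmdlets can express it. Treat the following as a hedged sketch rather than the documented procedure: cmdlet names changed in recent Az.Monitor versions (which use `New-AzAutoscaleScaleRuleObject` and `New-AzAutoscaleSetting`), the metric name `ConnectionQuotaUtilization` should be checked against your resource's metric definitions, and all resource names are placeholders.

```powershell
# Placeholder resource ID for the Web PubSub resource to autoscale.
$resourceId = "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup/providers/Microsoft.SignalRService/webPubSub/myWebPubSub"

# Scale out by one unit when average Connection Quota Utilization rises above 70%.
$scaleOut = New-AzAutoscaleRule -MetricName "ConnectionQuotaUtilization" -MetricResourceId $resourceId `
  -Operator GreaterThan -Threshold 70 -MetricStatistic Average -TimeGrain 00:01:00 -TimeWindow 00:10:00 `
  -ScaleActionDirection Increase -ScaleActionScaleType ChangeCount -ScaleActionValue 1 -ScaleActionCooldown 00:30:00

# Scale in by one unit when it drops below 20%.
$scaleIn = New-AzAutoscaleRule -MetricName "ConnectionQuotaUtilization" -MetricResourceId $resourceId `
  -Operator LessThan -Threshold 20 -MetricStatistic Average -TimeGrain 00:01:00 -TimeWindow 00:10:00 `
  -ScaleActionDirection Decrease -ScaleActionScaleType ChangeCount -ScaleActionValue 1 -ScaleActionCooldown 00:30:00

# Combine the rules into a profile with minimum, maximum, and default unit counts, then attach it.
$autoscaleProfile = New-AzAutoscaleProfile -Name "default" -DefaultCapacity 1 -MinimumCapacity 1 -MaximumCapacity 10 -Rule $scaleOut, $scaleIn

Add-AzAutoscaleSetting -Name "webpubsub-autoscale" -ResourceGroupName "myResourceGroup" -Location "eastus" `
  -TargetResourceId $resourceId -AutoscaleProfile $autoscaleProfile
```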
+
+### Scale to specific units
+
+Follow these steps to configure the rule to scale to a specific unit range.
+
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Scale to specific units** for **Scale mode**.
+1. For **Units**, select the number of default units.
+ :::image type="content" source="./media/howto-scale-autoscale/default-specific-units.png" alt-text="Screenshot of scale rule criteria.":::
+
+## Add more conditions
+
+The previous section showed you how to add a default condition for the autoscale setting. This section shows you how to add more conditions to the autoscale setting.
+
+1. On the **Scale out** page, select **Custom autoscale** for the **Choose how to scale your resource** option.
+1. Select **Add a scale condition** under the **Default** block.
+ :::image type="content" source="./media/howto-scale-autoscale/additional-add-condition.png" alt-text="Screenshot of custom scale rule screen.":::
+1. Confirm that the **Scale based on a metric** option is selected.
+1. Select **+ Add a rule** to add a rule to increase units when the **Connection Quota Utilization** goes above 70%. Follow steps from the [default condition](#default-scale-condition) section.
+1. Set the **minimum**, **maximum**, and **default** number of units.
+1. You can also set a **schedule** on a custom condition (but not on the default condition). You can either specify start and end dates for the condition, or select specific days of the week (Monday, Tuesday, and so on).
+ 1. If you select **Specify start/end dates**, select the **Timezone**, **Start date and time** and **End date and time** (as shown in the following image) for the condition to be in effect.
+ 1. If you select **Repeat specific days**, select the days of the week, timezone, start time, and end time when the condition should apply.
+
+## Next steps
+
+For more information about managing autoscale from the Azure CLI, see [**az monitor autoscale**](/cli/azure/monitor/autoscale?view=azure-cli-latest&preserve-view=true).
batch Batch Docker Container Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-docker-container-workloads.md
These images are only supported for use in Azure Batch pools and are geared for
You can also create custom images from VMs running Docker on one of the Linux distributions that is compatible with Batch. If you choose to provide your own custom Linux image, see the instructions in [Use a managed custom image to create a pool of virtual machines](batch-custom-images.md).
-For Docker support on a custom image, install [Docker Community Edition (CE)](https://www.docker.com/community-edition) or [Docker Enterprise Edition (EE)](https://www.docker.com/blog/docker-enterprise-edition/).
+For Docker support on a custom image, install [Docker Community Edition (CE)](https://www.docker.com/community-edition) or [Docker Enterprise Edition (EE)](https://docker-docs.netlify.app/ee/).
Additional considerations for using a custom Linux image:
cloud-services Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/mitigate-se.md
description: In this article, learn now to mitigate speculative execution side-c
documentationcenter: '' - tags: azure-resource-manager keywords: spectre,meltdown,specter vm-windows Previously updated : 11/12/2019 Last updated : 07/12/2022
cognitive-services Overview Vision Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Computer-vision/overview-vision-studio.md
To use Vision Studio, you'll need an Azure subscription and a resource for Cogni
1. Create an Azure Subscription if you don't have one already. You can [create one for free](https://azure.microsoft.com/free/ai/). 1. Go to the [Vision Studio website](https://portal.vision.cognitive.azure.com/). If it's your first time logging in, you'll see a popup window appear that prompts you to sign in to Azure and then choose or create a Vision resource. You have the option to skip this step and do it later.+ :::image type="content" source="./Images/vision-studio-wizard-1.png" alt-text="Screenshot of Vision Studio startup wizard."::: 1. Select **Choose resource**, then select an existing resource within your subscription. If you'd like to create a new one, select **Create a new resource**. Then enter information for your new resource, such as a name, location, and resource group.
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/quickstart.md
Previously updated : 06/30/2022 Last updated : 07/11/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Entity Linking using the client library and REST API
-Use this article to get started with Entity Linking using the client library and REST API. Follow these steps to try out examples code for mining text:
- ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/quickstart.md
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: using the Key Phrase Extraction client library and REST API
-Use this article to get started using Key Phrase Extraction using the client library and REST API. Follow these steps to try out examples code for mining text:
- ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/quickstart.md
Previously updated : 06/13/2022 Last updated : 07/11/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: using the Language Detection client library and REST API
-Use this article to get started with Language Detection using the client library and REST API. Follow these steps to try out examples code for mining text:
- ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/quickstart.md
Previously updated : 06/27/2022 Last updated : 07/11/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Detecting named entities (NER)
-Use this article to get started detecting entities in text, using the NER client library and REST API. Follow these steps to try out examples code for mining text:
- ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/quickstart.md
Previously updated : 06/13/2022 Last updated : 07/11/2022 ms.devlang: csharp, java, javascript, python zone_pivot_groups: programming-languages-text-analytics
-# Quickstart: Detect Personally Identifiable Information (PII)
-
-Use this article to get started detecting and redacting sensitive information in text, using the NER and PII client library and REST API. Follow these steps to try out examples code for mining text:
+# Quickstart: Detect Personally Identifiable Information (PII)
> [!NOTE] > This quickstart only covers PII detection in documents. To learn more about detecting PII in conversations, see [How to detect and redact PII in conversations](how-to-call-for-conversations.md).
cognitive-services Authoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/question-answering/how-to/authoring.md
Last updated 11/23/2021
The question answering Authoring API is used to automate common tasks like adding new question answer pairs, as well as creating, publishing, and maintaining projects/knowledge bases. > [!NOTE]
-> Authoring functionality is available via the REST API and [Authoring SDK (preview)](https://docs.microsoft.com/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
+> Authoring functionality is available via the REST API and [Authoring SDK (preview)](/dotnet/api/overview/azure/ai.language.questionanswering-readme-pre). This article provides examples of using the REST API with cURL. For full documentation of all parameters and functionality available consult the [REST API reference content](/rest/api/cognitiveservices/questionanswering/question-answering-projects).
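As a quick orientation before the cURL examples, here's a hedged PowerShell sketch of calling the authoring REST API to list projects. The route and `api-version` value are assumptions to verify against the REST API reference, and the resource name and key are placeholders.

```powershell
# Placeholders: supply your own Language resource endpoint and key.
$endpoint = "https://<your-language-resource>.cognitiveservices.azure.com"
$key      = "<your-resource-key>"

# List question answering projects; confirm the route and api-version in the REST API reference.
Invoke-RestMethod -Method Get `
  -Uri "$endpoint/language/authoring/query-knowledgebases/projects?api-version=2021-10-01" `
  -Headers @{ "Ocp-Apim-Subscription-Key" = $key }
```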
## Prerequisites
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/quickstart.md
Previously updated : 06/21/2022 Last updated : 07/11/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: Sentiment analysis and opinion mining
-Use this article to get started detecting sentiment and opinions in text. Follow these steps to try out examples code for mining text:
- ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/quickstart.md
Previously updated : 07/06/2022 Last updated : 07/11/2022 ms.devlang: csharp, java, javascript, python
zone_pivot_groups: programming-languages-text-analytics
# Quickstart: using document summarization and conversation summarization (preview)
-Use this article to get started with document summarization and conversation summarization using the client library and REST API. Follow these steps to try out examples code for mining text:
- ::: zone pivot="programming-language-csharp" [!INCLUDE [C# quickstart](includes/quickstarts/csharp-sdk.md)]
cognitive-services Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/quickstart.md
Previously updated : 06/27/2022 Last updated : 07/11/2022 ms.devlang: csharp, java, javascript, python
keywords: text mining, health, text analytics for health
zone_pivot_groups: programming-languages-text-analytics
-# Quickstart: using Text Analytics for health client library and REST API
-
-Use this article to get started with Text Analytics for health using the client library and REST API. Follow these steps to try out examples code for mining text:
+# Quickstart: Using Text Analytics for health client library and REST API
> [!IMPORTANT] > Fast Healthcare Interoperability Resources (FHIR) structuring is available for preview using the Language REST API. The client libraries are not currently supported.
cognitive-services Fine Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/fine-tuning.md
When you're done with your fine-tuned model, you can delete the deployment and f
### Delete your model deployment
-To delete a deployment, you can use the [Azure CLI](/cli/azure/cognitiveservices/account/deployment?view=azure-cli-latest#az-cognitiveservices-account-deployment-delete), Azure OpenAI Studio or [REST APIs](../reference.md#delete-a-deployment). here's an example of how to delete your deployment with the Azure CLI:
+To delete a deployment, you can use the [Azure CLI](/cli/azure/cognitiveservices/account/deployment?view=azure-cli-latest&preserve-view=true#az-cognitiveservices-account-deployment-delete), Azure OpenAI Studio, or [REST APIs](../reference.md#delete-a-deployment). Here's an example of how to delete your deployment with the Azure CLI:
```console az cognitiveservices account deployment delete --name
cognitive-services Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/how-to/managed-identity.md
In the following sections, you'll use the Azure CLI to assign roles, and obtain
- An Azure subscription - Access granted to service in the desired Azure subscription. -- Azure CLI. [Installation Guide](https://docs.microsoft.com/cli/azure/install-azure-cli)
+- Azure CLI. [Installation Guide](/cli/azure/install-azure-cli)
- The following python libraries: os, requests, json ## Sign into the Azure CLI
communication-services Credentials Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/credentials-best-practices.md
const refreshAadToken = async function (abortSignal, username) {
let account = (await publicClientApplication.getTokenCache().getAllAccounts()).find(u => u.username === username); const renewRequest = {
- scopes: ["https://auth.msft.communication.azure.com/Teams.ManageCalls"],
+ scopes: [
+ "https://auth.msft.communication.azure.com/Teams.ManageCalls",
+ "https://auth.msft.communication.azure.com/Teams.ManageChats"
+ ],
account: account, forceRefresh: forceRefresh };
const refreshAadToken = async function (abortSignal, username) {
// Make sure the token has at least 10-minute lifetime and if not, force-renew it if (tokenResponse.expiresOn < (Date.now() + (10 * 60 * 1000))) { const renewRequest = {
- scopes: ["https://auth.msft.communication.azure.com/Teams.ManageCalls"],
+ scopes: [
+ "https://auth.msft.communication.azure.com/Teams.ManageCalls",
+ "https://auth.msft.communication.azure.com/Teams.ManageChats"
+ ],
account: account, forceRefresh: true // Force-refresh the token };
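The snippets above show only the updated scope arrays; as a rough, hedged sketch of how such a refresh callback might look end to end with the `@azure/msal-browser` package (the package choice and helper names are assumptions for illustration, not the article's exact code):

```typescript
// Illustrative sketch: silently renew the Azure AD token for both Teams scopes,
// force-refreshing when less than 10 minutes of lifetime remain.
import { PublicClientApplication, AuthenticationResult } from "@azure/msal-browser";

const teamsScopes = [
  "https://auth.msft.communication.azure.com/Teams.ManageCalls",
  "https://auth.msft.communication.azure.com/Teams.ManageChats",
];

async function refreshAadToken(
  publicClientApplication: PublicClientApplication,
  username: string
): Promise<string> {
  const account = publicClientApplication
    .getAllAccounts()
    .find((a) => a.username === username);
  if (!account) {
    throw new Error(`No cached account found for ${username}`);
  }

  // First attempt: silent renewal from the cache.
  let result: AuthenticationResult = await publicClientApplication.acquireTokenSilent({
    scopes: teamsScopes,
    account,
  });

  // Make sure the token has at least a 10-minute lifetime; if not, force-renew it.
  const tenMinutesInMs = 10 * 60 * 1000;
  if (result.expiresOn && result.expiresOn.getTime() < Date.now() + tenMinutesInMs) {
    result = await publicClientApplication.acquireTokenSilent({
      scopes: teamsScopes,
      account,
      forceRefresh: true,
    });
  }

  return result.accessToken;
}
```

Both renewal paths request the same two scopes, which is the point of the change: a token renewed for calling only would not carry the chat permission.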
communication-services Custom Teams Endpoint Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-authentication-overview.md
Before we begin:
- The Azure Communication Services resource admin needs to grant Alice permission to perform her role. Learn more about [Azure RBAC role assignment](../../../role-based-access-control/role-assignments-portal.md). Steps:
-1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A'. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get an access token for Alice: The customized Teams application performs control plane logic, using artifact 'A'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling.
+1. Authenticate Alice using Azure Active Directory: Alice is authenticated using a standard OAuth flow with *Microsoft Authentication Library (MSAL)*. If authentication is successful, the client application receives an Azure AD access token, with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Tokens are outlined later in this article. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The customized Teams application performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This produces Azure Communication Services access token 'D' and gives Alice access. This access token can also be used for data plane actions in Azure Communication Services, like Calling.
1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's customized Teams app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about [developing custom Teams clients](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Artifacts:-- Artifact A
+- Artifact A1
- Type: Azure AD access token - Audience: _`Azure Communication Services`_ – control plane - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
- - Permission: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_
+ - Permissions: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_, _`https://auth.msft.communication.azure.com/Teams.ManageChats`_
+- Artifact A2
+ - Type: Object ID of an Azure AD user
+ - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+- Artifact A3
+ - Type: Azure AD application ID
+ - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
- Artifact D - Type: Azure Communication Services access token - Audience: _`Azure Communication Services`_ – data plane
Before we begin:
- Alice or her Azure AD administrator needs to give Contoso's Azure Active Directory application consent before the first attempt to sign in. Learn more about [consent](../../../active-directory/develop/consent-framework.md). Steps:
-1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's customized Teams application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
-1. Get an access token for Alice: The Contoso application performs control plane logic, using artifact 'A'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling.
+1. Authenticate Alice using the Fabrikam application: Alice is authenticated through Fabrikam's customized Teams application. A standard OAuth flow with Microsoft Authentication Library (MSAL) is used. If authentication is successful, the client application, the Contoso app in this case, receives an Azure AD access token with a value of 'A1' and an Object ID of an Azure AD user with a value of 'A2'. Token details are outlined below. Authentication from the developer perspective is explored in this [quickstart](../../quickstarts/manage-teams-identity.md).
+1. Get an access token for Alice: The Contoso application performs control plane logic, using artifacts 'A1', 'A2' and 'A3'. This generates Azure Communication Services access token 'D' for Alice within the Contoso application. This access token can be used for data plane actions in Azure Communication Services, like Calling.
1. Call Bob: Alice makes a call to Teams user Bob, with Fabrikam's customized Teams app. The call takes place via the Calling SDK with an Azure Communication Services access token. Learn more about developing custom Teams apps [in this quickstart](../../quickstarts/voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md). Artifacts:-- Artifact A
+- Artifact A1
- Type: Azure AD access token - Audience: Azure Communication Services – control plane - Azure AD application ID: Contoso's _`Azure AD application ID`_
- - Permission: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_
+ - Permissions: _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_, _`https://auth.msft.communication.azure.com/Teams.ManageChats`_
+- Artifact A2
+ - Type: Object ID of an Azure AD user
+ - Azure AD application ID: Fabrikam's _`Azure AD application ID`_
+- Artifact A3
+ - Type: Azure AD application ID
+ - Azure AD application ID: Contoso's _`Azure AD application ID`_
- Artifact B - Type: Custom Contoso authentication artifact - Artifact C
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md
Optionally, you can also use custom Teams endpoints to integrate chat capabiliti
| Permission | Display string | Description | Admin consent required | Microsoft account supported | |: |: |: |: |: | | _`https://auth.msft.communication.azure.com/Teams.ManageCalls`_ | Manage calls in Teams | Start, join, forward, transfer, or leave Teams calls and update call properties. | No | No |
+| _`https://auth.msft.communication.azure.com/Teams.ManageChats`_ | Manage chats in Teams | Create, read, update, and delete 1:1 or group chat threads on behalf of the signed-in user. Read, send, update, and delete messages in chat threads on behalf of the signed-in user. | No | No |
### Application permissions
communication-services Troubleshooting Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/troubleshooting-info.md
chat_client = ChatClient(
## Access your server call ID
-When troubleshooting issues with the Call Automation SDK, like call recording and call management problems, you will need to collect the Server Call ID. This ID can be collected using the ```getServerCallId``` method.
+When troubleshooting issues with the Call Automation SDK, like call recording and call management problems, you'll need to collect the Server Call ID. This ID can be collected using the ```getServerCallId``` method.
#### JavaScript ```
Sometimes you also need to provide immutable resource ID of your Communication S
1. From **Resource JSON** page, copy the `immutableResourceId` value, and provide it to your support team. :::image type="content" source="./media/troubleshooting/communication-resource-id-json.png" alt-text="Screenshot of Resource JSON.":::
+## Verification of Teams license eligibility to use Azure Communication Services support for Teams users
+
+There are two ways to verify your Teams License eligibility to use Azure Communication Services support for Teams users:
+
+* **Verification via Teams web client**
+* **Checking your current Teams license via Microsoft Graph API**
+
+#### Verification via Teams web client
+To verify your Teams License eligibility via the Teams web client, follow the steps listed below:
+
+1. Open your browser and navigate to [Teams web client](https://teams.microsoft.com/).
+1. Sign in with credentials that have a valid Teams license.
+1. If the authentication is successful and you remain in the https://teams.microsoft.com/ domain, then your Teams License is eligible. If authentication fails or you're redirected to the https://www.teams.live.com domain, then your Teams License isn't eligible to use Azure Communication Services support for Teams users.
+
+#### Checking your current Teams license via Microsoft Graph API
+You can find your current Teams license by using the [licenseDetails](https://docs.microsoft.com/graph/api/resources/licensedetails) Microsoft Graph API, which returns the licenses assigned to a user. Follow the steps below to use the Graph Explorer tool to view the licenses assigned to a user:
+
+1. Open your browser and navigate to [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer)
+1. Sign in to Graph Explorer using your credentials.
+ :::image type="content" source="./media/troubleshooting/graph-explorer-sign-in.png" alt-text="Screenshot of how to sign in to Graph Explorer.":::
+1. In the query box, enter the following API and click **Run Query**:
+ <!-- { "blockType": "request" } -->
+ ```http
+ https://graph.microsoft.com/v1.0/me/licenseDetails
+ ```
+ :::image type="content" source="./media/troubleshooting/graph-explorer-query-box.png" alt-text="Screenshot of how to enter API in Graph Explorer.":::
+
+ Or you can query for a particular user by providing the user ID using the following API:
+ <!-- { "blockType": "request" } -->
+ ```http
+ https://graph.microsoft.com/v1.0/users/{id}/licenseDetails
+ ```
+1. The **Response preview** pane displays output as follows:
+
+ Note that the response object shown here might be shortened for readability.
+ <!-- {
+ "blockType": "response",
+ "truncated": true,
+ "isCollection": true
+ } -->
+ ```http
+ {
+ "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#users('071cc716-8147-4397-a5ba-b2105951cc0b')/assignedLicenses",
+ "value": [
+ {
+ "skuId": "b05e124f-c7cc-45a0-a6aa-8cf78c946968",
+ "servicePlans":[
+ {
+ "servicePlanId":"57ff2da0-773e-42df-b2af-ffb7a2317929",
+ "servicePlanName":"TEAMS1",
+ "provisioningStatus":"Success",
+ "appliesTo":"User"
+ }
+ ]
+ }
+ ]
+ }
+ ```
+1. Find the license detail where the property `servicePlanName` has one of the values in the [Eligible Teams Licenses table](../quickstarts/eligible-teams-licenses.md#eligible-teams-licenses).
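If you want to automate this check instead of using Graph Explorer, a minimal sketch along the following lines could work. It assumes you already hold a Microsoft Graph access token for the user; the function name is illustrative, and the eligible plan names are taken from the Eligible Teams Licenses table linked in the previous step.

```typescript
// Illustrative sketch: call the Microsoft Graph licenseDetails API and check
// whether any assigned service plan is one of the eligible Teams plans.
const eligibleTeamsPlans = new Set([
  "TEAMS1",
  "TEAMS_FREE",
  "TEAMS_GOV",
  "TEAMS_GCCHIGH",
  "TEAMS_AR_GCCHIGH",
  "TEAMS_DOD",
  "TEAMS_AR_DOD",
]);

interface LicenseDetail {
  skuId: string;
  servicePlans: { servicePlanName: string; provisioningStatus: string }[];
}

async function isTeamsLicenseEligible(graphAccessToken: string): Promise<boolean> {
  const response = await fetch("https://graph.microsoft.com/v1.0/me/licenseDetails", {
    headers: { Authorization: `Bearer ${graphAccessToken}` },
  });
  const body: { value: LicenseDetail[] } = await response.json();

  // Eligible if any assigned license contains at least one eligible Teams service plan.
  return body.value.some((license) =>
    license.servicePlans.some((plan) => eligibleTeamsPlans.has(plan.servicePlanName))
  );
}
```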
++ ## Calling SDK error codes The Azure Communication Services Calling SDK uses the following error codes to help you troubleshoot calling issues. These error codes are exposed through the `call.callEndReason` property after a call ends.
The Azure Communication Services Chat SDK uses the following error codes to help
| -- | | | | 401 | Unauthorized | Ensure that your Communication Services token is valid and not expired. | | 403 | Forbidden | Ensure that the initiator of the request has access to the resource. |
-| 429 | Too many requests | Ensure that your client-side application handles this scenario in a user-friendly manner. If the error persists please file a support request. |
+| 429 | Too many requests | Ensure that your client-side application handles this scenario in a user-friendly manner. If the error persists, please file a support request. |
| 503 | Service Unavailable | File a support request through the Azure portal. | ## SMS error codes
The Azure Communication Services SMS SDK uses the following error codes to help
| Error code | Description | Action to take | | -- | | | | 2000 | Message Delivered Successfully | |
-| 4000 | Message is rejected due to fraud detection | Ensure you are not exceeding the maximum number of messages allowed for your number|
+| 4000 | Message is rejected due to fraud detection | Ensure you aren't exceeding the maximum number of messages allowed for your number|
| 4001 | Message is rejected due to invalid Source/From number format| Ensure the To number is in E.164 format and From number format is in E.164 or Short code format | | 4002 | Message is rejected due to invalid Destination/To number format| Ensure the To number is in E.164 format |
-| 4003 | Message failed to deliver due to unsupported destination| Check if the destination you are trying to send to is supported |
-| 4004 | Message failed to deliver since Destination/To number does not exist| Ensure the To number you are sending to is valid |
+| 4003 | Message failed to deliver due to unsupported destination| Check if the destination you're trying to send to is supported |
+| 4004 | Message failed to deliver since Destination/To number doesn't exist| Ensure the To number you're sending to is valid |
| 4005 | Message is blocked by Destination carrier| |
-| 4006 | The Destination/To number is not reachable| Try re-sending the message at a later time |
+| 4006 | The Destination/To number isn't reachable| Try resending the message at a later time |
| 4007 | The Destination/To number has opted out of receiving messages from you| Mark the Destination/To number as opted out so that no further message attempts are made to the number|
-| 4008 | You have exceeded the maximum number of messages allowed for your profile| Ensure you are not exceeding the maximum number of messages allowed for your number or use queues to batch the messages |
-| 5000 | Message failed to deliver, Please reach out Microsoft support team for more details| File a support request through the Azure portal |
+| 4008 | You've exceeded the maximum number of messages allowed for your profile| Ensure you aren't exceeding the maximum number of messages allowed for your number or use queues to batch the messages |
+| 5000 | Message failed to deliver. Please reach out to the Microsoft support team for more details| File a support request through the Azure portal |
| 5001 | Message failed to deliver due to temporary unavailability of application/system| |
-| 5002 | Message Delivery Timeout| Try re-sending the message |
-| 9999 | Message failed to deliver due to unknown error/failure| Try re-sending the message |
+| 5002 | Message Delivery Timeout| Try resending the message |
+| 9999 | Message failed to deliver due to unknown error/failure| Try resending the message |
## Related information
communication-services Eligible Teams Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/eligible-teams-licenses.md
+
+ Title: Teams license requirements to use Azure Communication Services support for Teams users
+description: This article describes Teams License requirements and how users can find their current Teams license.
++++ Last updated : 06/16/2022++++++
+# Teams License requirements to use Azure Communication Services support for Teams users
+
+To use Azure Communication Services support for Teams users, you need an Azure Active Directory instance with users that have a valid Teams license. Furthermore, a license must be assigned to the administrators or relevant users. This article describes the Teams license requirements to use Azure Communication Services support for Teams users.
+
+### Eligible Teams licenses
+
+Ensure that your Azure Active Directory users have at least one of the following eligible Teams licenses:
+
+| Service Plan (friendly names) | Service Plan ID |
+|: |: |
+| TEAMS1 | 57ff2da0-773e-42df-b2af-ffb7a2317929 |
+| TEAMS_FREE | 4fa4026d-ce74-4962-a151-8e96d57ea8e4 |
+| TEAMS_GOV | 304767db-7d23-49e8-a945-4a7eb65f9f28 |
+| TEAMS_GCCHIGH | 495922d5-f138-498b-8967-4acdcdfb2a74 |
+| TEAMS_AR_GCCHIGH | 9953b155-8aef-4c56-92f3-72b0487fce41 |
+| TEAMS_DOD | ec0dd2de-a877-4059-a9b8-5838b5629b2a |
+| TEAMS_AR_DOD | fd500458-c24c-478e-856c-a6067a8376cd |
+
+For more information, see [Azure AD Product names and service plan identifiers](../../active-directory/enterprise-users/licensing-service-plan-reference.md).
+
+### How to find current Teams license
+
+You can find your current Teams license by using the [licenseDetails](https://docs.microsoft.com/graph/api/resources/licensedetails) Microsoft Graph API, which returns the licenses assigned to a user.
+
+For more information on verification for eligibility, see [Verification of Teams license eligibility](../concepts/troubleshooting-info.md#verification-of-teams-license-eligibility-to-use-azure-communication-services-support-for-teams-users).
+
+## Next steps
+
+The following articles might be of interest to you:
+
+- Try [quickstart for authentication of Teams users](./manage-teams-identity.md).
+- Try [quickstart for calling to a Teams user](./voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).
+- Learn more about [Custom Teams endpoint](../concepts/teams-endpoint.md).
+- Learn more about [Teams interoperability](../concepts/teams-interop.md).
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
In this quickstart, you'll build a .NET console application to authenticate a Mi
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An active Azure Communication Services resource and connection string. For more information, see [Create an Azure Communication Services resource](./create-communication-resource.md).-- An Azure Active Directory instance with users that have a Teams license.
+- An Azure Active Directory instance with users that have a Teams license. For more information, see [Teams License requirements](./eligible-teams-licenses.md).
## Introduction
The Administrator role has extended permissions in Azure AD. Members of this rol
![Administrator actions to enable custom Teams endpoint experience](./media/teams-identities/teams-identity-admin-overview.svg) 1. The Contoso Administrator creates or selects an existing *application* in Azure Active Directory. The property *Supported account types* defines whether users from various tenants can authenticate to the application. The property *Redirect URI* redirects a successful authentication request to the Contoso *server*.
-1. The Contoso Administrator adds API permission to `Teams.ManageCalls` from Communication Services.
+1. The Contoso Administrator adds API permissions to `Teams.ManageCalls` and `Teams.ManageChats` from Communication Services.
1. The Contoso Administrator allows public client flow for the application. 1. The Contoso Administrator creates or selects existing communication services, which will be used for authentication of the exchanging requests. Azure AD user tokens will be exchanged for an access token of Teams user. For more information, see [Create and manage Communication Services resources](./create-communication-resource.md).
-1. The Fabrikam Administrator grants Communication Services `Teams.ManageCalls` permission to the Contoso application. This step is required if only Fabrikam Administrator can grant access to the application with the `Teams.ManageCalls` permission.
+1. The Fabrikam Administrator grants Communication Services `Teams.ManageCalls` and `Teams.ManageChats` permissions to the Contoso application. This step is required if only the Fabrikam Administrator can grant access to the application with the `Teams.ManageCalls` and `Teams.ManageChats` permissions.
### Step 1: Create an Azure AD application registration or select an Azure AD application
-Users must be authenticated against Azure AD applications with the Azure Communication Service Teams.ManageCalls permission. If you don't have an existing application that you want to use for this quickstart, you can create a new application registration.
+Users must be authenticated against Azure AD applications with the Azure Communication Services Teams.ManageCalls and Teams.ManageChats permissions. If you don't have an existing application that you want to use for this quickstart, you can create a new application registration.
The following application settings influence the experience: - The *Supported account types* property defines whether the application is single tenant ("Accounts in this organizational directory only") or multitenant ("Accounts in any organizational directory"). For this scenario, you can use multitenant.
On the **Authentication** pane of your application, you can see a configured pla
### Step 3: Add the Communication Services permissions in the application
-The application must declare Teams.ManageCalls permission to have access to Teams calling capabilities in the Tenant. Teams user would be requesting an Azure AD user token with this permission for token exchange.
+The application must declare the Teams.ManageCalls and Teams.ManageChats permissions to have access to Teams calling and chat capabilities in the tenant. The Teams user requests an Azure AD user token with these permissions for the token exchange.
-1.Navigate to your Azure AD app in the Azure portal and select **API permissions**
+1. Navigate to your Azure AD app in the Azure portal and select **API permissions**
1. Select **Add Permissions** 1. In the **Add Permissions** menu, select **Azure Communication Services**
-1. Select the permission **Teams.ManageCalls** and select **Add permissions**
+1. Select the permissions **Teams.ManageCalls** and **Teams.ManageChats**, then select **Add permissions**
-![Add Teams.ManageCalls permission to the Azure Active Directory application created in previous step](./media/active-directory-permissions.png)
+![Add Teams.ManageCalls and Teams.ManageChats permissions to the Azure Active Directory application created in the previous step.](./media/active-directory-permissions.png)
### Step 4: Create or select a Communication Services resource
If you want to create a new Communication Services resource, see [Create and man
### Step 5: Provide Administrator consent
-Azure Active Directory tenant can be configured, to require Azure AD administrator consent for the Teams.ManageCalls permission of the application. In such a case, the Azure AD Administrator must grant permission to the Contoso application for Communication Services Teams.ManageCalls. The Fabrikam Azure AD Administrator provides consent via a unique URL.
+An Azure Active Directory tenant can be configured to require Azure AD administrator consent for the Teams.ManageCalls and Teams.ManageChats permissions of the application. In such a case, the Azure AD Administrator must grant permissions to the Contoso application for Communication Services Teams.ManageCalls and Teams.ManageChats. The Fabrikam Azure AD Administrator provides consent via a unique URL.
+
+The following roles can provide consent on behalf of a company:
+- Global admin
+- Application admin
+- Cloud application admin
+
+If you want to check roles in the Azure portal, see [List Azure role assignments](../../role-based-access-control/role-assignments-list-portal.md).
To construct an Administrator consent URL, the Fabrikam Azure AD Administrator does the following steps:
The service principal of the Contoso application in the Fabrikam tenant is creat
1. Select the service principal by using the required name. 1. Go to the **Permissions** pane.
-You can see that the status of the Communication Services Teams.ManageCalls permission is *Granted for {Directory_name}*.
+You can see that the status of the Communication Services Teams.ManageCalls and Teams.ManageChats permissions is *Granted for {Directory_name}*.
## Developer actions
The developer's required actions are shown in following diagram:
![Diagram of developer actions to enable the custom Teams endpoint experience.](./media/teams-identities/teams-identity-developer-overview.svg)
-1. The Contoso developer configures the Microsoft Authentication Library (MSAL) to authenticate the user for the application that was created earlier by the Administrator for Communication Services Teams.ManageCalls permission.
+1. The Contoso developer configures the Microsoft Authentication Library (MSAL) to authenticate the user for the application that was created earlier by the Administrator for Communication Services Teams.ManageCalls and Teams.ManageChats permissions.
1. The Contoso developer initializes the Communication Services Identity SDK and exchanges the incoming Azure AD user token for an access token of the Teams user via the identity SDK, as sketched below. The access token of the Teams user is then returned to the *client application*. By using the MSAL, developers can acquire Azure AD user tokens from the Microsoft Identity platform endpoint to authenticate users and access secure web APIs. It can be used to provide secure access to Communication Services. The MSAL supports many different application architectures and platforms, including .NET, JavaScript, Java, Python, Android, and iOS.
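As a hedged illustration of that exchange step, assuming the JavaScript/TypeScript `@azure/communication-identity` SDK and its `getTokenForTeamsUser` method (taken here to accept the Azure AD access token, the application's client ID, and the user's object ID), the server-side logic might look roughly like this; the function and parameter names are assumptions, not the quickstart's code:

```typescript
// Hedged sketch: exchange an Azure AD user token for a Communication Services
// access token for the Teams user, using the Communication Services Identity SDK.
import { CommunicationIdentityClient } from "@azure/communication-identity";

async function exchangeForTeamsUserToken(
  connectionString: string, // Communication Services resource connection string
  aadAccessToken: string,   // Azure AD access token acquired via MSAL
  appClientId: string,      // Azure AD application (client) ID
  userObjectId: string      // Object ID of the Azure AD user
): Promise<string> {
  const identityClient = new CommunicationIdentityClient(connectionString);

  const { token } = await identityClient.getTokenForTeamsUser({
    teamsUserAadToken: aadAccessToken,
    clientId: appClientId,
    userObjectId: userObjectId,
  });

  // Return the access token of the Teams user to the client application.
  return token;
}
```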
The user represents the Fabrikam users of the Contoso application. The user expe
![Diagram of user actions to enable the custom Teams endpoint experience.](./media/teams-identities/teams-identity-user-overview.svg) 1. The Fabrikam user uses the Contoso *client application* and is prompted to authenticate.
-1. The Contoso *client application* uses the MSAL to authenticate the user against the Fabrikam Azure AD tenant for the Contoso application with Communication Services Teams.ManageCalls permission.
+1. The Contoso *client application* uses the MSAL to authenticate the user against the Fabrikam Azure AD tenant for the Contoso application with Communication Services Teams.ManageCalls and Teams.ManageChats permissions.
1. Authentication is redirected to the *server*, as defined in the property *Redirect URI* in the MSAL and the Contoso application. 1. The Contoso *server* exchanges the Azure AD user token for the access token of Teams user by using the Communication Services Identity SDK and returns the access token of Teams user to the *client application*.
confidential-computing Virtual Machine Solutions Amd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-solutions-amd.md
You can use the [Azure CLI](/cli/azure/install-azure-cli) with your confidential
To see a list of confidential VM sizes, run the following command. Set the `vm_series` variable to the series you want to use. For example, `DCASv5`, `ECASv5`, `DCADSv5`, or `ECADSv5`. The output shows information about available regions and availability zones. ```azurecli-interactive
-az vm list-skus `
- --size dc `
- --query "[?family=='standard<vm-series>Family'].{name:name,locations:locationInfo[0].location,AZ_a:locationInfo[0].zones[0],AZ_b:locationInfo[0].zones[1],AZ_c:locationInfo[0].zones[2]}" `
- --all `
+vm_series='DCASv5'
+az vm list-skus \
+ --size dc \
+ --query "[?family=='standard${vm_series}Family'].{name:name,locations:locationInfo[0].location,AZ_a:locationInfo[0].zones[0],AZ_b:locationInfo[0].zones[1],AZ_c:locationInfo[0].zones[2]}" \
+ --all \
--output table ``` For a more detailed list, run the following command instead: ```azurecli-interactive
-az vm list-skus `
- --size dc `
- --query "[?family=='standard<vm-series>Family']"
+vm_series='DCASv5'
+az vm list-skus \
+ --size dc \
+ --query "[?family=='standard${vm_series}Family']"
``` ## Deployment considerations
cosmos-db Migrate Containers Partitioned To Nonpartitioned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/sql/migrate-containers-partitioned-to-nonpartitioned.md
The following example shows a sample code to create a document with the system d
**JSON representation of the document**
+### [.NET SDK V3](#tab/dotnetv3)
+ ```csharp DeviceInformationItem = new DeviceInformationItem {
await migratedContainer.Items.ReadItemAsync<DeviceInformationItem>(
```
-For the complete sample on how to repartition the documents, see the [.Net samples][1] GitHub repository.
+### [Java SDK V4](#tab/javav4)
+
+```java
+static class Family {
+ public String id;
+ public String firstName;
+ public String lastName;
+ public String _partitionKey;
+
+ public Family(String id, String firstName, String lastName, String _partitionKey) {
+ this.id = id;
+ this.firstName = firstName;
+ this.lastName = lastName;
+ this._partitionKey = _partitionKey;
+ }
+}
+
+...
+
+CosmosDatabase cosmosDatabase = cosmosClient.getDatabase("testdb");
+CosmosContainer cosmosContainer = cosmosDatabase.getContainer("testcontainer");
+
+// Create single item
+Family family = new Family("id-1", "John", "Doe", "Doe");
+cosmosContainer.createItem(family, new PartitionKey(family._partitionKey), new CosmosItemRequestOptions());
+
+// Create items through bulk operations
+family = new Family("id-2", "Jane", "Doe", "Doe");
+CosmosItemOperation createItemOperation = CosmosBulkOperations.getCreateItemOperation(family,
+ new PartitionKey(family._partitionKey));
+cosmosContainer.executeBulkOperations(Collections.singletonList(createItemOperation));
+```
+
+For the complete sample, see the [Java samples][2] GitHub repository.
+
+## Migrate the documents
+
+While the container definition is enhanced with a partition key property, the documents within the container aren't automatically migrated, which means the system partition key property path `/_partitionKey` isn't automatically added to the existing documents. You need to repartition the existing documents by reading the documents that were created without a partition key and rewriting them with the `_partitionKey` property.
+
+## Access documents that don't have a partition key
+
+Applications can access the existing documents that don't have a partition key by using the special system property "PartitionKey.None", which is the value of the non-migrated documents. You can use this property in all CRUD and query operations. The following example shows how to read a single document that doesn't have a partition key.
+
+```java
+CosmosItemResponse<JsonNode> cosmosItemResponse =
+ cosmosContainer.readItem("itemId", PartitionKey.NONE, JsonNode.class);
+```
+
+For the complete sample on how to repartition the documents, see the [Java samples][2] GitHub repository.
++ ## Compatibility with SDKs
If new items are inserted with different values for the partition key, querying
* If all you know is the number of vcores and servers in your existing database cluster, read about [estimating request units using vCores or vCPUs](../convert-vcore-to-request-unit.md) * If you know typical request rates for your current database workload, read about [estimating request units using Azure Cosmos DB capacity planner](estimate-ru-with-capacity-planner.md)
-[1]: https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/NonPartitionContainerMigration
+[1]: https://github.com/Azure/azure-cosmos-dotnet-v3/tree/master/Microsoft.Azure.Cosmos.Samples/Usage/NonPartitionContainerMigration
+[2]: https://github.com/Azure-Samples/azure-cosmos-java-sql-api-samples/tree/main/src/main/java/com/azure/cosmos/examples/nonpartitioncontainercrud
cosmos-db Create Table Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/create-table-dotnet.md
Title: 'Quickstart: Table API with .NET - Azure Cosmos DB'
-description: This quickstart shows how to access the Azure Cosmos DB Table API from a .NET application using the Azure.Data.Tables SDK
-
+ Title: Quickstart - Azure Cosmos DB Table API for .NET
+description: Learn how to build a .NET app to manage Azure Cosmos DB Table API resources in this quickstart.
++
+ms.devlang: dotnet
Previously updated : 09/26/2021--- Last updated : 06/24/2022+
-# Quickstart: Build a Table API app with .NET SDK and Azure Cosmos DB
-
+# Quickstart: Azure Cosmos DB Table API for .NET
[!INCLUDE[appliesto-table-api](../includes/appliesto-table-api.md)]
-This quickstart shows how to access the Azure Cosmos DB [Table API](introduction.md) from a .NET application. The Cosmos DB Table API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. Because data is stored in a schemaless design, new properties (columns) are automatically added to the table when an object with a new attribute is added to the table.
-
-.NET applications can access the Cosmos DB Table API using the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) NuGet package. The [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables/) package is a [.NET Standard 2.0](/dotnet/standard/net-standard) library that works with both .NET Framework (4.7.2 and later) and .NET Core (2.0 and later) applications.
-
-## Prerequisites
-
-The sample application is written in [.NET Core 3.1](https://dotnet.microsoft.com/download/dotnet/3.1), though the principles apply to both .NET Framework and .NET Core applications. You can use either [Visual Studio](https://www.visualstudio.com/downloads/), [Visual Studio for Mac](https://visualstudio.microsoft.com/vs/mac/), or [Visual Studio Code](https://code.visualstudio.com/) as an IDE.
--
-## Sample application
-
-The sample application for this tutorial may be cloned or downloaded from the repository [https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet](https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet). Both a starter and completed app are included in the sample repository.
-
-```bash
-git clone https://github.com/Azure-Samples/msdocs-azure-data-tables-sdk-dotnet
-```
-
-The sample application uses weather data as an example to demonstrate the capabilities of the Table API. Objects representing weather observations are stored and retrieved using the Table API, including storing objects with additional properties to demonstrate the schemaless capabilities of the Table API.
--
-## 1 - Create an Azure Cosmos DB account
-
-You first need to create a Cosmos DB Tables API account that will contain the table(s) used in your application. This can be done using the Azure portal, Azure CLI, or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-Log in to the [Azure portal](https://portal.azure.com/) and follow these steps to create an Cosmos DB account.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find Cosmos DB accounts in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2-240px.png" alt-text="A screenshot showing the Create button location on the Cosmos DB accounts page in Azure." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3-240px.png" alt-text="A screenshot showing the Azure Table option as the correct option to select." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-3.png"::: |
-| [!INCLUDE [Create cosmos db account step 1](./includes/create-table-dotnet/create-cosmos-db-acct-4.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4-240px.png" alt-text="A screenshot showing how to fill out the fields on the Cosmos DB Account creation page." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-account-table-api-4.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
+This quickstart shows how to get started with the Azure Cosmos DB Table API from a .NET application. The Cosmos DB Table API is a schemaless data store allowing applications to store structured NoSQL data in the cloud. You'll learn how to create tables and rows, and perform basic tasks within your Cosmos DB resource using the [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/).
-Cosmos DB accounts are created using the [az cosmosdb create](/cli/azure/cosmosdb#az-cosmosdb-create) command. You must include the `--capabilities EnableTable` option to enable table storage within your Cosmos DB. As all Azure resource must be contained in a resource group, the following code snippet also creates a resource group for the Cosmos DB account.
+> [!NOTE]
+> The [example code snippets](https://github.com/Azure-Samples/cosmos-db-table-api-dotnet-samples) are available on GitHub as a .NET project.
-Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Cosmos DB account names must also be unique across Azure.
+[Table API reference documentation](/azure/storage/tables) | [Azure.Data.Tables Package (NuGet)](https://www.nuget.org/packages/Azure.Data.Tables/)
-Azure CLI commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with the [Azure CLI installed](/cli/azure/install-azure-cli).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurecli
-LOCATION='eastus'
-RESOURCE_GROUP_NAME='rg-msdocs-tables-sdk-demo'
-COSMOS_ACCOUNT_NAME='cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-COSMOS_TABLE_NAME='WeatherData'
-
-az group create \
- --location $LOCATION \
- --name $RESOURCE_GROUP_NAME
-
-az cosmosdb create \
- --name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --capabilities EnableTable
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-Azure Cosmos DB accounts are created using the [New-AzCosmosDBAccount](/powershell/module/az.cosmosdb/new-azcosmosdbaccount) cmdlet. You must include the `-ApiKind "Table"` option to enable table storage within your Cosmos DB. As all Azure resource must be contained in a resource group, the following code snippet also creates a resource group for the Azure Cosmos DB account.
-
-Azure Cosmos DB account names must be between 3 and 44 characters in length and may contain only lowercase letters, numbers, and the hyphen (-) character. Azure Cosmos DB account names must also be unique across Azure.
-
-Azure PowerShell commands can be run in the [Azure Cloud Shell](https://shell.azure.com) or on a workstation with [Azure PowerShell installed](/powershell/azure/install-az-ps).
-
-It typically takes several minutes for the Cosmos DB account creation process to complete.
-
-```azurepowershell
-$location = 'eastus'
-$resourceGroupName = 'rg-msdocs-tables-sdk-demo'
-$cosmosAccountName = 'cosmos-msdocs-tables-sdk-demo-123' # change 123 to a unique set of characters for a unique name
-
-# Create a resource group
-New-AzResourceGroup `
- -Location $location `
- -Name $resourceGroupName
-
-# Create an Azure Cosmos DB
-New-AzCosmosDBAccount `
- -Name $cosmosAccountName `
- -ResourceGroupName $resourceGroupName `
- -Location $location `
- -ApiKind "Table"
-```
--
+## Prerequisites
-## 2 - Create a table
+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+* [.NET 6.0](https://dotnet.microsoft.com/en-us/download)
+* [Azure Command-Line Interface (CLI)](/cli/azure/) or [Azure PowerShell](/powershell/azure/)
-Next, you need to create a table within your Cosmos DB account for your application to use. Unlike a traditional database, you only need to specify the name of the table, not the properties (columns) in the table. As data is loaded into your table, the properties (columns) will be automatically created as needed.
+### Prerequisite check
-### [Azure portal](#tab/azure-portal)
+* In a terminal or command window, run ``dotnet --list-sdks`` to check that .NET 6.x is one of the available versions.
+* Run ``az --version`` (Azure CLI) or ``Get-Module -ListAvailable AzureRM`` (Azure PowerShell) to check that you have the appropriate Azure command-line tools installed.
-In the [Azure portal](https://portal.azure.com/), complete the following steps to create a table inside your Cosmos DB account.
+## Setting up
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Create cosmos db table step 1](./includes/create-table-dotnet/create-cosmos-table-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1-240px.png" alt-text="A screenshot showing how to use the search box in the top tool bar to find your Cosmos DB account." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-1.png"::: |
-| [!INCLUDE [Create cosmos db table step 2](./includes/create-table-dotnet/create-cosmos-table-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2-240px.png" alt-text="A screenshot showing the location of the Add Table button." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-2.png"::: |
-| [!INCLUDE [Create cosmos db table step 3](./includes/create-table-dotnet/create-cosmos-table-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3-240px.png" alt-text="A screenshot showing how to New Table dialog box for an Cosmos DB table." lightbox="./media/create-table-dotnet/azure-portal-create-cosmos-db-table-api-3.png"::: |
+This section walks you through how to create an Azure Cosmos account and set up a project that uses the Table API NuGet packages.
-### [Azure CLI](#tab/azure-cli)
+### Create an Azure Cosmos DB account
-Tables in Cosmos DB are created using the [az cosmosdb table create](/cli/azure/cosmosdb/table#az-cosmosdb-table-create) command.
+This quickstart will create a single Azure Cosmos DB account using the Table API.
-```azurecli
-COSMOS_TABLE_NAME='WeatherData'
+#### [Azure CLI](#tab/azure-cli)
-az cosmosdb table create \
- --account-name $COSMOS_ACCOUNT_NAME \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_TABLE_NAME \
- --throughput 400
-```
-### [Azure PowerShell](#tab/azure-powershell)
+#### [PowerShell](#tab/azure-powershell)
-Tables in Cosmos DB are created using the [New-AzCosmosDBTable](/powershell/module/az.cosmosdb/new-azcosmosdbtable) cmdlet.
-```azurepowershell
-$cosmosTableName = 'WeatherData'
+#### [Portal](#tab/azure-portal)
-# Create the table for the application to use
-New-AzCosmosDBTable `
- -Name $cosmosTableName `
- -AccountName $cosmosAccountName `
- -ResourceGroupName $resourceGroupName
-```
-## 3 - Get Cosmos DB connection string
-
-To access your table(s) in Cosmos DB, your app will need the table connection string for the CosmosDB Storage account. The connection string can be retrieved using the Azure portal, Azure CLI or Azure PowerShell.
-
-### [Azure portal](#tab/azure-portal)
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Get cosmos db table connection string step 1](./includes/create-table-dotnet/get-cosmos-connection-string-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1-240px.png" alt-text="A screenshot showing the location of the connection strings link on the Cosmos DB page." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-1.png"::: |
-| [!INCLUDE [Get cosmos db table connection string step 2](./includes/create-table-dotnet/get-cosmos-connection-string-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2-240px.png" alt-text="A screenshot showing the which connection string to select and use in your application." lightbox="./media/create-table-dotnet/azure-portal-cosmos-db-table-connection-string-2.png"::: |
+### Get Table API connection string
-### [Azure CLI](#tab/azure-cli)
+#### [Azure CLI](#tab/azure-cli)
-To get the primary table storage connection string using Azure CLI, use the [az cosmosdb keys list](/cli/azure/cosmosdb/keys#az-cosmosdb-keys-list) command with the option `--type connection-strings`. This command uses a [JMESPath query](https://jmespath.org/) to display only the primary table connection string.
-```azurecli
-# This gets the primary Table connection string
-az cosmosdb keys list \
- --type connection-strings \
- --resource-group $RESOURCE_GROUP_NAME \
- --name $COSMOS_ACCOUNT_NAME \
- --query "connectionStrings[?description=='Primary Table Connection String'].connectionString" \
- --output tsv
-```
+#### [PowerShell](#tab/azure-powershell)
-### [Azure PowerShell](#tab/azure-powershell)
-To get the primary table storage connection string using Azure PowerShell, use the [Get-AzCosmosDBAccountKey](/powershell/module/az.cosmosdb/get-azcosmosdbaccountkey) cmdlet.
+#### [Portal](#tab/azure-portal)
-```azurepowershell
-# This gets the primary Table connection string
-$(Get-AzCosmosDBAccountKey `
- -ResourceGroupName $resourceGroupName `
- -Name $cosmosAccountName `
- -Type "ConnectionStrings")."Primary Table Connection String"
-```
-The connection string for your Cosmos DB account is considered an app secret and must be protected like any other app secret or password. This example uses the [Secret Manager tool](/aspnet/core/security/app-secrets#secret-manager) to store the connection string during development and make it available to the application. The Secret Manager tool can be accessed from either Visual Studio or the .NET CLI.
+### Create a new .NET app
-### [Visual Studio](#tab/visual-studio)
+Create a new .NET application in an empty folder using your preferred terminal. Use the [``dotnet new console``](/dotnet/core/tools/dotnet-new) command to create a new console app.
-To open the Secret Manager tool from Visual Studio, right-click on the project and select **Manage User Secrets** from the context menu. This will open the *secrets.json* file for the project. Replace the contents of the file with the JSON below, substituting in your Cosmos DB table connection string.
-
-```json
-{
- "ConnectionStrings": {
- "CosmosTableApi": "<cosmos db table connection string>"
- }
-}
+```console
+dotnet new console --output <app-name>
```
-### [.NET CLI](#tab/netcore-cli)
+### Install the NuGet package
-To use the Secret Manager, you must first initialize it for your project using the `dotnet user-secrets init` command.
-
-```dotnetcli
-dotnet user-secrets init
-```
-
-Then, use the `dotnet user-secrets set` command to add the Cosmos DB table connection string as a secret.
-
-```dotnetcli
-dotnet user-secrets set "ConnectionStrings:CosmosTableApi" "<cosmos db table connection string>"
-```
--
+Add the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables) NuGet package to the new .NET project. Use the [``dotnet add package``](/dotnet/core/tools/dotnet-add-package) command specifying the name of the NuGet package.
-## 4 - Install Azure.Data.Tables NuGet package
-
-To access the Cosmos DB Table API from a .NET application, install the [Azure.Data.Tables](https://www.nuget.org/packages/Azure.Data.Tables) NuGet package.
-
-### [Visual Studio](#tab/visual-studio)
-
-```PowerShell
-Install-Package Azure.Data.Tables
-```
-
-### [.NET CLI](#tab/netcore-cli)
-
-```dotnetcli
+```console
dotnet add package Azure.Data.Tables ``` --
-## 5 - Configure the Table client in Startup.cs
-
-The Azure SDK communicates with Azure using client objects to execute different operations against Azure. The [TableClient](/dotnet/api/azure.data.tables.tableclient) object is the object used to communicate with the Cosmos DB Table API.
-
-An application will typically create a single [TableClient](/dotnet/api/azure.data.tables.tableclient) object per table to be used throughout the application. It's recommended to use dependency injection (DI) and register the [TableClient](/dotnet/api/azure.data.tables.tableclient) object as a singleton to accomplish this. For more information about using DI with the Azure SDK, see [Dependency injection with the Azure SDK for .NET](/dotnet/azure/sdk/dependency-injection).
-
-In the `Startup.cs` file of the application, edit the ConfigureServices() method to match the following code snippet:
-
-```csharp
-public void ConfigureServices(IServiceCollection services)
-{
- services.AddRazorPages()
- .AddMvcOptions(options =>
- {
- options.Filters.Add(new ValidationFilter());
- });
-
- var connectionString = Configuration.GetConnectionString("CosmosTableApi");
- services.AddSingleton<TableClient>(new TableClient(connectionString, "WeatherData"));
-
- services.AddSingleton<TablesService>();
-}
-```
-
-You will also need to add the following using statement at the top of the Startup.cs file.
-
-```csharp
-using Azure.Data.Tables;
-```
-
-## 6 - Implement Cosmos DB table operations
-
-All Cosmos DB table operations for the sample app are implemented in the `TableService` class located in the *Services* directory. You will need to import the `Azure` and `Azure.Data.Tables` namespaces at the top of this file to work with objects in the `Azure.Data.Tables` SDK package.
-
-```csharp
-using Azure;
-using Azure.Data.Tables;
-```
-
-At the start of the `TableService` class, add a member variable for the [TableClient](/dotnet/api/azure.data.tables.tableclient) object and a constructor to allow the [TableClient](/dotnet/api/azure.data.tables.tableclient) object to be injected into the class.
+### Configure environment variables
-```csharp
-private TableClient _tableClient;
-public TablesService(TableClient tableClient)
-{
- _tableClient = tableClient;
-}
-```
-
-### Get rows from a table
+## Code examples
-The [TableClient](/dotnet/api/azure.data.tables.tableclient) class contains a method named [Query](/dotnet/api/azure.data.tables.tableclient.query) which allows you to select rows from the table. In this example, since no parameters are being passed to the method, all rows will be selected from the table.
+* [Authenticate the client](#authenticate-the-client)
+* [Create a table](#create-a-table)
+* [Create an item](#create-an-item)
+* [Get an item](#get-an-item)
+* [Query items](#query-items)
-The method also takes a generic parameter of type [ITableEntity](/dotnet/api/azure.data.tables.itableentity) that specifies the model class data will be returned as. In this case, the built-in class [TableEntity](/dotnet/api/azure.data.tables.itableentity) is used, meaning the `Query` method will return a `Pageable<TableEntity>` collection as its results.
+The sample code described in this article creates a table named ``adventureworks``. Each table row contains the details of a product such as name, category, quantity, and a sale indicator. Each product also contains a unique identifier.
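The article's own samples are C#; purely to make the row shape concrete, here is a sketch using the TypeScript `@azure/data-tables` package. The connection-string environment variable and the property names are assumptions inferred from the description above; the `adventureworks` table name comes from the article.

```typescript
// Hedged sketch: insert one product row into the 'adventureworks' table.
// The partition key, row key, and property names are illustrative assumptions.
import { TableClient } from "@azure/data-tables";

const connectionString = process.env["COSMOS_CONNECTION_STRING"];
if (!connectionString) {
  throw new Error("Set the COSMOS_CONNECTION_STRING environment variable first.");
}

const tableClient = TableClient.fromConnectionString(connectionString, "adventureworks");

async function main(): Promise<void> {
  const product = {
    partitionKey: "gear-surf-surfboards", // for example, the product category
    rowKey: "68719518388",                // unique identifier for the product
    name: "Ocean Surfboard",
    quantity: 8,
    sale: true,
  };

  // The store is schemaless, so the row's columns are created as the entity is written.
  await tableClient.createEntity(product);
}

main();
```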
-```csharp
-public IEnumerable<WeatherDataModel> GetAllRows()
-{
- Pageable<TableEntity> entities = _tableClient.Query<TableEntity>();
+You'll use the following Table API classes to interact with these resources:
- return entities.Select(e => MapTableEntityToWeatherDataModel(e));
-}
-```
+- [``TableServiceClient``](/dotnet/api/azure.data.tables.tableserviceclient) - This class provides methods to perform service-level operations with the Azure Cosmos DB Table API.
+- [``TableClient``](/dotnet/api/azure.data.tables.tableclient) - This class allows you to interact with tables hosted in the Azure Cosmos DB Table API.
+- [``TableEntity``](/dotnet/api/azure.data.tables.tableentity) - This class is a reference to a row in a table that allows you to manage properties and column data.
-The [TableEntity](/dotnet/api/azure.data.tables.itableentity) class defined in the `Azure.Data.Tables` package has properties for the partition key and row key values in the table. Together, these two values for a unique key for the row in the table. In this example application, the name of the weather station (city) is stored in the partition key and the date/time of the observation is stored in the row key. All other properties (temperature, humidity, wind speed) are stored in a dictionary in the `TableEntity` object.
+### Authenticate the client
-It is common practice to map a [TableEntity](/dotnet/api/azure.data.tables.tableentity) object to an object of your own definition. The sample application defines a class `WeatherDataModel` in the *Models* directory for this purpose. This class has properties for the station name and observation date that the partition key and row key will map to, providing more meaningful property names for these values. It then uses a dictionary to store all the other properties on the object. This is a common pattern when working with Table storage since a row can have any number of arbitrary properties and we want our model objects to be able to capture all of them. This class also contains methods to list the properties on the class.
+From the project directory, open the *Program.cs* file. In your editor, add a using directive for ``Azure.Data.Tables``.
-```csharp
-public class WeatherDataModel
-{
- // Captures all of the weather data properties -- temp, humidity, wind speed, etc
- private Dictionary<string, object> _properties = new Dictionary<string, object>();
- public string StationName { get; set; }
+Define a new instance of the ``TableServiceClient`` class using the constructor and [``Environment.GetEnvironmentVariable``](/dotnet/api/system.environment.getenvironmentvariable) to read the connection string you set earlier.
- public string ObservationDate { get; set; }
-
- public DateTimeOffset? Timestamp { get; set; }
-
- public string Etag { get; set; }
-
- public object this[string name]
- {
- get => ( ContainsProperty(name)) ? _properties[name] : null;
- set => _properties[name] = value;
- }
-
- public ICollection<string> PropertyNames => _properties.Keys;
-
- public int PropertyCount => _properties.Count;
-
- public bool ContainsProperty(string name) => _properties.ContainsKey(name);
-}
-```
-
-The `MapTableEntityToWeatherDataModel` method is used to map a [TableEntity](/dotnet/api/azure.data.tables.itableentity) object to a `WeatherDataModel` object. The [TableEntity](/dotnet/api/azure.data.tables.itableentity) object contains a [Keys](/dotnet/api/azure.data.tables.tableentity.keys) property to get all of the property names contained in the table for the object (effectively the column names for this row in the table). The `MapTableEntityToWeatherDataModel` method directly maps the `PartitionKey`, `RowKey`, `Timestamp`, and `Etag` properties and then uses the `Keys` property to iterate over the other properties in the `TableEntity` object and map those to the `WeatherDataModel` object, minus the properties that have already been directly mapped.
-
-Edit the code in the `MapTableEntityToWeatherDataModel` method to match the following code block.
-
-```csharp
-public WeatherDataModel MapTableEntityToWeatherDataModel(TableEntity entity)
-{
- WeatherDataModel observation = new WeatherDataModel();
- observation.StationName = entity.PartitionKey;
- observation.ObservationDate = entity.RowKey;
- observation.Timestamp = entity.Timestamp;
- observation.Etag = entity.ETag.ToString();
-
- var measurements = entity.Keys.Where(key => !EXCLUDE_TABLE_ENTITY_KEYS.Contains(key));
- foreach (var key in measurements)
- {
- observation[key] = entity[key];
- }
- return observation;
-}
-```
-
-### Filter rows returned from a table
-
-To filter the rows returned from a table, you can pass an OData style filter string to the [Query](/dotnet/api/azure.data.tables.tableclient.query) method. For example, if you wanted to get all of the weather readings for Chicago between midnight July 1, 2021 and midnight July 2, 2021 (inclusive) you would pass in the following filter string.
-
-```odata
-PartitionKey eq 'Chicago' and RowKey ge '2021-07-01 12:00 AM' and RowKey le '2021-07-02 12:00 AM'
-```
-
-You can view all OData filter operators on the OData website in the section [Filter System Query Option](https://www.odata.org/documentation/odata-version-2-0/uri-conventions/).
-
-In the example application, the `FilterResultsInputModel` object is designed to capture any filter criteria provided by the user.
-
-```csharp
-public class FilterResultsInputModel : IValidatableObject
-{
- public string PartitionKey { get; set; }
- public string RowKeyDateStart { get; set; }
- public string RowKeyTimeStart { get; set; }
- public string RowKeyDateEnd { get; set; }
- public string RowKeyTimeEnd { get; set; }
- [Range(-100, +200)]
- public double? MinTemperature { get; set; }
- [Range(-100,200)]
- public double? MaxTemperature { get; set; }
- [Range(0, 300)]
- public double? MinPrecipitation { get; set; }
- [Range(0,300)]
- public double? MaxPrecipitation { get; set; }
-}
-```
-
-When this object is passed to the `GetFilteredRows` method in the `TableService` class, it creates a filter string for each non-null property value. It then creates a combined filter string by joining all of the values together with an "and" clause. This combined filter string is passed to the [Query](/dotnet/api/azure.data.tables.tableclient.query) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object and only rows matching the filter string will be returned. You can use a similar method in your code to construct suitable filter strings as required by your application.
-
-```csharp
-public IEnumerable<WeatherDataModel> GetFilteredRows(FilterResultsInputModel inputModel)
-{
- List<string> filters = new List<string>();
-
- if (!String.IsNullOrEmpty(inputModel.PartitionKey))
- filters.Add($"PartitionKey eq '{inputModel.PartitionKey}'");
- if (!String.IsNullOrEmpty(inputModel.RowKeyDateStart) && !String.IsNullOrEmpty(inputModel.RowKeyTimeStart))
- filters.Add($"RowKey ge '{inputModel.RowKeyDateStart} {inputModel.RowKeyTimeStart}'");
- if (!String.IsNullOrEmpty(inputModel.RowKeyDateEnd) && !String.IsNullOrEmpty(inputModel.RowKeyTimeEnd))
- filters.Add($"RowKey le '{inputModel.RowKeyDateEnd} {inputModel.RowKeyTimeEnd}'");
- if (inputModel.MinTemperature.HasValue)
- filters.Add($"Temperature ge {inputModel.MinTemperature.Value}");
- if (inputModel.MaxTemperature.HasValue)
- filters.Add($"Temperature le {inputModel.MaxTemperature.Value}");
- if (inputModel.MinPrecipitation.HasValue)
- filters.Add($"Precipitation ge {inputModel.MinTemperature.Value}");
- if (inputModel.MaxPrecipitation.HasValue)
- filters.Add($"Precipitation le {inputModel.MaxTemperature.Value}");
-
- string filter = String.Join(" and ", filters);
- Pageable<TableEntity> entities = _tableClient.Query<TableEntity>(filter);
-
- return entities.Select(e => MapTableEntityToWeatherDataModel(e));
-}
-```
-
-### Insert data using a TableEntity object
-
-The simplest way to add data to a table is by using a [TableEntity](/dotnet/api/azure.data.tables.itableentity) object. In this example, data is mapped from an input model object to a [TableEntity](/dotnet/api/azure.data.tables.itableentity) object. The properties on the input object representing the weather station name and observation date/time are mapped to the [PartitionKey](/dotnet/api/azure.data.tables.tableentity.partitionkey) and [RowKey](/dotnet/api/azure.data.tables.tableentity.rowkey) properties respectively, which together form a unique key for the row in the table. Then the additional properties on the input model object are mapped to dictionary properties on the TableEntity object. Finally, the [AddEntity](/dotnet/api/azure.data.tables.tableclient.addentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object is used to insert data into the table.
-
-Modify the `InsertTableEntity` method in the example application to contain the following code.
-
-```csharp
-public void InsertTableEntity(WeatherInputModel model)
-{
- TableEntity entity = new TableEntity();
- entity.PartitionKey = model.StationName;
- entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";
-
- // The other values are added like items to a dictionary
- entity["Temperature"] = model.Temperature;
- entity["Humidity"] = model.Humidity;
- entity["Barometer"] = model.Barometer;
- entity["WindDirection"] = model.WindDirection;
- entity["WindSpeed"] = model.WindSpeed;
- entity["Precipitation"] = model.Precipitation;
-
- _tableClient.AddEntity(entity);
-}
-```
-### Upsert data using a TableEntity object
+### Create a table
-If you try to insert a row into a table with a partition key/row key combination that already exists in that table, you will receive an error. For this reason, it is often preferable to use the [UpsertEntity](/dotnet/api/azure.data.tables.tableclient.upsertentity) method instead of the AddEntity method when adding rows to a table. If the given partition key/row key combination already exists in the table, the [UpsertEntity](/dotnet/api/azure.data.tables.tableclient.upsertentity) method will update the existing row. Otherwise, the row will be added to the table.
+Retrieve an instance of the `TableClient` using the `TableServiceClient` class. Use the [``TableClient.CreateIfNotExistsAsync``](/dotnet/api/azure.data.tables.tableclient.createifnotexistsasync) method on the `TableClient` to create a new table if it doesn't already exist. This method will return a reference to the existing or newly created table.
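+
+A sketch of this step, continuing from the `client` object above (the table name `adventureworks` is an assumption used only for illustration):
+
+```csharp
+// Get a client scoped to a single table, then create the table if it doesn't exist yet.
+TableClient table = client.GetTableClient("adventureworks");
+await table.CreateIfNotExistsAsync();
+```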
-```csharp
-public void UpsertTableEntity(WeatherInputModel model)
-{
- TableEntity entity = new TableEntity();
- entity.PartitionKey = model.StationName;
- entity.RowKey = $"{model.ObservationDate} {model.ObservationTime}";
- // The other values are added like items to a dictionary
- entity["Temperature"] = model.Temperature;
- entity["Humidity"] = model.Humidity;
- entity["Barometer"] = model.Barometer;
- entity["WindDirection"] = model.WindDirection;
- entity["WindSpeed"] = model.WindSpeed;
- entity["Precipitation"] = model.Precipitation;
+### Create an item
- _tableClient.UpsertEntity(entity);
-}
-```
+The easiest way to create a new item in a table is to create a class that implements the [``ITableEntity``](/dotnet/api/azure.data.tables.itableentity) interface. You can then add your own properties to the class to populate columns of data in that table row.
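+
+For example, a hypothetical `Product` entity could be sketched as follows; the `Name` and `Quantity` columns are assumptions made for illustration, and `PartitionKey` is treated here as the product category:
+
+```csharp
+using System;
+using Azure;
+using Azure.Data.Tables;
+
+// ITableEntity requires PartitionKey, RowKey, Timestamp, and ETag.
+public record Product : ITableEntity
+{
+    public string RowKey { get; set; } = default!;
+    public string PartitionKey { get; set; } = default!;
+
+    // Custom columns stored for this table row (illustrative).
+    public string Name { get; set; } = default!;
+    public int Quantity { get; set; }
+
+    public DateTimeOffset? Timestamp { get; set; }
+    public ETag ETag { get; set; }
+}
+```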
-### Insert or upsert data with variable properties
-One of the advantages of using the Cosmos DB Table API is that if an object being loaded to a table contains any new properties then those properties are automatically added to the table and the values stored in Cosmos DB. There is no need to run DDL statements like ALTER TABLE to add columns as in a traditional database.
+Create an item in the collection using the `Product` class by calling [``TableClient.AddEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.addentityasync).
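+
+A sketch of the call, reusing the hypothetical `Product` record above (the key and property values are illustrative):
+
+```csharp
+// Build a new entity; PartitionKey acts as the category and RowKey as a unique product ID.
+Product newItem = new()
+{
+    RowKey = "68719518388",
+    PartitionKey = "gear-surf-surfboards",
+    Name = "Yamba Surfboard",
+    Quantity = 12
+};
+
+// Insert the entity into the table.
+await table.AddEntityAsync(newItem);
+```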
-This model gives your application flexibility when dealing with data sources that may add or modify what data needs to be captured over time or when different inputs provide different data to your application. In the sample application, we can simulate a weather station that sends not just the base weather data but also some additional values. When an object with these new properties is stored in the table for the first time, the corresponding properties (columns) will be automatically added to the table.
-In the sample application, the `ExpandableWeatherObject` class is built around an internal dictionary to support any set of properties on the object. This class represents a typical pattern for when an object needs to contain an arbitrary set of properties.
+### Get an item
-```csharp
-public class ExpandableWeatherObject
-{
- public Dictionary<string, object> _properties = new Dictionary<string, object>();
+You can retrieve a specific item from a table using the [``TableClient.GetEntityAsync<T>``](/dotnet/api/azure.data.tables.tableclient.getentityasync) method. Provide the `partitionKey` and `rowKey` as parameters to identify the correct row and perform a quick *point read* of that item.
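+
+A sketch of a point read, reusing the illustrative key values from the insert above:
+
+```csharp
+// Fetch exactly one entity by its partition key and row key (a point read).
+var response = await table.GetEntityAsync<Product>(
+    partitionKey: "gear-surf-surfboards",
+    rowKey: "68719518388");
+
+Console.WriteLine($"Single product name:\n{response.Value.Name}");
+```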
- public string StationName { get; set; }
- public string ObservationDate { get; set; }
+### Query items
- public object this[string name]
- {
- get => (ContainsProperty(name)) ? _properties[name] : null;
- set => _properties[name] = value;
- }
+After you insert an item, you can also run a query to get all items that match a specific filter by using the `TableClient.Query<T>` method. This example filters products by category using [LINQ](/dotnet/standard/linq) syntax, which is a benefit of using strongly typed `ITableEntity` models like the `Product` class.
- public ICollection<string> PropertyNames => _properties.Keys;
+> [!NOTE]
+> You can also query items using [OData](/rest/api/storageservices/querying-tables-and-entities) syntax. You can see an example of this approach in the [Query Data](/azure/cosmos-db/table/tutorial-query-table) tutorial.
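+
+A sketch of such a query, again assuming the hypothetical `Product` record and that `PartitionKey` holds the category:
+
+```csharp
+// Return every product in one category using a LINQ-style filter expression.
+Console.WriteLine("Multiple products:");
+foreach (Product product in table.Query<Product>(p => p.PartitionKey == "gear-surf-surfboards"))
+{
+    Console.WriteLine(product.Name);
+}
+```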
- public int PropertyCount => _properties.Count;
- public bool ContainsProperty(string name) => _properties.ContainsKey(name);
-}
-```
+## Run the code
-To insert or upsert such an object using the Table API, map the properties of the expandable object into a [TableEntity](/dotnet/api/azure.data.tables.tableentity) object and use the [AddEntity](/dotnet/api/azure.data.tables.tableclient.addentity) or [UpsertEntity](/dotnet/api/azure.data.tables.tableclient.upsertentity) methods on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object as appropriate.
-
-```csharp
-public void InsertExpandableData(ExpandableWeatherObject weatherObject)
-{
- TableEntity entity = new TableEntity();
- entity.PartitionKey = weatherObject.StationName;
- entity.RowKey = weatherObject.ObservationDate;
-
- foreach (string propertyName in weatherObject.PropertyNames)
- {
- var value = weatherObject[propertyName];
- entity[propertyName] = value;
- }
- _tableClient.AddEntity(entity);
-}
+This app creates an Azure Cosmos DB Table API table. The example then creates an item and reads the exact same item back. Finally, the example creates a second item and then performs a query that should return multiple items. With each step, the example outputs metadata to the console about the steps it has performed.
-
-public void UpsertExpandableData(ExpandableWeatherObject weatherObject)
-{
- TableEntity entity = new TableEntity();
- entity.PartitionKey = weatherObject.StationName;
- entity.RowKey = weatherObject.ObservationDate;
-
- foreach (string propertyName in weatherObject.PropertyNames)
- {
- var value = weatherObject[propertyName];
- entity[propertyName] = value;
- }
- _tableClient.UpsertEntity(entity);
-}
+To run the app, use a terminal to navigate to the application directory and run the application.
+```dotnetcli
+dotnet run
```
-
-### Update an entity
-
-Entities can be updated by calling the [UpdateEntity](/dotnet/api/azure.data.tables.tableclient.updateentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object. Because an entity (row) stored using the Table API could contain any arbitrary set of properties, it is often useful to create an update object based around a Dictionary object similar to the `ExpandableWeatherObject` discussed earlier. In this case, the only difference is the addition of an `Etag` property which is used for concurrency control during updates.
-
-```csharp
-public class UpdateWeatherObject
-{
- public Dictionary<string, object> _properties = new Dictionary<string, object>();
- public string StationName { get; set; }
- public string ObservationDate { get; set; }
- public string Etag { get; set; }
+The output of the app should be similar to this example:
- public object this[string name]
- {
- get => (ContainsProperty(name)) ? _properties[name] : null;
- set => _properties[name] = value;
- }
-
- public ICollection<string> PropertyNames => _properties.Keys;
-
- public int PropertyCount => _properties.Count;
-
- public bool ContainsProperty(string name) => _properties.ContainsKey(name);
-}
+```output
+Single product name:
+Yamba Surfboard
+Multiple products:
+Yamba Surfboard
+Sand Surfboard
```
-In the sample app, this object is passed to the `UpdateEntity` method in the `TableService` class. This method first loads the existing entity from the Table API using the [GetEntity](/dotnet/api/azure.data.tables.tableclient.getentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient). It then updates that entity object and uses the `UpdateEntity` method to save the updates to the database. Note how the [UpdateEntity](/dotnet/api/azure.data.tables.tableclient.updateentity) method takes the current Etag of the object to ensure the object has not changed since it was initially loaded. If you want to update the entity regardless, you may pass a value of `ETag.All` to the `UpdateEntity` method.
+## Clean up resources
-```csharp
-public void UpdateEntity(UpdateWeatherObject weatherObject)
-{
- string partitionKey = weatherObject.StationName;
- string rowKey = weatherObject.ObservationDate;
+When you no longer need the Azure Cosmos DB Table API account, you can delete the corresponding resource group.
- // Use the partition key and row key to get the entity
- TableEntity entity = _tableClient.GetEntity<TableEntity>(partitionKey, rowKey).Value;
+### [Azure CLI](#tab/azure-cli)
- foreach (string propertyName in weatherObject.PropertyNames)
- {
- var value = weatherObject[propertyName];
- entity[propertyName] = value;
- }
+Use the [``az group delete``](/cli/azure/group#az-group-delete) command to delete the resource group.
- _tableClient.UpdateEntity(entity, new ETag(weatherObject.Etag));
-}
+```azurecli-interactive
+az group delete --name $resourceGroupName
```
-### Remove an entity
+### [PowerShell](#tab/azure-powershell)
-To remove an entity from a table, call the [DeleteEntity](/dotnet/api/azure.data.tables.tableclient.deleteentity) method on the [TableClient](/dotnet/api/azure.data.tables.tableclient) object with the partition key and row key of the object.
+Use the [``Remove-AzResourceGroup``](/powershell/module/az.resources/remove-azresourcegroup) cmdlet to delete the resource group.
-```csharp
-public void RemoveEntity(string partitionKey, string rowKey)
-{
- _tableClient.DeleteEntity(partitionKey, rowKey);
+```azurepowershell-interactive
+$parameters = @{
+ Name = $RESOURCE_GROUP_NAME
}
+Remove-AzResourceGroup @parameters
```
-## 7 - Run the code
-
-Run the sample application to interact with the Cosmos DB Table API. The first time you run the application, there will be no data because the table is empty. Use any of the buttons at the top of the application to add data to the table.
-
+### [Portal](#tab/azure-portal)
-Selecting the **Insert using Table Entity** button opens a dialog allowing you to insert or upsert a new row using a `TableEntity` object.
+1. Navigate to the resource group you previously created in the Azure portal.
+ > [!TIP]
+ > In this quickstart, we recommended the name ``msdocs-cosmos-quickstart-rg``.
+1. Select **Delete resource group**.
-Selecting the **Insert using Expandable Data** button brings up a dialog that enables you to insert an object with custom properties, demonstrating how the Cosmos DB Table API automatically adds properties (columns) to the table when needed. Use the *Add Custom Field* button to add one or more new properties and demonstrate this capability.
+ :::image type="content" source="media/dotnet-quickstart/delete-resource-group-option.png" alt-text="Screenshot of the Delete resource group option in the navigation bar for a resource group.":::
+1. On the **Are you sure you want to delete** dialog, enter the name of the resource group, and then select **Delete**.
-Use the **Insert Sample Data** button to load some sample data into your Cosmos DB Table.
--
-Select the **Filter Results** item in the top menu to be taken to the Filter Results page. On this page, fill out the filter criteria to demonstrate how a filter clause can be built and passed to the Cosmos DB Table API.
--
-## Clean up resources
-
-When you are finished with the sample application, you should remove all Azure resources related to this article from your Azure account. You can do this by deleting the resource group.
-
-### [Azure portal](#tab/azure-portal)
-
-A resource group can be deleted using the [Azure portal](https://portal.azure.com/) by doing the following.
-
-| Instructions | Screenshot |
-|:-|--:|
-| [!INCLUDE [Delete resource group step 1](./includes/create-table-dotnet/remove-resource-group-1.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-1-240px.png" alt-text="A screenshot showing how to search for a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-1.png"::: |
-| [!INCLUDE [Delete resource group step 2](./includes/create-table-dotnet/remove-resource-group-2.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-2-240px.png" alt-text="A screenshot showing the location of the Delete resource group button." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-2.png"::: |
-| [!INCLUDE [Delete resource group step 3](./includes/create-table-dotnet/remove-resource-group-3.md)] | :::image type="content" source="./media/create-table-dotnet/azure-portal-remove-resource-group-3-240px.png" alt-text="A screenshot showing the confirmation dialog for deleting a resource group." lightbox="./media/create-table-dotnet/azure-portal-remove-resource-group-3.png"::: |
-
-### [Azure CLI](#tab/azure-cli)
-
-To delete a resource group using the Azure CLI, use the [az group delete](/cli/azure/group#az-group-delete) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurecli
-az group delete --name $RESOURCE_GROUP_NAME
-```
-
-### [Azure PowerShell](#tab/azure-powershell)
-
-To delete a resource group using Azure PowerShell, use the [Remove-AzResourceGroup](/powershell/module/az.resources/remove-azresourcegroup) command with the name of the resource group to be deleted. Deleting a resource group will also remove all Azure resources contained in the resource group.
-
-```azurepowershell
-Remove-AzResourceGroup -Name $resourceGroupName
-```
+ :::image type="content" source="media/dotnet-quickstart/delete-confirmation.png" alt-text="Screenshot of the delete confirmation page for a resource group.":::
## Next steps
-In this quickstart, you've learned how to create an Azure Cosmos DB account, create a table using the Data Explorer, and run an app. Now you can query your data using the Table API.
+In this quickstart, you learned how to create an Azure Cosmos DB Table API account, create a table, and manage entries using the .NET SDK. You can now dive deeper into the SDK to learn how to perform more advanced data queries and management tasks in your Azure Cosmos DB Table API resources.
> [!div class="nextstepaction"]
-> [Import table data to the Table API](table-import.md)
+> [Get started with Azure Cosmos DB Table API and .NET](/azure/cosmos-db/table/how-to-dotnet-get-started)
cosmos-db How To Create Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/table/how-to-create-container.md
This article explains the different ways to create a container in Azure Cosmos D
1. Sign in to the [Azure portal](https://portal.azure.com/).
-1. [Create a new Azure Cosmos account](create-table-dotnet.md#1create-an-azure-cosmos-db-account), or select an existing account.
+1. [Create a new Azure Cosmos account](create-table-dotnet.md#create-an-azure-cosmos-db-account), or select an existing account.
1. Open the **Data Explorer** pane, and select **New Table**. Next, provide the following details:
cost-management-billing Enable Preview Features Cost Management Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/enable-preview-features-cost-management-labs.md
description: This article explains how to explore preview features and provides a list of the recent previews you might be interested in. Previously updated : 06/23/2022 Last updated : 07/11/2022
Understanding what you're being charged for can be complicated. The best place t
Many Azure services use nested or child resources. SQL servers have databases, storage accounts have containers, and virtual networks have subnets. Most of the child resources are only used to configure services, but sometimes the resources have their own usage and charges. SQL databases are perhaps the most common example.
-SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to find the server and each database and then manually sum up their total cost. As an example, you can see the **aepool** elastic pool at the top of the list below and the **treyanalyticsengine** server lower down on the first page. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
+SQL databases are deployed as part of a SQL server instance, but usage is tracked at the database level. Additionally, you might also have charges on the parent server, like for Microsoft Defender for Cloud. To get the total cost for your SQL deployment in classic cost analysis, you need to manually sum up the cost of the server and each individual database. As an example, you can see the **aepool** elastic pool at the top of the list below and the **treyanalyticsengine** server lower down on the first page. What you don't see is another database even lower in the list. You can imagine how troubling this situation would be when you need the total cost of a large server instance with many databases.
Here's an example showing classic cost analysis where multiple related resource costs aren't grouped.
If you don't have a budget yet, you'll see a link to create a new budget. Budget
Group related resources, like disks under VMs or web apps under App Service plans, by adding a "cm-resource-parent" tag to the child resources with a value of the parent resource ID. Wait 24 hours for tags to be available in usage and your resources will be grouped. Leave feedback to let us know how we can improve this experience further for you.
-Some resources have related dependencies that aren't explicit children or nested under the logical parent in Azure Resource Manager. Examples include disks used by a virtual machine or web apps assigned to an App Service plan. Unfortunately, Cost Management isn't aware of these relationships and cannot group them automatically. This experimental feature uses tags to summarize the total cost of your related resources together. You'll see a single row with the parent resource. When you expand the parent resource, you'll see each linked resource listed individually with their respective cost.
+Some resources have related dependencies that aren't explicit children or nested under the logical parent in Azure Resource Manager. Examples include disks used by a virtual machine or web apps assigned to an App Service plan. Unfortunately, Cost Management isn't aware of these relationships and can't group them automatically. This experimental feature uses tags to summarize the total cost of your related resources together. You'll see a single row with the parent resource. When you expand the parent resource, you'll see each linked resource listed individually with their respective cost.
-As an example, let's say you have an Azure Virtual Desktop host pool configured with two VMs. Tagging the VMs and corresponding network/disk resources groups them under the host pool, giving you the total cost of the session host VMs in your host pool deployment. This gets even more interesting if you want to also include the cost of any cloud solutions made available via your host pool.
+As an example, let's say you have an Azure Virtual Desktop host pool configured with two VMs. Tagging the VMs and corresponding network/disk resources groups them under the host pool, giving you the total cost of the session host VMs in your host pool deployment. This example gets even more interesting if you want to also include the cost of any cloud solutions made available via your host pool.
:::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-resource-parent-virtual-desktop.png" alt-text="Screenshot of the cost analysis preview showing VMs and disks grouped under an Azure Virtual Desktop host pool." lightbox="./media/enable-preview-features-cost-management-labs/cost-analysis-resource-parent-virtual-desktop.png" :::
Once you know which resources you'd like to group, use the following steps to ta
3. Find the **Resource ID** property and copy its value. 4. Open **All resources** or the resource group that has the resources you want to link. 5. Select the checkboxes for every resource you want to link and click the **Assign tags** command.
-6. Specify a tag key of "cm-resource-parent" (make sure it is typed correctly) and paste the resource ID from step 3.
+6. Specify a tag key of "cm-resource-parent" (make sure it's typed correctly) and paste the resource ID from step 3.
7. Wait 24 hours for new usage to be sent to Cost Management with the tags. (Keep in mind resources must be actively running with charges for tags to be updated in Cost Management.) 8. Open the [Resources view](https://aka.ms/costanalysis/resources) in the cost analysis preview.
Charts in the cost analysis preview include a chart of daily or monthly charges
Charts are enabled on the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
+<a name="cav3forecast"></a>
+
+## Forecast in the cost analysis preview
+
+Show the forecast for the current period at the top of the cost analysis preview.
+
+The forecast can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Use the **How would you rate the cost analysis preview?** option at the bottom of the page to share feedback about the preview.
++
+<a name="productscolumn"></a>
+
+## Product column in the cost analysis preview
+
+Every service tracks different usage attributes of the resources you've deployed. Each of these usage attributes is tracked via a "meter" in your cost data. Meters are grouped into categories and include other metadata to help you understand the charges. We're testing new columns in the Resources and Services views in the cost analysis preview for Microsoft Customer Agreement. You may see a single Product column instead of the Service, Tier, and Meter columns.
+
+You can also enable this preview from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal. Note this preview is only applicable for Microsoft Customer Agreement accounts.
++
+<a name="recommendationinsights"></a>
+
+## Cost savings insights in the cost analysis preview
+
+Cost insights surface important details about your subscriptions, like potential anomalies or top cost contributors. To support your cost optimization goals, cost insights now include the total cost savings available from Azure Advisor for your subscription.
+
+You can enable cost savings insights for subscriptions from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal.
++
+<a name="resourceessentials"></a>
+
+## View cost for your resources
+
+Cost analysis is available from every management group, subscription, resource group, and billing scope in the Azure portal and the Microsoft 365 admin center. To make cost data more readily accessible for resource owners, you can now find a **View cost** link at the top-right of every resource overview screen, in **Essentials**. Clicking the link will open classic cost analysis with a resource filter applied.
+
+The view cost link is enabled by default in the [Azure preview portal](https://preview.portal.azure.com).
++
+<a name="whatsnew"></a>
+
+## What's new in Cost Management
+
+Learn about new and updated features or other announcements directly from within the Cost Management experience in the Azure portal. You can also follow along using the [Cost Management updates on the Azure blog](https://aka.ms/costmgmt/blog).
+
+What's new can be enabled from the [Try preview](https://aka.ms/costmgmt/trypreview) page in the Azure portal.
++ <a name="onlyinconfig"></a> ## Streamlined menu
You can enable **Open config items in the menu** on the [Try preview](https://ak
## Change scope from menu
-If you manage many subscriptions and need to switch between subscriptions or resource groups often, you might want to include the **Change scope from menu** option.
+If you manage many subscriptions, resource groups, or management groups and need to switch between them often, you might want to include the **Change scope from menu** option.
:::image type="content" source="./media/enable-preview-features-cost-management-labs/cost-analysis-change-scope-menu.png" alt-text="Screenshot showing the Change scope option added to the menu after selecting the Change menu from scope preview option." lightbox="./media/enable-preview-features-cost-management-labs/cost-analysis-change-scope-menu.png" :::
cost-management-billing Group Filter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/group-filter.md
The following table lists some of the most common grouping and filtering options
| **Pricing model** | Break down costs by on-demand, reservation, or spot usage. | Purchases show as **OnDemand**. If you see **Not applicable**, group by **Reservation** to determine whether the usage is reservation or on-demand usage and **Charge type** to identify purchases. | **Provider** | Break down costs by the provider type: Azure, Microsoft 365, Dynamics 365, AWS, and so on. | Identifier for product and line of business. | | **Publisher type** | Break down Microsoft, Azure, AWS, and Marketplace costs. | Values are **Microsoft** for MCA accounts and **Azure** for EA and pay-as-you-go accounts. |
-| **Reservation** | Break down costs by reservation. | Any usage or purchases that aren't associated with a reservation will show as **No reservation**. Group by **Publisher type** to identify other Azure, AWS, or Marketplace purchases. |
+| **Reservation** | Break down costs by reservation. | Any usage or purchases that aren't associated with a reservation will show as **No reservation** or **No values**. Group by **Publisher type** to identify other Azure, AWS, or Marketplace purchases. |
| **Resource** | Break down costs by resource. | Marketplace purchases show as **Other Marketplace purchases** and Azure purchases, like Reservations and Support charges, show as **Other Azure purchases**. Group by or filter on **Publisher type** to identify other Azure, AWS, or Marketplace purchases. | | **Resource group** | Break down costs by resource group. | Purchases, tenant resources not associated with subscriptions, subscription resources not deployed to a resource group, and classic resources don't have a resource group and will show as **Other Marketplace purchases**, **Other Azure purchases**, **Other tenant resources**, **Other subscription resources**, **$system**, or **Other charges**. | | **Resource type** | Break down costs by resource type. | Purchases and classic services don't have an Azure Resource Manager resource type and will show as **others**, **classic services**, or **No resource type**. |
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly.
<tr><td><b>Service category</b></td><td><b>Service improvements</b></td><td><b>Details</b></td></tr> <tr><td rowspan=3><b>Data flow</b></td><td>Fuzzy join supported for data flows</td><td>Fuzzy join is now supported in Join transformation of data flows with configurable similarity score on join conditions.<br><a href="data-flow-join.md#fuzzy-join">Learn more</a></td></tr> <tr><td>Editing capabilities in source projection</td><td>Editing capabilities in source projection is available in Dataflow to make schemas modifications easily<br><a href="data-flow-source.md#source-options">Learn more</a></td></tr>
-<tr><td>Cast transformation and assert error handling</td><td>Cast transformation and assert error handling are now supported in data flows for better transformation.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
+<tr><td>Assert error handling</td><td>Assert error handling is now supported in data flows for data quality and data validation.<br><a href="data-flow-assert.md">Learn more</a></td></tr>
<tr><td rowspan=2><b>Data Movement</b></td><td>Parameterization natively supported in additional 4 connectors</td><td>We added native UI support of parameterization for the following linked
-<tr><td>SAP Change Data Capture (CDC) capabilities in the new SAP ODP connector</td><td>SAP Change Data Capture (CDC) capabilities are now supported in the new SAP ODP connector.<br><a href="sap-change-data-capture-introduction-architecture.md">Learn more</a></td></tr>
+<tr><td>SAP Change Data Capture (CDC) capabilities in the new SAP ODP connector (Public Preview)</td><td>SAP Change Data Capture (CDC) capabilities are now supported in the new SAP ODP connector.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-the-public-preview-of-the-sap-cdc-solution-in-azure/ba-p/3420904">Learn more</a></td></tr>
+<tr><td><b>Orchestration</b></td><td>'turnOffAsync' property is available in Web activity</td><td>Web activity supports an async request-reply pattern that invokes HTTP GET on the Location field in the response header of an HTTP 202 Response. It helps the Web activity automatically poll the monitoring endpoint until the job finishes. The 'turnOffAsync' property is supported to disable this behavior in cases where polling isn't needed.<br><a href="control-flow-web-activity.md#type-properties">Learn more</a></td></tr>
+<tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr>
<tr><td><b>Integration Runtime</b></td><td>Time-To-Live in managed VNET (Public Preview)</td><td>Time-To-Live can be set to the provisioned computes in managed VNET.<br><a href="https://techcommunity.microsoft.com/t5/azure-data-factory-blog/announcing-public-preview-of-time-to-live-ttl-in-managed-virtual/ba-p/3552879">Learn more</a></td></tr> <tr><td><b>Monitoring</b></td><td> Rerun pipeline with new parameters</td><td>You can now rerun pipelines with new parameter values in Azure Data Factory.<br><a href="monitor-visually.md#rerun-pipelines-and-activities">Learn more</a></td></tr> </table>
This page is updated monthly, so revisit it regularly.
<tr><td rowspan=2><b>Data movement</b></td><td>Get metadata-driven data ingestion pipelines on the Data Factory Copy Data tool within 10 minutes (GA)</td><td>You can build large-scale data copy pipelines with a metadata-driven approach on the Copy Data tool within 10 minutes.<br><a href="copy-data-tool-metadata-driven.md">Learn more</a></td></tr> <tr><td>Data Factory Google AdWords connector API upgrade available</td><td>The Data Factory Google AdWords connector now supports the new AdWords API version. No action is required for the new connector user because it's enabled by default.<br><a href="connector-troubleshoot-google-adwords.md#migrate-to-the-new-version-of-google-ads-api">Learn more</a></td></tr>
+<tr><td><b>Continuous integration and continuous delivery (CI/CD)</b></td><td>Cross-tenant Azure DevOps support</td><td>Configure a repository using Azure DevOps Git in a different tenant than the Azure Data Factory.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
+ <tr><td><b>Region expansion</b></td><td>Data Factory is now available in West US3 and Jio India West</td><td>Data Factory is now available in two new regions: West US3 and Jio India West. You can colocate your ETL workflow in these new regions if you're using these regions to store and manage your modern data warehouse. You can also use these regions for business continuity and disaster recovery purposes if you need to fail over from another region within the geo.<br><a href="https://azure.microsoft.com/global-infrastructure/services/?products=data-factory&regions=all">Learn more</a></td></tr> <tr><td><b>Security</b></td><td>Connect to an Azure DevOps account in another Azure Active Directory (Azure AD) tenant</td><td>You can connect your Data Factory instance to an Azure DevOps account in a different Azure AD tenant for source control purposes.<br><a href="cross-tenant-connections-to-azure-devops.md">Learn more</a></td></tr>
databox Data Box Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-deploy-ordered.md
Previously updated : 06/06/2022 Last updated : 07/08/2022 #Customer intent: As an IT admin, I need to be able to order Data Box to upload on-premises data from my server onto Azure.
databox Data Box Disk Deploy Ordered https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-ordered.md
Previously updated : 06/10/2021 Last updated : 07/10/2022 # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Azure DDoS Protection Standard, combined with application design best practices,
## Pricing
-DDoS protection plans have a fixed monthly charge of $2,944 per month which covers up to 100 public IP addresses. Protection for additional resources will cost an additional $30 per resource per month.
- Under a tenant, a single DDoS protection plan can be used across multiple subscriptions, so there is no need to create more than one DDoS protection plan. To learn about Azure DDoS Protection Standard pricing, see [Azure DDoS Protection Standard pricing](https://azure.microsoft.com/pricing/details/ddos-protection/).
dedicated-hsm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/overview.md
Many of our customers have a requirement for single tenancy of the cryptographic
Many customers require full administrative control and sole access to their device for administrative purposes. After a device is provisioned, only the customer has administrative or application-level access to the device. Microsoft has no administrative control after the customer accesses the device for the first time, at which point the customer changes the password. From that point, the customer is a true single-tenant with full administrative control and application-management capability. Microsoft does maintain monitor-level access (not an admin role) for telemetry via serial port connection. This access covers hardware monitors such as temperature, power supply health, and fan health.
-
+ The customer is free to disable this monitoring as needed. However, if they disable it, they won't receive proactive health alerts from Microsoft. ### High performance
Microsoft recognized a specific need for a unique set of customers. It is the on
## Is Azure Dedicated HSM right for you?
-Azure Dedicated HSM is a specialized service that addresses unique requirements for a specific type of large-scale organization. As a result, it's expected that the bulk of Azure customers will not fit the profile of use for this service. Many will find the Azure Key Vault service to be more appropriate and cost effective. To help you decide if it's a fit for your requirements, we've identified the following criteria.
+Azure Dedicated HSM is a specialized service that addresses unique requirements for a specific type of large-scale organization. As a result, it's expected that the bulk of Azure customers will not fit the profile of use for this service. Many will find the Azure Key Vault or Azure Managed HSM service to be more appropriate and cost effective. For a comparison of offerings, see [Azure key management services](../security/fundamentals/key-management.md#azure-key-management-services).
+
+To help you decide if Azure Dedicated HSM is a fit for your requirements, we've identified the following criteria.
### Best fit
Azure Dedicated HSM is most suitable for "lift-and-shift" scenarios that req
Azure Dedicated HSM is not a good fit for the following type of scenario: Microsoft cloud services that support encryption with customer-managed keys (such as Azure Information Protection, Azure Disk Encryption, Azure Data Lake Store, Azure Storage, Azure SQL Database, and Customer Key for Office 365) that are not integrated with Azure Dedicated HSM.
+> [!NOTE]
+> Customers must have an assigned Microsoft Account Manager and meet the monetary requirement of five million ($5M) USD or greater in overall committed Azure revenue annually to qualify for onboarding and use of Azure Dedicated HSM.
+ ### It depends Whether Azure Dedicated HSM will work for you depends on a potentially complex mix of requirements and compromises that you can or cannot make. An example is the FIPS 140-2 Level 3 requirement. This requirement is common, and Azure Dedicated HSM and a new single-tenant offering, [Azure Key Vault Managed HSM](../key-vault/managed-hsm/index.yml) are currently the only options for meeting it. If these mandated requirements aren't relevant, then often it's a choice between Azure Key Vault and Azure Dedicated HSM. Assess your requirements before making a decision.
defender-for-cloud Defender For Storage Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-storage-introduction.md
Title: Microsoft Defender for Storage - the benefits and features
+ Title: Microsoft Defender for Storage - the benefits and features
+ description: Learn about the benefits and features of Microsoft Defender for Storage. Previously updated : 06/29/2022 Last updated : 07/12/2022 # Overview of Microsoft Defender for Storage **Microsoft Defender for Storage** is an Azure-native layer of security intelligence that detects unusual and potentially harmful attempts to access or exploit your storage accounts. It uses advanced threat detection capabilities and [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) data to provide contextual security alerts. Those alerts also include steps to mitigate the detected threats and prevent future attacks.
-You can enable **Microsoft Defender for Storage** at either the subscription level (recommended) or the resource level.
+You can [enable Microsoft Defender for Storage](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud) at either the subscription level (recommended) or the resource level.
-Defender for Storage continually analyzes the telemetry stream generated by the Azure Blob Storage and Azure Files services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
+Defender for Storage continually analyzes the telemetry stream generated by the [Azure Blob Storage](https://azure.microsoft.com/services/storage/blobs/) and Azure Files services. When potentially malicious activities are detected, security alerts are generated. These alerts are displayed in Microsoft Defender for Cloud, together with the details of the suspicious activity along with the relevant investigation steps, remediation actions, and security recommendations.
-Analyzed telemetry of Azure Blob Storage includes operation types such as **Get Blob**, **Put Blob**, **Get Container ACL**, **List Blobs**, and **Get Blob Properties**. Examples of analyzed Azure Files operation types include **Get File**, **Create File**, **List Files**, **Get File Properties**, and **Put Range**.
+Analyzed telemetry of Azure Blob Storage includes operation types such as `Get Blob`, `Put Blob`, `Get Container ACL`, `List Blobs`, and `Get Blob Properties`. Examples of analyzed Azure Files operation types include `Get File`, `Create File`, `List Files`, `Get File Properties`, and `Put Range`.
Defender for Storage doesn't access the Storage account data and has no impact on its performance.
You can learn more by watching this video from the Defender for Cloud in the Fie
|Protected storage types:|[Blob Storage](https://azure.microsoft.com/services/storage/blobs/) (Standard/Premium StorageV2, Block Blobs) <br>[Azure Files](../storage/files/storage-files-introduction.md) (over REST API and SMB)<br>[Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md) (Standard/Premium accounts with hierarchical namespaces enabled)| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts| -- ## What are the benefits of Microsoft Defender for Storage? Defender for Storage provides: - **Azure-native security** - With 1-click enablement, Defender for Storage protects data stored in Azure Blob, Azure Files, and Data Lakes. As an Azure-native service, Defender for Storage provides centralized security across all data assets that are managed by Azure and is integrated with other Azure security services such as Microsoft Sentinel.+ - **Rich detection suite** - Powered by Microsoft Threat Intelligence, the detections in Defender for Storage cover the top storage threats such as unauthenticated access, compromised credentials, social engineering attacks, data exfiltration, privilege abuse, and malicious content.+ - **Response at scale** - Defender for Cloud's automation tools make it easier to prevent and respond to identified threats. Learn more in [Automate responses to Defender for Cloud triggers](workflow-automation.md). :::image type="content" source="media/defender-for-storage-introduction/defender-for-storage-high-level-overview.png" alt-text="High-level overview of the features of Microsoft Defender for Storage.":::
Security alerts are triggered for the following scenarios (typically from 1-2 ho
| **Public visibility** | Potential break-in attempts by scanning containers and pulling potentially sensitive data from publicly accessible containers. | | **Phishing campaigns** | When content that's hosted on Azure Storage is identified as part of a phishing attack that's impacting Microsoft 365 users. |
+You can check out [the full list of Microsoft Defender for Storage alerts](alerts-reference.md#alerts-azurestorage).
+ Alerts include details of the incident that triggered them, and recommendations on how to investigate and remediate threats. Alerts can be exported to Microsoft Sentinel or any other third-party SIEM or any other external tool. Learn more in [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md). > [!TIP]
Alerts include details of the incident that triggered them, and recommendations
> [!TIP] > When a file is suspected to contain malware, Defender for Cloud displays an alert and can optionally email the storage owner for approval to delete the suspicious file. To set up this automatic removal of files that hash reputation analysis indicates contain malware, deploy a [workflow automation to trigger on alerts that contain "Potential malware uploaded to a storage account"](https://techcommunity.microsoft.com/t5/azure-security-center/how-to-respond-to-potential-malware-uploaded-to-azure-storage/ba-p/1452005). -
-## Enable Defender for Storage
-
-When you enable this Defender plan on a subscription, all existing Azure Storage accounts will be protected and any storage resources added to that subscription in the future will also be automatically protected.
-
-You can enable Defender for Storage in any of several ways, described in [Set up Microsoft Defender for Cloud](../storage/common/azure-defender-storage-configure.md#set-up-microsoft-defender-for-cloud) in the Azure Storage documentation.
- ## FAQ - Microsoft Defender for Storage -- [Overview of Microsoft Defender for Storage](#overview-of-microsoft-defender-for-storage)
- - [Availability](#availability)
- - [What are the benefits of Microsoft Defender for Storage?](#what-are-the-benefits-of-microsoft-defender-for-storage)
- - [Security threats in cloud-based storage services](#security-threats-in-cloud-based-storage-services)
- - [What kind of alerts does Microsoft Defender for Storage provide?](#what-kind-of-alerts-does-microsoft-defender-for-storage-provide)
- - [Limitations of hash reputation analysis](#limitations-of-hash-reputation-analysis)
- - [Enable Defender for Storage](#enable-defender-for-storage)
- - [FAQ - Microsoft Defender for Storage](#faqmicrosoft-defender-for-storage)
- - [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)
- - [Can I exclude a specific Azure Storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)
- - [How do I configure automatic responses for security alerts?](#how-do-i-configure-automatic-responses-for-security-alerts)
- - [Next steps](#next-steps)
+- [How do I estimate charges at the account level?](#how-do-i-estimate-charges-at-the-account-level)
+- [Can I exclude a specific Azure Storage account from a protected subscription?](#can-i-exclude-a-specific-azure-storage-account-from-a-protected-subscription)
+- [How do I configure automatic responses for security alerts?](#how-do-i-configure-automatic-responses-for-security-alerts)
### How do I estimate charges at the account level?
For example, you can set up automation to open tasks or tickets for specific per
Use automation for automatic response - to define your own or use ready-made automation from the community (such as removing malicious files upon detection). For more solutions, visit the Microsoft community on GitHub. -- ## Next steps In this article, you learned about Microsoft Defender for Storage. > [!div class="nextstepaction"] > [Enable Defender for Storage](enable-enhanced-security.md)--- [The full list of Microsoft Defender for Storage alerts](alerts-reference.md#alerts-azurestorage)- [Stream alerts to a SIEM, SOAR, or IT Service Management solution](export-to-siem.md)- [Save Storage telemetry for investigation](../azure-monitor/essentials/diagnostic-settings.md)
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
If you're looking for the latest release notes, you'll find them in the [What's
### Changes to recommendations for managing endpoint protection solutions
-**Estimated date for change:** June 2022
+**Estimated date for change:** August 2022
In August 2021, we added two new **preview** recommendations to deploy and maintain the endpoint protection solutions on your machines. For full details, [see the release note](release-notes-archive.md#two-new-recommendations-for-managing-endpoint-protection-solutions-in-preview).
defender-for-iot Appliance Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/appliance-catalog/appliance-catalog-overview.md
+
+ Title: OT monitoring appliance reference overview - Microsoft Defender for IoT
+description: Provides an overview of all appliances available for use with Microsoft Defender for IoT OT sensors and on-premises management consoles.
Last updated : 07/10/2022+++
+# OT monitoring appliance reference
+
+This article provides an overview of the OT monitoring appliances supported with Microsoft Defender for IoT.
+
+Each article provides details about the appliance and any extra software installation procedures required. For more information, see [Install OT system software](../how-to-install-software.md) and [Update Defender for IoT OT monitoring software](../update-ot-software.md).
+
+## Corporate environments
+
+The following OT monitoring appliances are available for corporate deployments:
+
+- [HPE ProLiant DL360](hpe-proliant-dl360.md)
+
+## Large enterprises
+
+The following OT monitoring appliances are available for large enterprise deployments:
+
+- [HPE ProLiant DL20/DL20 Plus (4SFF)](hpe-proliant-dl20-plus-enterprise.md)
+
+## Production line
+
+The following OT monitoring appliances are available for production line deployments:
+
+- [HPE ProLiant DL20/DL20 Plus (NHP 2LFF) for SMB deployments](hpe-proliant-dl20-plus-smb.md)
+- [Dell Edge 5200 (Rugged)](dell-edge-5200.md)
+- [YS-techsystems YS-FIT2 (Rugged)](ys-techsystems-ys-fit2.md)
+
+## Next steps
+
+For more information, see:
+
+- [Which appliances do I need?](../ot-appliance-sizing.md)
+- [Pre-configured physical appliances for OT monitoring](../ot-pre-configured-appliances.md)
+- [OT monitoring with virtual appliances](../ot-virtual-appliances.md)
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Last updated 03/24/2022
The Microsoft Defender for IoT system is built to provide broad coverage and visibility from diverse data sources.
-The following image shows how data can stream into Defender for IoT from network sensors, Microsoft Defender for Endpoint, and third party sources to provide a unified view of IoT/OT security. Defender for IoT in the Azure portal provides asset inventories, vulnerability assessments, and continuous threat monitoring.
+The following image shows how data can stream into Defender for IoT from network sensors, Microsoft Defender for Endpoint, and partner sources to provide a unified view of IoT/OT security. Defender for IoT in the Azure portal provides asset inventories, vulnerability assessments, and continuous threat monitoring.
:::image type="content" source="media/architecture/system-architecture.png" alt-text="Diagram of the Defender for IoT system architecture." border="false":::
Defender for IoT systems include the following components:
Defender for IoT network sensors discover and continuously monitor network traffic on IoT and OT devices. -- Purpose-built for IoT and OT networks, sensors connect to a SPAN port or network TAP and can provide visibility into IoT and OT risks within minutes of connecting to the network.
+- The sensors are purpose-built for IoT and OT networks. They connect to a SPAN port or network TAP and can provide visibility into IoT and OT risks within minutes of connecting to the network.
-- Sensors use IoT and OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect IoT and OT threats, such as fileless malware, based on anomalous or unauthorized activity.
+- The sensors use IoT and OT-aware analytics engines and Layer-6 Deep Packet Inspection (DPI) to detect IoT and OT threats, such as fileless malware, based on anomalous or unauthorized activity.
Data collection, processing, analysis, and alerting takes place directly on the sensor. Running processes directly on the sensor can be ideal for locations with low bandwidth or high-latency connectivity because only the metadata is transferred on for management, either to the Azure portal or an on-premises management console.
In contrast, when working with locally managed sensors:
- View any data for a specific sensor from the sensor console. For a unified view of all information detected by several sensors, use an on-premises management console. For more information, see [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md). -- You must manually upload any threat intelligence packages
+- You must manually upload any threat intelligence packages.
- Sensor names can be updated in the sensor console.
Defender for IoT sensors apply analytics engines on ingested data, triggering al
Analytics engines provide machine learning and profile analytics, risk analysis, a device database and set of insights, threat intelligence, and behavioral analytics.
-For example, for OT networks, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industry control system (ICS) networks as deterministic sequences of states and transitions - using a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine creates a baseline for industrial control system (ICS) networks. Since many detection algorithms were build for IT, rather than OT, networks, an extra baseline for ICS networks helps to shorten the systems learning curve for new detections.
+For example, for OT networks, the **policy violation detection** engine alerts users of any deviation from baseline behavior, such as unauthorized use of specific function codes, access to specific objects, or changes to device configuration. The policy violation engine models industrial control system (ICS) networks as deterministic sequences of states and transitions, using a patented technique called Industrial Finite State Modeling (IFSM). The policy violation detection engine creates a baseline for ICS networks. Since many detection algorithms were built for IT, rather than OT, networks, an extra baseline for ICS networks helps to shorten the system's learning curve for new detections.
Specifically for OT networks, OT network sensors also provide the following analytics engines: -- **Protocol violation detection engine**. Identifies the use of packet structures and field values that violate ICS protocol specifications, for example: Modbus exception, and initiation of an obsolete function code alerts.
+- **Protocol violation detection engine**: Identifies the use of packet structures and field values that violate ICS protocol specifications. Examples include Modbus exception and initiation of an obsolete function code alerts.
-- **Industrial malware detection engine**. Identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton.
+- **Industrial malware detection engine**: Identifies behaviors that indicate the presence of known malware, such as Conficker, Black Energy, Havex, WannaCry, NotPetya, and Triton.
-- **Anomaly detection engine**. Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the platform requires a shorter learning period than generic mathematical approaches or analytics originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives. Anomaly detection engine alerts include Excessive SMB sign-in attempts, and PLC Scan Detected alerts.
+- **Anomaly detection engine**: Detects unusual machine-to-machine (M2M) communications and behaviors. By modeling ICS networks as deterministic sequences of states and transitions, the platform requires a shorter learning period than generic mathematical approaches or analytics originally developed for IT rather than OT. It also detects anomalies faster, with minimal false positives. Examples include Excessive SMB sign-in attempts and PLC Scan Detected alerts.
-- **Operational incident detection**. Detects operational issues such as intermittent connectivity that can indicate early signs of equipment failure. For example, the device might be disconnected (unresponsive), and Siemens S7 stop PLC command was sent alerts.
+- **Operational incident detection**: Detects operational issues, such as intermittent connectivity, that can indicate early signs of equipment failure. Examples include alerts for a device that might be disconnected (unresponsive) and for a Siemens S7 stop PLC command being sent.
## Management options
defender-for-iot Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/getting-started.md
You can use this procedure to set up a Defender for IoT trial. The trial provide
Before you start, make sure that you have: -- An Azure account. If you don't already have an Azure account, you can [create your Azure free account today](https://azure.microsoft.com/free/).
+- An Azure account. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).
- Access to an Azure subscription with the subscription **Owner** or **Contributor** role.
For more information, see:
- [Predeployment checklist](pre-deployment-checklist.md) - [Identify required appliances](how-to-identify-required-appliances.md)
-## Add a Defender for IoT plan to an Azure subscription
+## Add a Defender for IoT plan to an Azure subscription
-This procedure describes how to add a Defender for IoT plan to an Azure subscription.
+This procedure describes how to add a Defender for IoT plan to an Azure subscription.
**To add a Defender for IoT plan to an Azure subscription:**
This procedure describes how to add a Defender for IoT plan to an Azure subscrip
- **Subscription**. Select the subscription where you would like to add a plan. - Toggle on the **OT - Operational / ICS networks** and/or **EIoT - Enterprise IoT for corporate networks** options as needed for your network types. - **Price plan**. Select a monthly or annual commitment, or a [trial](how-to-manage-subscriptions.md#about-defender-for-iot-trials). Microsoft Defender for IoT provides a 30-day free trial for the first 1,000 committed devices for evaluation purposes.
-
+ For more information, see the [Microsoft Defender for IoT pricing page](https://azure.microsoft.com/pricing/details/iot-defender/). - **Committed sites** (for OT annual commitment only). Enter the number of committed sites. - **Number of devices**. If you selected a monthly or annual commitment, enter the number of devices you'll want to monitor. If you selected a trial, this section doesn't appear as you have a default of 1000 devices.
- :::image type="content" source="media/how-to-manage-subscriptions/onboard-plan.png" alt-text="Screenshot of adding a plan to your subscription.":::
+ :::image type="content" source="media/how-to-manage-subscriptions/onboard-plan.png" alt-text="Screenshot of adding a plan to your subscription." lightbox="media/how-to-manage-subscriptions/onboard-plan.png":::
1. Select **Next**.
-1. **Review & purchase**. Review the listed charges for your selections and **accept the terms and conditions**.
+1. **Review & purchase**. Review the listed charges for your selections and **accept the terms and conditions**.
1. Select **Purchase**.
-Your plan will be shown under the associated subscription in the **Plans and pricing** grid.
+Your plan will be shown under the associated subscription in the **Plans and pricing** grid.
For more information, see [Manage your subscriptions](how-to-manage-subscriptions.md).
defender-for-iot How To Install Software https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-install-software.md
Title: Install OT system software - Microsoft Defender for IoT
-description: Learn how to install a sensor and the on-premises management console for Microsoft Defender for IoT.
Previously updated : 01/06/2022
+ Title: Install OT network monitoring software - Microsoft Defender for IoT
+description: Learn how to install agentless monitoring software for an OT sensor and an on-premises management console for Microsoft Defender for IoT. Use this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
Last updated : 07/11/2022
-# Install OT system software
-
-This article describes how to install software for OT sensors and on-premises management consoles. You might need the procedures in this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
+# Install OT agentless monitoring software
+This article describes how to install agentless monitoring software for OT sensors and on-premises management consoles. You might need the procedures in this article if you're reinstalling software on a preconfigured appliance, or if you've chosen to install software on your own appliances.
## Pre-installation configuration
For more information, see:
Make sure that you've downloaded the relevant software file for the sensor or on-premises management console.
-You can obtain the latest versions of our OT sensor and on-premises management console software from the Azure portal, on the Defender for IoT > **Getting started** page. Select the **Sensor**, **On-premises management console**, or **Updates** tab and locate the software you need.
+You can obtain the latest versions of our OT sensor and on-premises management console software from the Azure portal. On the Defender for IoT > **Getting started** page, select the **Sensor**, **On-premises management console**, or **Updates** tab and locate the software you need.
Mount the ISO file using one of the following options:
This procedure describes how to install OT sensor software on a physical or virt
1. The sensor will reboot, and the **Package configuration** screen will appear. Press the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
-1. Select the monitor interface and press the **ENTER** key.
+1. Select the monitor interface. For example:
:::image type="content" source="media/tutorial-install-components/monitor-interface.png" alt-text="Screenshot of the select monitor interface screen.":::
-1. If one of the monitoring ports is for ERSPAN, select it, and press the **ENTER** key.
+1. If one of the monitoring ports is for ERSPAN, select it. For example:
:::image type="content" source="media/tutorial-install-components/erspan-monitor.png" alt-text="Screenshot of the select erspan monitor screen.":::
-1. Select the interface to be used as the management interface, and press the **ENTER** key.
+1. Select the interface to be used as the management interface. For example:
:::image type="content" source="media/tutorial-install-components/management-interface.png" alt-text="Screenshot of the management interface select screen.":::
-1. Enter the sensor's IP address, and press the **ENTER** key.
+1. Enter the sensor's IP address. For example:
:::image type="content" source="media/tutorial-install-components/sensor-ip-address.png" alt-text="Screenshot of the sensor IP address screen.":::
-1. Enter the path of the mounted logs folder. We recommend using the default path, and press the **ENTER** key.
+1. Enter the path of the mounted logs folder. We recommend using the default path. For example:
:::image type="content" source="media/tutorial-install-components/mounted-backups-path.png" alt-text="Screenshot of the mounted backup path screen.":::
-1. Enter the Subnet Mask IP address, and press the **ENTER** key.
+1. Enter the Subnet Mask IP address. For example:
-1. Enter the default gateway IP address, and press the **ENTER** key.
+1. Enter the default gateway IP address.
-1. Enter the DNS Server IP address, and press the **ENTER** key.
+1. Enter the DNS Server IP address.
-1. Enter the sensor hostname and press the **ENTER** key.
+1. Enter the sensor hostname. For example:
:::image type="content" source="media/tutorial-install-components/sensor-hostname.png" alt-text="Screenshot of the screen where you enter a hostname for your sensor.":::
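For reference only, a hypothetical set of values for these prompts might be a sensor IP address of `10.100.10.2`, a subnet mask of `255.255.255.0`, a default gateway of `10.100.10.1`, a DNS server of `10.100.10.10`, and a hostname such as `ot-sensor-01`. Substitute the values that match your own network plan.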
For information on how to find the physical port on your appliance, see [Find yo
### Add a secondary NIC (optional)
-You can enhance security to your on-premises management console by adding a secondary NIC dedicated for attached sensors within an IP address range. By adding a secondary NIC, the first will be dedicated for end-users, and the secondary will support the configuration of a gateway for routed networks.
+You can enhance the security of your on-premises management console by adding a secondary NIC dedicated to attached sensors within an IP address range. When you use a secondary NIC, the first is dedicated to end users, and the secondary supports the configuration of a gateway for routed networks.
Both NICs will support the user interface (UI). If you choose not to deploy a secondary NIC, all of the features will be available through the primary NIC.
This command will cause the light on the port to flash for the specified time pe
After you've finished installing OT monitoring software on your appliance, test your system to make sure that processes are running correctly. The same validation process applies to all appliance types.
-System health validations are supported via the sensor or on-premises management console UI or CLI, and is available for both the **Support** and **CyberX** users.
+System health validations are supported via the sensor or on-premises management console UI or CLI, and are available for both the **Support** and **CyberX** users.
After installing OT monitoring software, make sure to run the following tests:
After installing OT monitoring software, make sure to run the following tests:
For more information, see [Check system health](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md#check-system-health) in our sensor and on-premises management console troubleshooting article.
-## Access sensors from the on-premises management console
+## Configure tunneling access for sensors through the on-premises management console
+
+Enhance system security by preventing direct user access to the sensor.
+
+Instead of direct access, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This technique narrows the possibility of unauthorized access to the network environment beyond the sensor. The user's experience when signing in to the sensor remains the same.
+
+When tunneling access is configured, users use the following URL syntax to access their sensor consoles: `https://<on-premises management console address>/<sensor address>/<page URL>`
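For example, with a hypothetical on-premises management console at `10.1.0.5` and a sensor at `10.1.0.20`, a user would browse to a URL such as `https://10.1.0.5/10.1.0.20/`. The addresses here are placeholders only.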
+
+For example, the following image shows a sample architecture where users access the sensor consoles via the on-premises management console.
-You can enhance system security by preventing direct user access to the sensor. Instead, use proxy tunneling to let users access the sensor from the on-premises management console with a single firewall rule. This technique narrows the possibility of unauthorized access to the network environment beyond the sensor. The user's experience when signing in to the sensor remains the same.
+The connection between the IT firewall, the on-premises management console, and the OT firewall is handled by a reverse proxy with URL rewrites. The connection between the OT firewall and the sensors is handled by reverse SSH tunnels.
-**To enable tunneling**:
+**To enable tunneling access for sensors**:
1. Sign in to the on-premises management console's CLI with the **CyberX** or the **Support** user credentials.
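A common way to reach that CLI is over SSH from an administrative workstation. The following is a minimal sketch only; it assumes that SSH access to the management console appliance is permitted in your environment and uses a placeholder address:

```bash
# Open an SSH session to the on-premises management console (placeholder IP address),
# then continue the tunneling configuration from the CLI prompt.
ssh support@192.168.0.10
```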
defender-for-iot How To Manage Device Inventory For Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-device-inventory-for-organizations.md
If you are under the impression that certain devices aren't actively communicati
If you have devices no longer in use, delete them from the device inventory so that they're no longer connected to Defender for IoT.
-Devices must be inactive for 14 days or more in order for you to be able to delete them.
- **To delete a device**: In the **Device inventory** page, select the device you want to delete, and then select **Delete** :::image type="icon" source="media/how-to-manage-device-inventory-on-the-cloud/delete-device.png" border="false"::: in the toolbar at the top of the page.
defender-for-iot How To Set Up Your Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-set-up-your-network.md
This article describes how to set up your OT network to work with Microsoft Defender for IoT components, including the OT network sensors, the Azure portal, and an optional on-premises management console.
-OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for a deep visibility into OT/ICS/IoT risks. Sensors carry out data collection, analysis and alerting on-site, making them ideal for locations with low bandwidth or high latency.
+OT network sensors use agentless, patented technology to discover, learn, and continuously monitor network devices for a deep visibility into OT/ICS/IoT risks. Sensors carry out data collection, analysis, and alerting on-site, making them ideal for locations with low bandwidth or high latency.
This article is intended for personnel experienced in operating and managing OT and IoT networks, such as automation engineers, plant managers, OT network infrastructure service providers, cybersecurity teams, CISOs, and CIOs.
For assistance or support, contact [Microsoft Support](https://support.microsoft
## Prerequisites
-Before performing the procedures in this article, make sure that you understand your own network architecture and how you'll connect to Defender for IoT. For more information, see:
+Before performing the procedures in this article, make sure you understand your own network architecture and how you'll connect to Defender for IoT. For more information, see:
- [Microsoft Defender for IoT system architecture](architecture.md) - [Sensor connection methods](architecture-connections.md)
Before performing the procedures in this article, make sure that you understand
## On-site deployment tasks
-Perform the steps in this section before deploying Defender for IoT on your network.
+Perform the steps in this section before deploying Defender for IoT on your network.
Make sure to perform each step methodologically, requesting the information and reviewing the data you receive. Prepare and configure your site and then validate your configuration.
Record the following site information:
- Configuration workstation. -- SSL certificates (optional but recommended).
+- TLS/SSL certificates (optional but recommended).
- SMTP authentication (optional). To use the SMTP server with authentication, prepare the credentials required for your server.
Record the following site information:
- Make sure that you can connect to the sensor management interface. -- Make sure that you have a supported browser. Supported browsers include terminal software, such as PuTTY, or the latest versions of Microsoft Edge, Chrome, Firefox, or Safari (Mac only).
+- Make sure that you have terminal software (like PuTTY) or a supported browser. Supported browsers include the latest versions of Microsoft Edge, Chrome, Firefox, or Safari (Mac only).
For more information, see [recommended browsers for the Azure portal](../../azure-portal/azure-portal-supported-browsers-devices.md#recommended-browsers). -- <a name="networking-requirements"></a>Make sure the required firewall rules are open on the workstation. Verify that your organizational security policy allows access as required. For more information, see [Networking requirements](#networking-requirements).-
+- Make sure the required firewall rules are open on the workstation. Verify that your organizational security policy allows access as required. For more information, see [Networking requirements](#networking-requirements).
### Set up certificates
-After you've installed Defender for IoT sensor and/or on-premises management console software, a local, self-signed certificate is generated
-and used to access the sensor web application.
+After you've installed the Defender for IoT sensor or on-premises management console software, a local, self-signed certificate is generated and used to access the sensor web application.
-The first time you sign in to Defender for IoT, administrator users are prompted to provide an SSL/TLS certificate. Optional certificate validation is enabled by default.
+The first time they sign in to Defender for IoT, administrator users are prompted to provide an SSL/TLS certificate. Optional certificate validation is enabled by default.
We recommend having your certificates ready before you start your deployment. For more information, see [Defender for IoT installation](how-to-install-software.md) and [About Certificates](how-to-deploy-certificates.md). - ### Plan rack installation **To plan your rack installation**:
For example:
## Networking requirements Use the following tables to ensure that required firewalls are open on your workstation and verify that your organization security policy allows required access.+ ### User access to the sensor and management console | Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination |
Use the following tables to ensure that required firewalls are open on your work
| Protocol | Transport | In/Out | Port | Used | Purpose | Source | Destination | |--|--|--|--|--|--|--|--| | NTP | UDP | In/Out | 123 | Time Sync | Connects the NTP to the on-premises management console | Sensor | On-premises management console |
-| SSL | TCP | In/Out | 443 | Give the sensor access to the on-premises management console. | The connection between the sensor, and the on-premises management console | Sensor | On-premises management console |
+| TLS/SSL | TCP | In/Out | 443 | Give the sensor access to the on-premises management console. | The connection between the sensor and the on-premises management console | Sensor | On-premises management console |
### Other firewall rules for external services (optional)
Open these ports to allow extra services for Defender for IoT.
| Proxy | TCP/UDP | In/Out | 443 | Proxy | To connect the sensor to a proxy server | On-premises management console and Sensor | Proxy server | | Syslog | UDP | Out | 514 | LEEF | The logs that are sent from the on-premises management console to Syslog server | On-premises management console and Sensor | Syslog server | | LDAPS | TCP | In/Out | 636 | Active Directory | Allows Active Directory management of users that have access, to sign in to the system | On-premises management console and Sensor | LDAPS server |
-| Tunneling | TCP | In | 9000 </br></br> in addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console </br></br> Port 22 from the sensor to the on-premises management console | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
+| Tunneling | TCP | In | 9000 </br></br> In addition to port 443 </br></br> Allows access from the sensor, or end user, to the on-premises management console </br></br> Port 22 from the sensor to the on-premises management console | Monitoring | Tunneling | Endpoint, Sensor | On-premises management console |
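Before deployment, you can spot-check that these rules are actually open by probing the relevant ports from the source machine listed in each row. The following is a minimal sketch using the generic `nc` (netcat) utility with placeholder addresses; substitute your own sensor and on-premises management console IP addresses:

```bash
# TLS/SSL access from the sensor network to the on-premises management console (TCP 443).
nc -zv 192.168.0.10 443

# Tunneling access from an end-user workstation (TCP 9000), if tunneling is used.
nc -zv 192.168.0.10 9000

# NTP time sync (UDP 123); UDP probes are best-effort and may report success even when filtered.
nc -zuv 192.168.0.10 123
```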
## Choose a cloud connection method
This section provides troubleshooting for common issues when preparing your netw
### Can't connect by using a web interface
-1. Verify that the computer that you're trying to connect is on the same network as the appliance.
+1. Verify that the computer you're trying to connect from is on the same network as the appliance.
2. Verify that the GUI network is connected to the management port on the sensor.
This section provides troubleshooting for common issues when preparing your netw
1. To apply the settings, select **Y**.
-5. After you restart, connect with user support and use the **network list** command to verify that the parameters were changed.
+5. After you restart, connect with the **support** user, and use the **network list** command to verify that the parameters were changed.
6. Try to ping and connect from the GUI again.
defender-for-iot Ot Appliance Sizing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-appliance-sizing.md
You can use both physical or virtual appliances.
Use the following hardware profiles for high bandwidth corporate IT/OT mixed networks: - |Hardware profile |Max throughput |Max monitored Assets |Deployment | ||||| |C5600 | 3 Gbps | 12 K |Physical / Virtual |
Use the following hardware profiles for enterprise monitoring at the site level
|E1000 |1 Gbps |10K |Physical / Virtual | |E500 |1 Gbps |10K |Physical / Virtual | - ## Production line monitoring Use the following hardware profiles for production line monitoring:
Then, use any of the following procedures to continue:
- [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console) - [Install software](how-to-install-software.md)
-Reference articles for OT monitoring appliances also include installation procedures in case you need to install software on your own appliances, or re-install software on preconfigured appliances.
+Reference articles for OT monitoring appliances also include installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances.
defender-for-iot Ot Pre Configured Appliances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-pre-configured-appliances.md
Title: Preconfigured appliances for OT network monitoring description: Learn about the appliances available for use with Microsoft Defender for IoT OT sensors and on-premises management consoles. Previously updated : 04/07/2022 Last updated : 07/11/2022
Use the links in the tables below to jump to articles with more details about ea
Microsoft has partnered with [Arrow Electronics](https://www.arrow.com/) to provide pre-configured sensors. To purchase a pre-configured sensor, contact Arrow at: [hardware.sales@arrow.com](mailto:hardware.sales@arrow.com).
-For more information, see [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors)
+For more information, see [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors).
-> [!TIP]
-> Pre-configured physical appliances have been validated for Defender for IoT OT system monitoring, and have the following advantages over installing your own software:
->
->- **Performance** over the total assets monitored
->- **Compatibility** with new Defender for IoT releases, with validations for upgrades and driver support
->- **Stability**, validated physical appliances undergo traffic monitoring and packet loss tests
->- **In-lab experience**, Microsoft support teams train using validated physical appliances and have a working knowledge of the hardware
->- **Availability**, components are selected to offer long-term worldwide availability
->
+
+## Advantages of preconfigured appliances
+
+Pre-configured physical appliances have been validated for Defender for IoT OT system monitoring, and have the following advantages over installing your own software:
+
+- **Performance** over the total assets monitored
+- **Compatibility** with new Defender for IoT releases, with validations for upgrades and driver support
+- **Stability**, validated physical appliances undergo traffic monitoring and packet loss tests
+- **In-lab experience**, Microsoft support teams train using validated physical appliances and have a working knowledge of the hardware
+- **Availability**, components are selected to offer long-term worldwide availability
## Appliances for OT network sensors
-You can order any of the following preconfigured appliances for monitoring your OT networks:
+You can [order](mailto:hardware.sales@arrow.com) any of the following preconfigured appliances for monitoring your OT networks:
|Hardware profile |Appliance |Performance / Monitoring |Physical specifications | |||||
-|C5600 | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3Gbp/s <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
-|E1800, E1000, E500 | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbp/s<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
-|L500 | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200Mbp/s<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
-|L100 | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10Mbp/s <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
-|L64 | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 60Mbp/s<br>**Max devices**: 1,000 <br> 8 Cores/32G RAM/100GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
+|**C5600** | [HPE ProLiant DL360](appliance-catalog/hpe-proliant-dl360.md) | **Max bandwidth**: 3 Gbps <br>**Max devices**: 12,000 <br> 32 Cores/32G RAM/5.6TB | **Mounting**: 1U <br>**Ports**: 15x RJ45 or 8x SFP (OPT) |
+|**E1800, E1000, E500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | **Max bandwidth**: 1 Gbps<br>**Max devices**: 10,000 <br> 8 Cores/32G RAM/1.8TB | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**L500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-smb.md) <br> (NHP 2LFF) | **Max bandwidth**: 200 Mbps<br>**Max devices**: 1,000 <br> 4 Cores/8G RAM/500GB | **Mounting**: 1U<br>**Ports**: 4x RJ45 |
+|**L100** | [YS-Techsystems YS-FIT2](appliance-catalog/ys-techsystems-ys-fit2.md) <br>(Rugged MIL-STD-810G) | **Max bandwidth**: 10 Mbps <br>**Max devices**: 100 <br> 4 Cores/8G RAM/128GB | **Mounting**: DIN/VESA<br>**Ports**: 2x RJ45 |
+|**L64** | [Dell Edge 5200](appliance-catalog/dell-edge-5200.md) <br> (Rugged MIL-STD-810G) | **Max bandwidth**: 60 Mbps<br>**Max devices**: 1,000 <br> 8 Cores/32G RAM/100GB | **Mounting**: Wall Mount<br>**Ports**: 3x RJ45 |
> [!NOTE]
You can purchase any of the following appliances for your OT on-premises managem
|Hardware profile |Appliance |Max sensors |Physical specifications | |||||
-|E1800, E1000, E500 | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
+|**E1800, E1000, E500** | [HPE ProLiant DL20/DL20 Plus](appliance-catalog/hpe-proliant-dl20-plus-enterprise.md) <br> (4SFF) | 300 | **Mounting**: 1U <br>**Ports**: 8x RJ45 or 6x SFP (OPT) |
## Next steps
-Continue understanding system requirements for physical or virtual appliances. For more information, see [Which appliances do I need?](ot-appliance-sizing.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
+Continue understanding system requirements for physical or virtual appliances.
+
+For more information, see [Which appliances do I need?](ot-appliance-sizing.md) and [OT monitoring with virtual appliances](ot-virtual-appliances.md).
-Use any of the following procedures to continue:
+Then, use any of the following procedures to continue:
- [Purchase sensors or download software for sensors](onboard-sensors.md#purchase-sensors-or-download-software-for-sensors) - [Download software for an on-premises management console](how-to-manage-the-on-premises-management-console.md#download-software-for-the-on-premises-management-console) - [Install software](how-to-install-software.md)
-Our OT monitoring appliance reference articles also include installation procedures in case you need to install software on your own appliances, or re-install software on preconfigured appliances.
+Our OT monitoring appliance reference articles also include extra installation procedures in case you need to install software on your own appliances, or reinstall software on preconfigured appliances. For more information, see [OT monitoring appliance reference](appliance-catalog/appliance-catalog-overview.md).
defender-for-iot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/overview.md
# Welcome to Microsoft Defender for IoT for organizations - The Internet of Things (IoT) supports billions of connected devices that use operational technology (OT) networks. IoT/OT devices and networks are often designed without prioritizing security, and therefore can't be protected by traditional systems. With each new wave of innovation, the risk to IoT devices and OT networks grows, and the possible attack surface expands.
-Microsoft Defender for IoT is a unified security solution for identifying IoT and OT devices, vulnerabilities, and threats and managing them through a central interface. This set of documentation describes how end-user organizations can secure their entire IoT/OT environment, including protecting existing devices or building security into new IoT innovations.
+Microsoft Defender for IoT is a unified security solution for identifying IoT and OT devices, vulnerabilities, and threats. With Defender for IoT, you can manage them through a central interface. This set of documentation describes how end-user organizations can secure their entire IoT/OT environment, including protecting existing devices or building security into new IoT innovations.
:::image type="content" source="media/overview/end-to-end-coverage.png" alt-text="Diagram showing an example of Defender for IoT's end-to-end coverage solution.":::
Microsoft Defender for IoT is a unified security solution for identifying IoT an
Many legacy IoT and OT devices don't support agents, and can therefore remain unpatched, misconfigured, and invisible to IT teams. These devices become soft targets for threat actors who want to pivot deeper into corporate networks.
-Agentless monitoring in Defender for IoT provides visibility and security into networks that can't be covered by traditional network security monitoring tools and may lack understanding of specialized protocols, devices, and relevant machine-to-machine (M2M) behaviors.
+Traditional network security monitoring tools may lack understanding of networks containing specialized protocols, devices, and relevant machine-to-machine (M2M) behaviors. Agentless monitoring in Defender for IoT provides visibility and security into those networks.
- **Discover IoT/OT devices** in your network, their details, and how they communicate. Gather data from network sensors, Microsoft Defender for end-point, and third-party sources. - **Assess risks and manage vulnerabilities** using machine learning, threat intelligence, and behavioral analytics. For example:
- - Identify unpatched devices, open ports, unauthorized applications, unauthorized connections, changes to device configurations, PLC code, firmware, and more.
+ - Identify unpatched devices, open ports, unauthorized applications, unauthorized connections, changes to device configurations, PLC code, firmware, and more.
- - Run searches in historical traffic across all relevant dimensions and protocols. Access full-fidelity PCAPs to drill down further.
+ - Run searches in historical traffic across all relevant dimensions and protocols. Access full-fidelity PCAPs to drill down further.
- - Detect advanced threats that you may have missed by static IOCs, such as zero-day malware, fileless malware, and living-off-the-land tactics.
+ - Detect advanced threats that you may have missed by static IOCs, such as zero-day malware, fileless malware, and living-off-the-land tactics.
-- **Respond to threats** by integrating with Microsoft services, such as Microsoft Sentinel, third-party systems, and APIs. Use advanced integrations for security information and event management (SIEM), security operations and response (SOAR), extended detection and response (XDR) services, and more.
+- **Respond to threats** by integrating with Microsoft services such as Microsoft Sentinel, with non-Microsoft systems, and with APIs. Use advanced integrations for security information and event management (SIEM), security orchestration, automation, and response (SOAR), extended detection and response (XDR) services, and more.
A centralized user experience lets the security team visualize and secure all their IT, IoT, and OT devices regardless of where the devices are located. - ## Support for cloud, on-premises, and hybrid networks Defender for IoT can support various network configurations: -- **Cloud**. Extend your journey to the cloud by having your data delivered to Azure, where you can visualize data from a central location and also share data with other Microsoft services for end-to-end security monitoring and response.
+- **Cloud**: Extend your journey to the cloud by delivering your data to Azure. There you can visualize data from a central location. That data can be shared with other Microsoft services for end-to-end security monitoring and response.
-- **On-premises**. For example, in air-gapped environments, you might want to keep all of your data fully on-premises. Use the data provided by each sensor and the central visualizations provided by an on-premises management console to ensure security on your network.
+- **On-premises**: For example, in air-gapped environments, you might want to keep all of your data fully on-premises. Use the data provided by each sensor and the central visualizations provided by an on-premises management console to ensure security on your network.
-- **Hybrid**. If you have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises only, set up your system in a flexible and scalable configuration that fits your needs.
+- **Hybrid**: You may have hybrid network requirements where you can deliver some data to the cloud and other data must remain on-premises. In this case, set up your system in a flexible and scalable configuration that fits your needs.
Regardless of configuration, data detected by a specific sensor is also always available in the sensor console.
Regardless of configuration, data detected by a specific sensor is also always a
IoT and ICS devices can be secured using both embedded protocols and proprietary, custom, or non-standard protocols. Use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins that decode network traffic, regardless of protocol type.
-For example, in an environment running MODBUS, you might want to generate an alert when the sensor detects a write command to a memory register on a specific IP address and Ethernet destination. Or you might want to generate an alert when any access is performed to a specific IP address. Alerts are triggered when Horizon alert rule conditions are met.
+For example, in an environment running MODBUS, you can generate an alert when the sensor detects a write command to a memory register on a specific IP address and Ethernet destination. Or you might want to generate an alert when any access is performed to a specific IP address. Alerts are triggered when Horizon alert rule conditions are met.
Use custom, condition-based alert triggering and messaging to help pinpoint specific network activity and effectively update your security, IT, and operational teams. Contact [ms-horizon-support@microsoft.com](mailto:ms-horizon-support@microsoft.com) for details about working with the Open Development Environment (ODE) SDK and creating protocol plugins. ## Protect enterprise networks
-<a name="enterprise"></a>Microsoft Defender for IoT can protect IoT and OT devices, whether they're connected to IT, OT, or dedicated IoT networks.
+Microsoft Defender for IoT can protect IoT and OT devices, whether they're connected to IT, OT, or dedicated IoT networks.
Enterprise IoT network protection extends agentless features beyond operational environments, providing coverage for all IoT devices in your environment. For example, an enterprise IoT environment may include printers, cameras, and purpose-built, proprietary, devices. When you expand Microsoft Defender for IoT into the enterprise network, you can apply Microsoft 365 Defender's features for asset discovery and use Microsoft Defender for Endpoint for a single, integrated package that can secure all of your IoT/OT infrastructure.
-Use Microsoft Defender for IoT's sensors as extra data sources, providing visibility in areas of your organization's network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both the IoT-to-IoT and the IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any enterprise IoT devices discovered on the network by either service.
+Use Microsoft Defender for IoT's sensors as extra data sources. They provide visibility in areas of your organization's network where Microsoft Defender for Endpoint isn't deployed, and when employees are accessing information remotely. Microsoft Defender for IoT's sensors provide visibility into both the IoT-to-IoT and the IoT-to-internet communications. Integrating Defender for IoT and Defender for Endpoint synchronizes any enterprise IoT devices discovered on the network by either service.
For more information, see the [Microsoft 365 Defender](/microsoft-365/security/defender/microsoft-365-defender) and [Microsoft Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint).
defender-for-iot Tutorial Getting Started Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-getting-started-eiot-sensor.md
Title: Get started with Enterprise IoT - Microsoft Defender for IoT description: In this tutorial, you'll learn how to onboard to Microsoft Defender for IoT with an Enterprise IoT deployment Previously updated : 12/12/2021 Last updated : 07/11/2022 # Tutorial: Get started with Enterprise IoT monitoring
-This tutorial will help you learn how to get started with your Enterprise IoT monitoring deployment.
+This tutorial describes how to get started with your Enterprise IoT monitoring deployment with Microsoft Defender for IoT.
-Microsoft Defender for IoT has extended the agentless capabilities to go beyond operational environments, and advance into the realm of enterprise environments. Defender for IoT supports the entire breadth of IoT devices in your environment, including everything from corporate printers, cameras, to purpose-built devices, proprietary, and unique devices.
+Defender for IoT supports the entire breadth of IoT devices in your environment, including everything from corporate printers and cameras, to purpose-built, proprietary, and unique devices.
-You can extend your analytics capabilities to view alerts, vulnerabilities and recommendations for your enterprise devices with the Microsoft Defender for Endpoint integration. For more information, see the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+Integrate with [Microsoft Defender for Endpoint](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration) for analytics features that include alerts, vulnerabilities, and recommendations for your enterprise devices.
In this tutorial, you learn how to: > [!div class="checklist"] > * Set up a server or Virtual Machine (VM)
-> * Prepare your environment
-> * Set up an Enterprise IoT sensor
-> * Install the sensor
+> * Prepare your networking requirements
+> * Set up an Enterprise IoT sensor in the Azure portal
+> * Install the sensor software
> * Validate your setup > * View detected Enterprise IoT devices in the Azure portal > * View devices, alerts, vulnerabilities, and recommendations in Defender for Endpoint
In this tutorial, you learn how to:
Before you start, make sure that you have: -- A Defender for IoT plan added to your Azure subscription. You can add a plan from Defender for IoT in the Azure portal, or from Defender for Endpoint. If you already have a subscription that has Defender for IoT onboarded for OT environments, you'll need to edit the plan to add Enterprise IoT.
-For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md), [Edit a plan](how-to-manage-subscriptions.md#edit-a-plan), or the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+- A Defender for IoT plan added to your Azure subscription.
-- The Azure permissions, as listed in [Quickstart: Getting Started with Defender for IoT](getting-started.md#permissions).
+ You can add a plan from Defender for IoT in the Azure portal, or from Defender for Endpoint. If you already have a subscription that has Defender for IoT onboarded for OT environments, you'll need to edit the plan to add Enterprise IoT.
+
+ For more information, see [Quickstart: Get started with Defender for IoT](getting-started.md), [Edit a plan](how-to-manage-subscriptions.md#edit-a-plan), or the [Defender for Endpoint documentation](/microsoft-365/security/defender-endpoint/enable-microsoft-defender-for-iot-integration).
+
+- Required Azure permissions, as listed in [Quickstart: Getting Started with Defender for IoT](getting-started.md#permissions).
## Set up a server or Virtual Machine (VM)
-Before you deploy your Enterprise IoT sensor, you'll need to configure your server, or VM, and connect a Network Interface Card (NIC) to a switch monitoring (SPAN) port.
+Before you deploy your Enterprise IoT sensor, you'll need to configure your server or VM, and connect a Network Interface Card (NIC) to a switch monitoring (SPAN) port.
-**To set up a server, or VM**:
+**To set up a server or VM**:
1. Ensure that your resources are set to one of the following specifications: | Tier | Requirements | |--|--|
- | **Minimum** | To support up to 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16 GB RAM of DDR4 or better<br>- 250 GB HDD |
- | **Recommended** | To support up to 15 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32 GB RAM of DDR4 or better<br>- 500 GB HDD |
-
+ | **Minimum** | To support up to 1 Gbps: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16-GB RAM of DDR4 or better<br>- 250 GB HDD |
+ | **Recommended** | To support up to 15 Gbps: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32-GB RAM of DDR4 or better<br>- 500 GB HDD |
+ Make sure that your server or VM also has:
-
- * Two network adapters
- * Ubuntu 18.04 operating system.
-1. Connect a NIC to a switch.
+ - Two network adapters
+ - Ubuntu 18.04 operating system
- * **Physical device** - connect a monitoring network interface (NIC) to a switch monitoring (SPAN) port.
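If you want to confirm that the machine you prepared actually meets the tier you chose, you can check its resources with standard Ubuntu tools. This is an optional sketch and isn't part of the original procedure:

```bash
# CPU count and model
lscpu | grep -E '^CPU\(s\)|Model name'

# Installed memory
free -h

# Available disk space on the root volume
df -h /

# Ubuntu release (this tutorial assumes 18.04)
lsb_release -a

# Network adapters present (expecting at least two)
ip -br link
```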
+1. Connect a NIC to a switch as follows:
- * **VM** - Connect a vNIC to a vSwitch in promiscuous mode.
+ - **Physical device** - Connect a monitoring network interface (NIC) to a switch monitoring (SPAN) port.
- * For a VM, run the following command to enable the network adapter in promiscuous mode.
+ - **VM** - Connect a vNIC to a vSwitch in promiscuous mode.
- ```bash
- ifconfig <monitoring port> up promisc
- ```
+ For a VM, run the following command to enable the network adapter in promiscuous mode.
-1. Validate incoming traffic to the monitoring port with the following command:
+ ```bash
+ ifconfig <monitoring port> up promisc
+ ```
+
+1. Validate incoming traffic to the monitoring port. Run:
```bash ifconfig <monitoring interface>
Before you deploy your Enterprise IoT sensor, you'll need to configure your serv
If the number of RX packets increases each time, the interface is receiving incoming traffic. Repeat this step for each interface you have.
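If you'd rather watch the counters than rerun the command by hand, one option (an optional extra, not part of the original steps) is to use the standard `ip` and `watch` utilities:

```bash
# Confirm that the PROMISC flag is set on the monitoring interface (VM deployments).
ip link show <monitoring port> | grep -i promisc

# Refresh interface statistics every 5 seconds; the RX counters should keep climbing.
watch -n 5 "ip -s link show <monitoring port>"
```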
-## Prepare your environment
+## Prepare networking requirements
-The environment will now have to be prepared.
+This procedure describes how to prepare the networking requirements on your server or VM for working with Defender for IoT on Enterprise IoT networks.
-**To prepare the environment**:
+**To prepare your networking requirements**:
1. Open the following ports in your firewall:
- * HTTPS - 443 TCP
-
- * DNS - 53 TCP
-
-1. Hostnames for Azure resources:
+ - **HTTPS** - 443 TCP
+ - **DNS** - 53 TCP
- * **EventHub**: *.servicebus.windows.net
+1. Make sure that your server or VM can access the cloud using HTTP on port 443 to the following Microsoft domains:
- * **Storage**: *.blob.core.windows.net
+ - **EventHub**: `*.servicebus.windows.net`
+ - **Storage**: `*.blob.core.windows.net`
+ - **Download Center**: `download.microsoft.com`
+ - **IoT Hub**: `*.azure-devices.net`
- * **Download Center**: download.microsoft.com
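As an optional spot check, not part of the original procedure, you can verify outbound HTTPS reachability to these domains from the server or VM with `curl`. The wildcard domains need a concrete hostname from your own environment, so the names below are placeholders:

```bash
# Fixed hostname: any HTTP status code in the response means the connection itself succeeded.
curl -s -o /dev/null -w "download.microsoft.com: %{http_code}\n" https://download.microsoft.com

# Wildcard domains (*.servicebus.windows.net, *.blob.core.windows.net, *.azure-devices.net)
# require a hostname from your environment, for example an Event Hubs namespace:
curl -s -o /dev/null -w "Event Hubs endpoint: %{http_code}\n" https://<your-namespace>.servicebus.windows.net
```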
+### (Optional) Download Azure public IP ranges
- * **IoT Hub**: *.azure-devices.net
-
-You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure resources that are specified above, along with their region.
+You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so that your firewall allows traffic to the Azure services specified above, in the relevant regions.
> [!Note]
-> The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. Please download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
+> The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. To use this option, download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
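The downloaded file is a JSON document of service tags, each with a list of address prefixes. As a hedged illustration only, and assuming the standard service tags schema and tag names such as `EventHub`, `Storage`, and `AzureIoTHub` (confirm these against the file you actually download), you could extract the relevant ranges with `jq`:

```bash
# List the address prefixes for the Event Hubs service tag from the downloaded file.
jq -r '.values[] | select(.name == "EventHub") | .properties.addressPrefixes[]' ServiceTags_Public.json

# Same pattern for IoT Hub; region-scoped tags such as "Storage.WestEurope" also appear in the file.
jq -r '.values[] | select(.name == "AzureIoTHub") | .properties.addressPrefixes[]' ServiceTags_Public.json
```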
## Set up an Enterprise IoT sensor
-A sensor is needed to discover and continuously monitor Enterprise IoT devices. The sensor will use the Enterprise IoT network and endpoint sensors to gain comprehensive visibility.
+You'll need a Defender for IoT network sensor to discover and continuously monitor Enterprise IoT devices. Defender for IoT sensors use the Enterprise IoT network and endpoint sensors to gain comprehensive visibility.
-**Prerequisites**: Make sure that you've completed [Set up a server or Virtual Machine (VM)](#set-up-a-server-or-virtual-machine-vm) and [Prepare your environment](#prepare-your-environment), including verifying that you have the listed required resources.
+**Prerequisites**: Make sure that you've completed [Set up a server or Virtual Machine (VM)](#set-up-a-server-or-virtual-machine-vm) and [Prepare networking requirements](#prepare-networking-requirements), including verifying that you have the listed required resources.
**To set up an Enterprise IoT sensor**:
A sensor is needed to discover and continuously monitor Enterprise IoT devices.
1. Select **Set up Enterprise IoT Security**.
- :::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor.png" alt-text="On the Getting Started page select Onboard sensor.":::
+ :::image type="content" source="media/tutorial-get-started-eiot/onboard-sensor.png" alt-text="Screenshot of the Getting started page for Enterprise IoT security.":::
1. In the **Sensor name** field, enter a meaningful name for your sensor.
A sensor is needed to discover and continuously monitor Enterprise IoT devices.
:::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
-1. Copy the command to a safe location, and continue [below](#install-the-sensor).
+1. Copy the command to a safe location, and continue by [installing the sensor](#install-the-sensor) software.
## Install the sensor
Run the command that you received and saved when you registered the Enterprise I
1. Run the command that you saved from [setting up an Enterprise IoT sensor](#set-up-an-enterprise-iot-sensor).
-1. When the command is complete, the installation wizard will appear.
-
-1. `What is the name of the monitored interface?` Use the space bar to select an interface.
-
- :::image type="content" source="media/tutorial-get-started-eiot/monitored-interface.png" alt-text="Screenshot of the select monitor interface selection scree.":::
-
-1. Select **Ok**.
-
-1. `Setup proxy server`.
-
- * If no, select **No**.
+ The installation wizard appears when the command process completes:
- * If yes, select **Yes**.
+ - In the **What is the name of the monitored interface?** screen, use the SPACEBAR to select the interfaces you want to monitor with your sensor, and then select OK.
-1. (Optional) If you're setting up a proxy server.
+ - In the **Set up proxy server** screen, select whether to set up a proxy server for your sensor (**Yes** / **No**).
- 1. Enter the proxy server host, and select **Ok**.
+ (Optional) If you're setting up a proxy server, define the following values, selecting **Ok** after each option:
- 1. Enter the proxy server port, and select **Ok**.
+ - Proxy server host
+ - Proxy server port
+ - Proxy server username
+ - Proxy server password
- 1. Enter the proxy server username, and select **Ok**.
+The installation process completes.
- 1. Enter the server password, and select **Ok**.
+## Validate your setup
-The installation will now finish.
+Wait 1 minute after your sensor installation has completed before starting to validate your sensor setup.
-## Validate your setup
+**To validate the sensor setup**:
-1. Wait 1 minute after the installation has completed, and run the following command to process the sanity of your system.
+1. To process your system sanity, run:
```bash sudo docker ps ```
-1. Ensure the following containers are up:
-
- * compose_statistics-collector_1
-
- * compose_cloud-communication_1
-
- * compose_horizon_1
-
- * compose_attributes-collector_1
+1. In the results that display, ensure that the following containers are up:
- * compose_properties_1
+ - `compose_statistics-collector_1`
+ - `compose_cloud-communication_1`
+ - `compose_horizon_1`
+ - `compose_attributes-collector_1`
+ - `compose_properties_1`
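As an optional aside, if you prefer a condensed view while checking, `docker ps` accepts a `--format` option with Go templates, for example:

```bash
# Show only the container names and their status.
sudo docker ps --format 'table {{.Names}}\t{{.Status}}'
```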
- :::image type="content" source="media/tutorial-get-started-eiot/up-healthy.png" alt-text="Screenshot showing the containers are up and healthy.":::
-
-1. Monitor port validation with the following command to see which interface is defined to handle port mirroring:
+1. Check your port validation to see which interface is defined to handle port mirroring. Run:
```bash sudo docker logs compose_horizon_1 ````
- :::image type="content" source="media/tutorial-get-started-eiot/defined-interface.png" alt-text="Run the command to see which interface is defined to handle port monitoring.":::
+ For example:
+
+ :::image type="content" source="media/tutorial-get-started-eiot/defined-interface.png" alt-text="Screenshot of a result showing an interface defined to handle port monitoring.":::
-1. Wait 5 minutes, and run the following command to check the traffic D2C sanity:
+1. Wait 5 minutes and then check your traffic D2C sanity. Run:
```bash sudo docker logs -f compose_attributes-collector_1 ```
- Ensure that packets are being sent to the Event Hubs.
+ Check your results to ensure that packets are being sent to the Event Hubs.
## View detected Enterprise IoT devices in Azure
-You can view your devices and network information in the Defender for IoT **Device inventory** page.
- Once you've validated your setup, the **Device inventory** page will start to populate with all of your devices after 15 minutes.
-To view your device inventory in the Azure portal, go to **Defender for IoT** > **Device inventory**.
+- View your devices and network information in the Defender for IoT **Device inventory** page on the Azure portal.
+
+ To view your device inventory, go to **Defender for IoT** > **Device inventory**.
-You can also view your sensors from the **Sites and sensors** page. Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
+- You can also view your sensors from the **Sites and sensors** page. Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**.
For more information, see:
For more information, see:
## Microsoft Defender for Endpoint integration
-Once you've onboarded a plan and set up your sensor, your device data integrates automatically with Microsoft Defender for Endpoint. Discovered devices appear in both the Defender for IoT and Defender for Endpoint portals, extending security analytics capabilities for your Enterprise IoT devices and providing complete coverage.
+Once you've onboarded a plan and set up your sensor, your device data integrates automatically with Microsoft Defender for Endpoint. Discovered devices appear in both the Defender for IoT and Defender for Endpoint portals. Use this integration to extend security analytics capabilities for your Enterprise IoT devices and provide complete coverage.
In Defender for Endpoint, you can view discovered IoT devices and related alerts, vulnerabilities, and recommendations. For more information, see:
In Defender for Endpoint, you can view discovered IoT devices and related alerts
- [Vulnerabilities in my organization](/microsoft-365/security/defender-vulnerability-management/) - [Security recommendations](/microsoft-365/security/defender-vulnerability-management/tvm-security-recommendation)
-## Remove the sensor (optional)
+## Remove an Enterprise IoT network sensor (optional)
-Remove a sensor that's no longer in use from Defender for IoT.
+Remove a sensor if it's no longer in use with Defender for IoT.
-**To remove a sensor**, run the following command:
+**To remove a sensor**, run the following command on the sensor server or VM:
```bash
sudo apt purge -y microsoft-eiot-sensor
```
defender-for-iot Tutorial Onboarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/tutorial-onboarding.md
Title: Get started with Microsoft Defender for IoT for OT security
+ Title: Tutorial - Get started with Microsoft Defender for IoT for OT security
description: This tutorial describes how to use Microsoft Defender for IoT to set up a network for OT system security. Previously updated : 06/02/2022 Last updated : 07/11/2022 # Tutorial: Get started with Microsoft Defender for IoT for OT security
Before you start, make sure that you have the following:
- At least one device to monitor, with the device connected to a SPAN port on a switch. -- VMware, ESXi 5.5 or later, installed and operational:-
+- VMware, ESXi 5.5 or later, installed and operational on your sensor.
- <a name="hw"></a>Available hardware resources for your VM as follows:
Before continuing, make sure that your sensor can access the cloud using HTTP on
## Onboard and activate the virtual sensor
-Before you can start using your Defender for IoT sensor, you'll need to onboard the created virtual sensor to your Azure subscription and download the virtual sensor's activation file to activate the sensor.
+Before you can start using your Defender for IoT sensor, you'll need to onboard your new virtual sensor to your Azure subscription, and download the virtual sensor's activation file to activate the sensor.
### Onboard the virtual sensor
Before you can start using your Defender for IoT sensor, you'll need to onboard
1. At the bottom left, select **Set up OT/ICS Security**.
- :::image type="content" source="media/tutorial-onboarding/onboard-a-sensor.png" alt-text="Screenshot of selecting to onboard the sensor to start the onboarding process for your sensor.":::
+ :::image type="content" source="media/tutorial-onboarding/onboard-a-sensor.png" alt-text="Screenshot of the Getting started page for OT network sensors.":::
In the **Set up OT/ICS Security** page, you can leave the **Step 1: Did you set up a sensor?** and **Step 2: Configure SPAN port or TAP** steps collapsed, because you've completed these tasks earlier in this tutorial.
Your sensor is activated and onboarded to Defender for IoT. In the **Sites and s
## Next steps
-After your OT sensor is connection, continue with any of the following to start analyzing your data:
+After your OT sensor is connected, continue with any of the following to start analyzing your data:
- [View assets from the Azure portal](how-to-manage-device-inventory-for-organizations.md)
digital-twins Concepts 3D Scenes Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-3d-scenes-studio.md
To work with 3D Scenes Studio, you'll need the following required resources:
* To **view** 3D scenes, you'll need at least *Storage Blob Data Reader* access to these storage resources. To **build** 3D scenes, you'll need *Storage Blob Data Contributor* or *Storage Blob Data Owner* access. You can grant required roles at either the storage account level or the container level. For more information about Azure storage permissions, see [Assign an Azure role](../storage/blobs/assign-azure-role-data-access.md?tabs=portal#assign-an-azure-role).
- * You should also configure [CORS](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. For complete CORS setting information, see [Use 3D Scenes Studio (preview)](how-to-use-3d-scenes-studio.md#prerequisites).
+ * You should also configure [Cross-Origin Resource Sharing (CORS)](/rest/api/storageservices/cross-origin-resource-sharing--cors--support-for-the-azure-storage-services) for your storage account, so that 3D Scenes Studio will be able to access your storage container. For complete CORS setting information, see [Use 3D Scenes Studio (preview)](how-to-use-3d-scenes-studio.md#prerequisites).
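    A minimal CORS sketch with the Azure CLI, assuming placeholder values for the allowed origin and storage account name (use the values from the linked prerequisites):

    ```azurecli-interactive
    # Placeholder values: replace the origin and the storage account name with your own
    az storage cors add --services b \
      --methods GET OPTIONS POST PUT \
      --origins "{3d-scenes-studio-origin}" \
      --allowed-headers "*" --exposed-headers "*" --max-age 3600 \
      --account-name "{your-storage-account}"
    ```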
Then, you can access 3D Scenes Studio at this link: [3D Scenes Studio](https://dev.explorer.azuredigitaltwins-test.net/3dscenes).
These limits are recommended because 3D Scenes Studio leverages the standard [Az
Try out 3D Scenes Studio with a sample scenario in [Get started with 3D Scenes Studio](quickstart-3d-scenes-studio.md).
-Or, learn how to use the studio's full feature set in [Use 3D Scenes Studio](how-to-use-3d-scenes-studio.md).
+Or, learn how to use the studio's full feature set in [Use 3D Scenes Studio](how-to-use-3d-scenes-studio.md).
event-grid Onboard Partner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md
If you selected **Channel name header** for **Partner topic routing mode**, crea
1. Specify **source** information for the partner topic. Source is contextual information on the source of events provided by the partner that the end user can see. This information is helpful when the end user is considering activating a partner topic, for example. :::image type="content" source="./media/onboard-partner/channel-partner-topic-basics.png" alt-text="Image showing the Create Channel - Basics page.":::
+ 1. Select **Add event type definitions** to declare the kind of events that are sent to the channel and to its associated partner topic. Event types are shown to customers when creating event subscriptions on the partner topic and are used to select the specific event types to send to an event handler destination.
+
+ :::image type="content" source="./media/onboard-partner/event-type-definition-1.png" alt-text="Screenshot that shows the Event Types Definitions section with Add event types definitions option selected.":::
+
+ :::image type="content" source="./media/onboard-partner/event-type-definition-2.png" alt-text="Screenshot that shows the definition of a sample event type.":::
+
+ :::image type="content" source="./media/onboard-partner/event-type-definition-3.png" alt-text="Screenshot that shows a list with the event type definition that was added.":::
1. If you selected **Partner Destination**, enter the following details: 1. **ID of the subscription** in which the partner topic will be created. 1. **Resource group** in which the partner topic will be created.
If you selected **Channel name header** for **Partner topic routing mode**, crea
**Partner destination** option: :::image type="content" source="./media/onboard-partner/create-channel-review-create-destination.png" alt-text="Image showing the Create Channel - Review + create page when the Partner Destination option is selected.":::
-
+## Manage a channel
+
+If you created a channel, you may want to update its configuration after the resource has been created.
+
+1. Go to the **Configuration** page of the channel. You can update the message for partner topic activation, the expiration time if the topic isn't activated, and the event type definitions.
+
+ :::image type="content" source="./media/onboard-partner/channel-configuration.png" alt-text="Screenshot that shows the Configuration page of a channel.":::
+
+> [!IMPORTANT]
+> Don't forget to save changes before leaving the configuration page.
## Create an event channel
event-grid Partner Events Overview For Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview-for-partners.md
You may want to use the Partner Events feature if you've one or more of the foll
### For partners as a subscriber -- You want your service to react to customer events that originate in Microsoft/Azure.-- You want your customer to react to Microsoft/Azure service events using their applications hosted by your platform. You use your platform's event routing capabilities to deliver events to the right customer solution.
+- You want your service to react to customer events that originate in Microsoft Azure.
+- You want your customer to react to Microsoft Azure service events using their applications hosted by your platform. You use your platform's event routing capabilities to deliver events to the right customer solution.
- You want a simple model where your customers just select your service name as a destination without the need for them to know technical details like your platform endpoints. - Your system/platform supports [Cloud Events 1.0](https://cloudevents.io/) schema.
Registrations are global. That is, they aren't associated with a particular Azur
### Channel A Channel is a nested resource to a Partner Namespace. A channel has two main purposes:
- - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is the customer's resource where events from a partner system. Similarly, when a channel of type `partner destination` is created, a partner destination is created on a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to where events are delivered. A channel is the kind of resource, along with partner topics and partner destinations that enable bi-directional event integration.
+ - It's the resource type that allows you to create partner resources on a customer's Azure subscription. When you create a channel of type `partner topic`, a partner topic is created on a customer's Azure subscription. A partner topic is a customer's resource to which events are routed when a partner system publishes events. Similarly, when a channel of type `partner destination` is created, a partner destination is created on a customer's Azure subscription. Partner destinations are resources that represent a partner system endpoint to where events are delivered. A channel is the kind of resource, along with partner topics and partner destinations that enable bi-directional event integration.
A channel has the same lifecycle as its associated customer partner topic or destination. When a channel of type `partner topic` is deleted, for example, the associated customer's partner topic is deleted. Similarly, if the partner topic is deleted by the customer, the associated channel on your Azure subscription is deleted. - It's a resource that is used to route events. A channel of type ``partner topic`` is used to route events to a customer's partner topic. It supports two types of routing modes. - **Channel name routing**. With this kind of routing, you publish events using an http header called `aeg-channel-name` where you provide the name of the channel to which events should be routed. As channels are a partner's representation of partner topics, the events routed to the channel show on the customer's partner topic. This kind of routing is a new capability not present in `event channels`, which support only source-based routing. Channel name routing enables more use cases than the source-based routing and it's the recommended routing mode to choose. For example, with channel name routing a customer can request events that originate in different event sources to land on a single partner topic. - **Source-based routing**. This routing approach is based on the value of the `source` context attribute in the event. Sources are mapped to channels and when an event comes with a source, say, of value "A" that event is routed to the partner topic associated with the channel that contains "A" in its source property.
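As an illustration of channel name routing only, the following is a hedged sketch of a publish request. The endpoint format, the authentication header, and the payload values are placeholders or assumptions; only the `aeg-channel-name` header usage comes from the description above:

```bash
# Publish a CloudEvents 1.0 batch to a partner namespace endpoint (endpoint, key, and payload are placeholders)
curl -X POST "https://{partner-namespace-endpoint}/api/events" \
  -H "Content-Type: application/cloudevents-batch+json; charset=utf-8" \
  -H "aeg-channel-name: {your-channel-name}" \
  -H "aeg-sas-key: {partner-namespace-access-key}" \
  -d '[{
        "specversion": "1.0",
        "id": "00000000-0000-0000-0000-000000000001",
        "source": "/example/source",
        "type": "Example.EventType",
        "time": "2022-07-12T00:00:00Z",
        "data": { "hello": "world" }
      }]'
```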
+ You may want to declare the event types that are routed to the channel and to its associated partner topic. Event types are shown to customers when creating event subscriptions on the partner topic and are used to select the specific event types to send to an event handler destination. [Learn more](onboard-partner.md#create-a-channel).
+
+ >[!IMPORTANT]
+ >Event types can be managed on the channel and once the values are updated, changes are reflected immediately on the associated partner topic.
+ A channel of type ``partner destination`` is used to route events to a partner system. When creating a channel of this type, you provide your webhook URL where you receive the events published by Azure Event Grid. Once the channel is created, a customer can use the partner destination resource when creating an [event subscription](subscribe-through-portal.md) as the destination to deliver events to the partner system. Event Grid publishes events with the request including an http header `aeg-channel-name` too. Its value can be used to associate the incoming events with a specific user who in the first place requested the partner destination. A customer can use your partner destination to send your service any kind of events available to [Event Grid](overview.md).
- - A channel can store definitions for event types. These definitions can be added during the creation of a channel or once the channel is created in the configuration. The event type definitions allow a customer to subscribe to these events when using partner topics. [Learn more](concepts.md#inline-event-type-definitions).
-
- >[!IMPORTANT]
- >Event types can be managed in the channel and once the values are updated, changes will be reflected immediately in the associated partner topic.
- ### Partner namespace A partner namespace is a regional resource that has an endpoint to publish events to Azure Event Grid. Partner namespaces contain either channels or event channels (legacy resource). You must create partner namespaces in regions where customers request partner topics or destinations because channels and their corresponding partner resources must reside in the same region. You can't have a channel in a given region with its related partner topic, for example, located in a different region.
frontdoor Front Door Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-caching.md
The following request headers won't be forwarded to a backend when using caching
- Transfer-Encoding - Accept-Language
+## Response headers
+
+The following response headers are stripped from the response if the origin response is cacheable, for example, when the `Cache-Control` header has a max-age value.
+
+- Set-Cookie
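One way to spot-check this behavior from a client, as a sketch with a placeholder endpoint and path:

```bash
# Inspect response headers returned through Front Door for a cacheable path
curl -sI "https://{your-frontdoor-endpoint}/{cached-path}" | grep -iE "set-cookie|cache-control|x-cache"
```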
+ ## Cache behavior and duration ::: zone pivot="front-door-standard-premium"
governance Assign Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-portal.md
disks_ policy definition.
[modify](./concepts/effects.md#modify) effect. As the policy used for this quickstart doesn't, leave it blank. For more information, see [managed identities](../../active-directory/managed-identities-azure-resources/overview.md) and
- [how remediation security works](./how-to/remediate-resources.md#how-remediation-security-works).
+ [how remediation access control works](./how-to/remediate-resources.md#how-remediation-access-control-works).
1. Select **Next** at the bottom of the page or the **Non-compliance messages** tab at the top of the page to move to the next segment of the assignment wizard.
governance Effects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/effects.md
related resources to match and the template deployment to execute.
- **roleDefinitionIds** (required) - This property must include an array of strings that match role-based access control role ID accessible by the subscription. For more information, see
- [remediation - configure policy definition](../how-to/remediate-resources.md#configure-policy-definition).
+ [remediation - configure the policy definition](../how-to/remediate-resources.md#configure-the-policy-definition).
- **DeploymentScope** (optional) - Allowed values are _Subscription_ and _ResourceGroup_. - Sets the type of deployment to be triggered. _Subscription_ indicates a
needed for remediation and the **operations** used to add, update, or remove tag
- **roleDefinitionIds** (required) - This property must include an array of strings that match role-based access control role ID accessible by the subscription. For more information, see
- [remediation - configure policy definition](../how-to/remediate-resources.md#configure-policy-definition).
+ [remediation - configure the policy definition](../how-to/remediate-resources.md#configure-the-policy-definition).
- The role defined must include all operations granted to the [Contributor](../../../role-based-access-control/built-in-roles.md#contributor) role. - **conflictEffect** (optional)
governance Remediate Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/how-to/remediate-resources.md
# Remediate non-compliant resources with Azure Policy
-Resources that are non-compliant to a **deployIfNotExists** or **modify** policy can be put into a
-compliant state through **Remediation**. Remediation is accomplished by instructing Azure Policy to
-run the **deployIfNotExists** effect or the **modify operations** of the assigned policy on your
-existing resources and subscriptions, whether that assignment is to a management group, a
-subscription, a resource group, or an individual resource. This article shows the steps needed to
+Resources that are non-compliant to policies with **deployIfNotExists** or **modify** effects can be put into a
+compliant state through **Remediation**. Remediation is accomplished through **remediation tasks** that deploy the **deployIfNotExists** template or the **modify** operations of the assigned policy on your existing resources and subscriptions, whether that assignment is on a management group,
+subscription, resource group, or individual resource. This article shows the steps needed to
understand and accomplish remediation with Azure Policy.
-## How remediation security works
+## How remediation access control works
-When Azure Policy starts a template deployment when evaluating **deployIfNotExists** policies or modifies a resource when evaluating **modify** policies, it does so using
+When Azure Policy starts a template deployment when evaluating **deployIfNotExists** policies or modifies a resource when evaluating **modify** policies, it does so using
a [managed identity](../../../active-directory/managed-identities-azure-resources/overview.md) that is associated with the policy assignment. Policy assignments use [managed identities](../../../active-directory/managed-identities-azure-resources/overview.md) for Azure resource authorization. You can use either a system-assigned managed identity that is created by the policy service or a user-assigned identity provided by the user. The managed identity needs to be assigned the minimum role-based access control (RBAC) role(s) required to remediate resources.
-If the managed identity is missing roles, an error is displayed
+If the managed identity is missing roles, an error is displayed in the portal
during the assignment of the policy or an initiative. When using the portal, Azure Policy automatically grants the managed identity the listed roles once assignment starts. When using an Azure software development kit (SDK), the roles must manually be granted to the managed identity. The _location_ of the managed identity doesn't impact its operation with Azure Policy.
+ > [!NOTE]
+ > Changing a policy definition does not automatically update the assignment or the associated managed identity.
-> [!IMPORTANT]
-> In the following scenarios, the assignment's managed identity must be
-> [manually granted access](#manually-configure-the-managed-identity) or the remediation deployment
-> fails:
->
-> - If the assignment is created through SDK
-> - If a resource modified by **deployIfNotExists** or **modify** is outside the scope of the policy
-> assignment
-> - If the template accesses properties on resources outside the scope of the policy assignment
->
-> Also, changing a a policy definition does not update the assignment or the associated managed identity.
+Remediation security can be configured through the following steps:
+- [Configure the policy definition](#configure-the-policy-definition)
+- [Configure the managed identity](#configure-the-managed-identity)
+- [Grant permissions to the managed identity through defined roles](#grant-permissions-to-the-managed-identity-through-defined-roles)
+- [Create a remediation task](#create-a-remediation-task)
-## Configure policy definition
+## Configure the policy definition
-The first step is to define the roles that **deployIfNotExists** and **modify** needs in the policy
-definition to successfully deploy the content of your included template. Under the **details**
-property in the policy definition, add a **roleDefinitionIds** property. This property is an array of strings that match
+As a prerequisite, the policy definition must define the roles that **deployIfNotExists** and **modify** need to successfully deploy the content of the included template. No action is required for a built-in policy definition because these roles are prepopulated. For a custom policy definition, under the **details**
+property, add a **roleDefinitionIds** property. This property is an array of strings that match
roles in your environment. For a full example, see the [deployIfNotExists example](../concepts/effects.md#deployifnotexists-example) or the [modify examples](../concepts/effects.md#modify-examples).
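For a custom definition, the following sketch shows one way to look up a value for **roleDefinitionIds** with the Azure CLI. The role name is only an example; entries use the `/providers/Microsoft.Authorization/roleDefinitions/<GUID>` form:

```azurecli-interactive
# Get the GUID (the "name" property) of a built-in role, for example Contributor
roleGuid=$(az role definition list --name "Contributor" --query "[0].name" --output tsv)

# Compose the value expected by roleDefinitionIds in the policy definition's details block
echo "/providers/Microsoft.Authorization/roleDefinitions/${roleGuid}"
```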
az role definition list --name "Contributor"
> [managed identity best practice recommendations](../../../active-directory/managed-identities-azure-resources/managed-identity-best-practice-recommendations.md) > for more best practices.
-## Manually configure the managed identity
+## Configure the managed identity
-When creating an assignment using the portal, Azure Policy can generate a system-assigned managed identity and
-grant it the roles defined in **roleDefinitionIds**. Alternatively, you can specify a user-assigned managed identity that receives the same role assignment.
+Each Azure Policy assignment can be associated with only one managed identity. However, the managed identity can be assigned multiple roles. Configuration occurs in two steps: first create either a system-assigned or user-assigned managed identity, then grant it the necessary roles.
> [!NOTE]
- > Each Azure Policy assignment can be associated with only one managed identity. However, the managed identity can be assigned multiple roles.
+ > When creating a managed identity through the portal, roles will be granted automatically to the managed identity. If **roleDefinitionIds** are later edited in the policy definition, the new permissions must be manually granted, even in the portal.
-In the following conditions, steps to create
-the managed identity and assign it permissions must be done manually:
+### Create the managed identity
-- While using the SDK (such as Azure PowerShell)-- When a resource outside the assignment scope is modified by the template-- When a resource outside the assignment scope is read by the template
+# [Portal](#tab/azure-portal)
-## Configure a managed identity through the Azure portal
+When creating an assignment using the portal, Azure Policy can generate a system-assigned managed identity and grant it the roles defined in the policy definition's **roleDefinitionIds**. Alternatively, you can specify a user-assigned managed identity that receives the same role assignment.
-When creating an assignment using the portal, you can select either a system-assigned managed identity or a user-assigned managed identity.
To set a system-assigned managed identity in the portal:
is selected.
1. Under **Existing user assigned identities**, select the managed identity.
- > [!NOTE]
- > If the managed identity does not have the permissions needed to execute the required remediation task, it will be granted permissions *automatically* only through the portal. For all other methods, permissions must be configured manually.
- >
+# [PowerShell](#tab/azure-powershell)
-### Create managed identity with PowerShell
+To create an identity during the assignment of the policy, **Location** must be defined and **Identity** used.
-To create an identity during the assignment of the policy, **Location** must be defined and **Identity** used. The following example gets the definition of the built-in policy **Deploy SQL DB transparent data encryption** sets the target resource group, and then creates the assignment using a **system assigned** managed identity.
+The following example gets the definition of the built-in policy **Deploy SQL DB transparent data encryption**, sets the target resource group, and then creates the assignment using a **system-assigned** managed identity.
```azurepowershell-interactive # Login first with Connect-AzAccount if not using Cloud Shell
$assignment = New-AzPolicyAssignment -Name 'sqlDbTDE' -DisplayName 'Deploy SQL D
The `$assignment` variable now contains the principal ID of the managed identity along with the standard values returned when creating a policy assignment. It can be accessed through `$assignment.Identity.PrincipalId` for system-assigned managed identities and `$assignment.Identity.UserAssignedIdentities[$userassignedidentityid].PrincipalId` for user-assigned managed identities.
-### Grant a managed identity defined roles with PowerShell
+# [Azure CLI](#tab/azure-cli)
-The new managed identity must complete replication through Azure Active Directory before it can be
-granted the needed roles. Once replication is complete, the following example iterates the policy
-definition in `$policyDef` for the **roleDefinitionIds** and uses
-[New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) to
-grant the new managed identity the roles.
+To create an identity during the assignment of the policy, use [az policy assignment create](/cli/azure/policy/assignment?view=azure-cli-latest#az-policy-assignment-create&preserve-view=true) commands with the parameters **--location**, **--mi-system-assigned**, **--mi-user-assigned**, and **--identity-scope** depending on whether the managed identity should be system-assigned or user-assigned.
-```azurepowershell-interactive
-# Use the $policyDef to get to the roleDefinitionIds array
-$roleDefinitionIds = $policyDef.Properties.policyRule.then.details.roleDefinitionIds
+To add a system-assigned identity or a user-assigned identity to an existing policy assignment, see the example [az policy assignment identity assign](/cli/azure/policy/assignment/identity?view=azure-cli-latest#az-policy-assignment-identity-assign&preserve-view=true) commands.
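For example, a minimal sketch where the assignment name, policy, scope, and location are placeholders (verify the exact flags against your Azure CLI version):

```azurecli-interactive
# Create an assignment with a system-assigned managed identity
az policy assignment create --name 'sqlDbTDE' \
  --policy '{policyDefinitionIdOrName}' \
  --scope '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}' \
  --mi-system-assigned --location westus

# Or add a system-assigned identity to an existing assignment
az policy assignment identity assign --system-assigned \
  --name 'sqlDbTDE' \
  --scope '/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}'
```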
-if ($roleDefinitionIds.Count -gt 0)
-{
- $roleDefinitionIds | ForEach-Object {
- $roleDefId = $_.Split("/") | Select-Object -Last 1
- New-AzRoleAssignment -Scope $resourceGroup.ResourceId -ObjectId $assignment.Identity.PrincipalId -RoleDefinitionId $roleDefId
- }
-}
-```
+
-### Grant a managed identity defined roles through the portal
+### Grant permissions to the managed identity through defined roles
-There are two ways to grant an assignment's managed identity the defined roles using the portal, by
+> [!IMPORTANT]
+>
+> If the managed identity does not have the permissions needed to execute the required remediation task, it will be granted permissions *automatically* only through the portal. You may skip this step if creating a managed identity through the portal.
+>
+> For all other methods, the assignment's managed identity must be manually granted access through the addition of roles, or else the remediation deployment will fail.
+>
+> Example scenarios that require manual permissions:
+> - If the assignment is created through SDK
+> - If a resource modified by **deployIfNotExists** or **modify** is outside the scope of the policy
+> assignment
+> - If the template accesses properties on resources outside the scope of the policy assignment
+>
+
+# [Portal](#tab/azure-portal)
+
+There are two ways to grant an assignment's managed identity the defined roles using the portal: by
using **Access control (IAM)** or by editing the policy or initiative assignment and selecting **Save**.
To add a role to the assignment's managed identity, follow these steps:
1. Select the **Access control (IAM)** link in the resources page and then select **+ Add role assignment** at the top of the access control page.
-1. Select the appropriate role that matches a **roleDefinitionIds** from the policy definition.
+1. Select the appropriate role that matches a **roleDefinitionId** from the policy definition.
Leave **Assign access to** set to the default of 'Azure AD user, group, or application'. In the **Select** box, paste or type the portion of the assignment resource ID located earlier. Once the search completes, select the object with the same name to select ID and select **Save**.
-
-## Create a remediation task
-The following sections describe how to create a remediation task.
+# [PowerShell](#tab/azure-powershell)
-### Create a remediation task through the portal
+The new managed identity must complete replication through Azure Active Directory before it can be
+granted the needed roles. Once replication is complete, the following example iterates the policy
+definition in `$policyDef` for the **roleDefinitionIds** and uses
+[New-AzRoleAssignment](/powershell/module/az.resources/new-azroleassignment) to
+grant the new managed identity the roles.
-During evaluation, the policy assignment with **deployIfNotExists** or **modify** effects determines
-if there are non-compliant resources or subscriptions. When non-compliant resources or subscriptions
-are found, the details are provided on the **Remediation** page. Along with the list of policies
-that have non-compliant resources or subscriptions is the option to trigger a **remediation task**.
-This option is what creates a deployment from the **deployIfNotExists** template or the **modify**
-operations.
+```azurepowershell-interactive
+# Use the $policyDef to get to the roleDefinitionIds array
+$roleDefinitionIds = $policyDef.Properties.policyRule.then.details.roleDefinitionIds
-To create a **remediation task**, follow these steps:
+if ($roleDefinitionIds.Count -gt 0)
+{
+ $roleDefinitionIds | ForEach-Object {
+ $roleDefId = $_.Split("/") | Select-Object -Last 1
+ New-AzRoleAssignment -Scope $resourceGroup.ResourceId -ObjectId $assignment.Identity.PrincipalId -RoleDefinitionId $roleDefId
+ }
+}
+```
-1. Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching
- for and selecting **Policy**.
+# [Azure CLI](#tab/azure-cli)
+
+The new managed identity must complete replication through Azure Active Directory before it can be granted the needed roles. Once replication is complete, the roles specified in the policy definition's **roleDefinitionIds** should be granted to the managed identity.
+
+Access the roles specified in the policy definition using the [az policy definition show](/cli/azure/policy/definition?view=azure-cli-latest#az-policy-definition-show&preserve-view=true) command, then iterate over each **roleDefinitionId** to create the role assignment using the [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) command.
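Putting those commands together, a minimal sketch (the assignment and policy definition names and the subscription scope are placeholders) might look like this:

```azurecli-interactive
# Principal ID of the assignment's managed identity
principalId=$(az policy assignment show --name 'sqlDbTDE' \
  --scope '/subscriptions/{subscriptionId}' --query 'identity.principalId' --output tsv)

# Roles required by the policy definition
roleIds=$(az policy definition show --name '{policyDefinitionName}' \
  --query 'policyRule.then.details.roleDefinitionIds[]' --output tsv)

# Grant each role to the managed identity at the assignment scope
for roleId in $roleIds; do
  az role assignment create --assignee-object-id "$principalId" \
    --assignee-principal-type ServicePrincipal \
    --role "${roleId##*/}" --scope '/subscriptions/{subscriptionId}'
done
```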
+++
+## Create a remediation task
+
+# [Portal](#tab/azure-portal)
+
+Launch the Azure Policy service in the Azure portal by selecting **All services**, then searching for and selecting **Policy**.
:::image type="content" source="../media/remediate-resources/search-policy.png" alt-text="Screenshot of searching for Policy in All Services." border="false":::
+### Step 1: Initiate remediation task creation
+There are three ways to create a remediation task through the portal.
+
+#### Option 1: Create a remediation task from the Remediation page
+ 1. Select **Remediation** on the left side of the Azure Policy page. :::image type="content" source="../media/remediate-resources/select-remediation.png" alt-text="Screenshot of the Remediation node on the Policy page." border="false":::
-1. All **deployIfNotExists** and **modify** policy assignments with non-compliant resources are
- included on the **Policies to remediate** tab and data table. Select on a policy with resources
- that are non-compliant. The **New remediation task** page opens.
+1. All **deployIfNotExists** and **modify** policy assignments are
+ shown on the **Policies to remediate** tab. Select one with resources
+ that are non-compliant to open the **New remediation task** page.
+
+1. Follow steps to [specify remediation task details](#step-2-specify-remediation-task-details).
+
+#### Option 2: Create a remediation task from a non-compliant policy assignment
+
+1. Select **Compliance** on the left side of the Azure Policy page.
+
+1. Select a non-compliant policy or initiative assignment containing **deployIfNotExists** or **modify** effects.
+
+1. Select the **Create Remediation Task** button at the top of the page to open the **New remediation task** page.
+
+1. Follow steps to [specify remediation task details](#step-2-specify-remediation-task-details).
+
+#### Option 3: Create a remediation task during policy assignment
+
+If the policy or initiative definition to assign has a **deployIfNotExists** or a **Modify** effect,
+the **Remediation** tab of the wizard offers a _Create a remediation task_ option, which creates a remediation task at the same time as the policy assignment.
> [!NOTE]
- > An alternate way to open the **remediation task** page is to find and select the policy from
- > the **Compliance** page, then select the **Create Remediation Task** button.
+ > This is the most streamlined approach for creating a remediation task and is supported for policies assigned on a _subscription_. For policies assigned on a _management group_, remediation tasks should be created using [Option 1](#option-1-create-a-remediation-task-from-the-remediation-page) or [Option 2](#option-2-create-a-remediation-task-from-a-non-compliant-policy-assignment) after evaluation has determined resource compliance.
+
+1. From the assignment wizard in the portal, navigate to the **Remediation** tab. Select the check box for **Create a remediation task**.
+
+1. If the remediation task is initiated from an initiative assignment, select the policy to remediate from the drop-down.
-1. On the **New remediation task** page, optional remediation settings are shown:
+1. Configure the [managed identity](#configure-the-managed-identity) and fill out the rest of the wizard. The remediation task will be created when the assignment is created.
+
+### Step 2: Specify remediation task details
+
+This step is only applicable when using [Option 1](#option-1-create-a-remediation-task-from-the-remediation-page) or [Option 2](#option-2-create-a-remediation-task-from-a-non-compliant-policy-assignment) to initiate remediation task creation.
+
+1. If the remediation task is initiated from an initiative assignment, select the policy to remediate from the drop-down. One **deployIfNotExists** or **modify** policy can be remediated through a single Remediation task at a time.
+
+1. Optionally modify remediation settings on the **New remediation task** page:
- **Failure Threshold percentage** - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 to 100. By default, the failure threshold is 100%.
- - **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number of is 50,000 resources.
+ - **Resource Count** - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 50,000 resources.
- **Parallel Deployments** - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10. > [!NOTE]
To create a **remediation task**, follow these steps:
:::image type="content" source="../media/remediate-resources/task-progress.png" alt-text="Screenshot of the Remediation tasks tab and progress of existing remediation tasks." border="false":::
-1. Select the **remediation task** from the policy compliance page to get details about the
- progress. The filtering used for the task is shown along with status and a list of resources being
- remediated.
+### Step 3: Track remediation task progress
-1. From the **Remediation task** page, select and hold (or right-click) on a resource to view either the remediation
- task's deployment or the resource. At the end of the row, select on **Related events** to see
+1. Navigate to the **Remediation tasks** tab on the **Remediation** page. Click on a remediation task to view details about the filtering used, the current status, and a list of resources being remediated.
+
+1. From the **Remediation task** details page, right-click on a resource to view either the remediation
+ task's deployment or the resource. At the end of the row, select **Related events** to see
details such as an error message. :::image type="content" source="../media/remediate-resources/resource-task-context-menu.png" alt-text="Screenshot of the context menu for a resource on the Remediate task tab." border="false":::
-Resources deployed through a **remediation task** are added to the **Deployed Resources** tab on the
-policy compliance page.
+Resources deployed through a **remediation task** are added to the **Deployed Resources** tab on the policy assignment details page.
-### Create a remediation task through Azure CLI
-
-To create a **remediation task** with Azure CLI, use the `az policy remediation` commands. Replace
-`{subscriptionId}` with your subscription ID and `{myAssignmentId}` with your **deployIfNotExists**
-or **modify** policy assignment ID.
-
-```azurecli-interactive
-# Login first with az login if not using Cloud Shell
-
-# Create a remediation for a specific assignment
-az policy remediation create --name myRemediation --policy-assignment '/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/{myAssignmentId}'
-```
-
-For other remediation commands and examples, see the [az policy
-remediation](/cli/azure/policy/remediation) commands.
-
-### Create a remediation task through Azure PowerShell
+# [PowerShell](#tab/azure-powershell)
To create a **remediation task** with Azure PowerShell, use the `Start-AzPolicyRemediation` commands. Replace `{subscriptionId}` with your subscription ID and `{myAssignmentId}` with your
commands. Replace `{subscriptionId}` with your subscription ID and `{myAssignmen
Start-AzPolicyRemediation -Name 'myRemediation' -PolicyAssignmentId '/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/{myAssignmentId}'
```
-For other remediation cmdlets and examples, see the [Az.PolicyInsights](/powershell/module/az.policyinsights/#policy_insights)
+You may also choose to adjust remediation settings through these optional parameters:
+- `-FailureThreshold` - Used to specify whether the remediation task should fail if the percentage of failures exceeds the given threshold. Provided as a number between 0 and 100. By default, the failure threshold is 100%.
+- `-ParallelDeploymentCount` - Determines how many resources to remediate at the same time. The allowed values are 1 to 30 resources at a time. The default value is 10.
+- `-ResourceCount` - Determines how many non-compliant resources to remediate in a given remediation task. The default value is 500 (the previous limit). The maximum number is 50,000 resources.
+
+For more remediation cmdlets and examples, see the [Az.PolicyInsights](/powershell/module/az.policyinsights/#policy_insights)
module.
-### Create a remediation task during policy assignment in the Azure portal
+# [Azure CLI](#tab/azure-cli)
-A streamlined way of creating a remediation task is to do so from the Azure portal during policy
-assignment. If the policy definition to assign is a **deployIfNotExists** or a **Modify** effect,
-the wizard on the **Remediation** tab offers a _Create a remediation task_ option. If this option is
-selected, a remediation task is created at the same time as the policy assignment.
+To create a **remediation task** with Azure CLI, use the `az policy remediation` commands. Replace
+`{subscriptionId}` with your subscription ID and `{myAssignmentId}` with your **deployIfNotExists**
+or **modify** policy assignment ID.
+
+```azurecli-interactive
+# Login first with az login if not using Cloud Shell
+
+# Create a remediation for a specific assignment
+az policy remediation create --name myRemediation --policy-assignment '/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/policyAssignments/{myAssignmentId}'
+```
+
+For more remediation commands and examples, see the [az policy
+remediation](/cli/azure/policy/remediation) commands.
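To check on a remediation task afterwards, a sketch using the same example task name (output fields can vary by CLI version):

```azurecli-interactive
# List remediations in the current subscription
az policy remediation list --output table

# Show the state of a specific remediation task
az policy remediation show --name myRemediation --query 'provisioningState'
```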
++ ## Next steps
governance Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/overview.md
to users who do not need them.
> [!NOTE] > The managed identity of a **deployIfNotExists** or **modify** policy assignment needs enough > permissions to create or update targetted resources. For more information, see
-> [Configure policy definitions for remediation](./how-to/remediate-resources.md#configure-policy-definition).
+> [Configure policy definitions for remediation](./how-to/remediate-resources.md#configure-the-policy-definition).
### Special permissions requirement for Azure Policy with Azure Virtual Network Manager (preview)
governance Create And Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/tutorials/create-and-manage.md
resources missing the tag.
automatically based on the policy definition. For more information, see [managed identities](../../../active-directory/managed-identities-azure-resources/overview.md) and
- [how remediation security works](../how-to/remediate-resources.md#how-remediation-security-works).
+ [how remediation access control works](../how-to/remediate-resources.md#how-remediation-access-control-works).
1. Select the **Non-compliance messages** tab at the top of the wizard.
overview](../overview.md).
Since it's added twice, the _Add or replace a tag on resources_ policy definitions each get a different _reference ID_.
- :::image type="content" source="../media/create-and-manage/initiative-definition-2.png" alt-text="Screenshot of the selected policy definitions with their reference id and group on the initiative definition page.":::
+ :::image type="content" source="../media/create-and-manage/initiative-definition-2.png" alt-text="Screenshot of the selected policy definitions with their reference ID and group on the initiative definition page.":::
> [!NOTE] > The selected policy definitions can be added to groups by selecting one or more added
overview](../overview.md).
list. For the two instances of the _Add or replace a tag on resources_ policy definitions, set the **Tag Name** parameters to 'Env' and 'CostCenter' and the **Tag Value** parameters to 'Test' and 'Lab' as shown below. Leave the others as 'Default value'. Using the same definition twice in
- the initiative but with different parameters, this configuration adds or replace an 'Env' tag
+ the initiative but with different parameters, this configuration adds or replaces an 'Env' tag
with the value 'Test' and a 'CostCenter' tag with the value of 'Lab' on resources in scope of the assignment.
New-AzPolicySetDefinition -Name 'VMPolicySetDefinition' -Metadata '{"category":"
leave it blank. For more information, see [managed identities](../../../active-directory/managed-identities-azure-resources/overview.md) and
- [how remediation security works](../how-to/remediate-resources.md#how-remediation-security-works).
+ [how remediation access control works](../how-to/remediate-resources.md#how-remediation-access-control-works).
1. Select the **Review + create** tab at the top of the wizard.
that was denied by the policy definition.
:::image type="content" source="../media/create-and-manage/compliance-overview.png" alt-text="Screenshot of the Events tab and policy event details on the Initiative compliance page." border="false"::: In this example, Trent Baker, one of Contoso's Sr. Virtualization specialists, was doing required
-work. We need to grant Trent a space for an exception. Created a new resource group,
+work. We need to grant Trent a space for an exception. Create a new resource group,
**LocationsExcluded**, and next grant it an exception to this policy assignment. ### Update assignment with exclusion
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
Title: Connect an IoT Edge transparent gateway to an Azure IoT Central application
-description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application
+description: How to connect devices through an IoT Edge transparent gateway to an IoT Central application. The article shows how to use both the IoT Edge 1.1 and 1.2 runtimes.
Previously updated : 02/28/2022 Last updated : 05/08/2022
An IoT Edge device can act as a gateway that provides a connection between other devices on a local network and your IoT Central application. You use a gateway when the device can't access your IoT Central application directly.
-IoT Edge supports the [*transparent* and *translation* gateway patterns](../../iot-edge/iot-edge-as-gateway.md). This article summarizes how to implement the transparent gateway pattern. In this pattern, the gateway passes messages from the downstream device through to the IoT Hub endpoint in your IoT Central application. The gateway does not manipulate the messages as they pass through. In IoT Central, each downstream device appears as child to the gateway device:
+IoT Edge supports the [*transparent* and *translation* gateway patterns](../../iot-edge/iot-edge-as-gateway.md). This article summarizes how to implement the transparent gateway pattern. In this pattern, the gateway passes messages from the downstream device through to the IoT Hub endpoint in your IoT Central application. The gateway doesn't manipulate the messages as they pass through. In IoT Central, each downstream device appears as a child of the gateway device:
:::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/edge-transparent-gateway.png" alt-text="IoT Edge as a transparent gateway." border="false"::: For simplicity, this article uses virtual machines to host the downstream and gateway devices. In a real scenario, the downstream device and gateway would run on physical devices on your local network.
+This article shows how to implement the scenario by using either the IoT Edge 1.1 runtime or the IoT Edge 1.2 runtime.
+ ## Prerequisites
+# [IoT Edge 1.1](#tab/edge1-1)
+
+To complete the steps in this article, you need:
+
+- An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+- An [IoT Central application created](howto-create-iot-central-application.md) from the **Custom application** template. To learn more, see [Create an IoT Central application](howto-create-iot-central-application.md).
+
+To follow the steps in this article, download the following files to your computer:
+
+- [Thermostat device model (thermostat-1.json)](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json) - this file is the device model for the downstream devices.
+- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-1/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
+
+# [IoT Edge 1.2](#tab/edge1-2)
To complete the steps in this article, you need: - An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
To complete the steps in this article, you need:
To follow the steps in this article, download the following files to your computer: - [Thermostat device model (thermostat-1.json)](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json) - this file is the device model for the downstream devices.-- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
+- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-2/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
++ ## Add device templates
To find these values, navigate to each device in the device list and select **Co
To let you try out this scenario, the following steps show you how to deploy the gateway and downstream devices to Azure virtual machines. > [!TIP]
-> To learn how to deploy the IoT Edge runtime to a physical device, see [Create an IoT Edge device](../../iot-edge/how-to-create-iot-edge-device.md) in the IoT Edge documentation.
+> To learn how to deploy the IoT Edge 1.1 or 1.2 runtime to a physical device, see [Create an IoT Edge device](../../iot-edge/how-to-create-iot-edge-device.md) in the IoT Edge documentation.
+
+# [IoT Edge 1.1](#tab/edge1-1)
+
+To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.1 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
+
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway-1-1%2FDeployGatewayVMs.json)
+
+When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
+
+1. Go to the **Devices** page in your IoT Central application. If the IoT Edge gateway device is connected to IoT Central, its status is **Provisioned**.
+
+1. Open the IoT Edge gateway device and verify the status of the modules on the **Modules** page. If the IoT Edge runtime started successfully, the status of the **$edgeAgent** and **$edgeHub** modules is **Running**:
+
+ :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-1.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub version 1.1 modules running on the IoT Edge gateway." lightbox="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-1.png":::
+
+ > [!TIP]
+ > You may have to wait for several minutes while the virtual machine starts up and the device is provisioned in your IoT Central application.
+
+# [IoT Edge 1.2](#tab/edge1-2)
-To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you'll run code to send simulated thermostat telemetry:
+To try out the transparent gateway scenario, select the following button to deploy two Linux virtual machines. One virtual machine has the IoT Edge 1.2 runtime installed and is the transparent IoT Edge gateway. The other virtual machine is a downstream device where you run code to send simulated thermostat telemetry:
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway%2FDeployGatewayVMs.json)
+[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmaster%2Ftransparent-gateway-1-2%2FDeployGatewayVMs.json)
When the two virtual machines are deployed and running, verify the IoT Edge gateway device is running on the `edgegateway` virtual machine:
When the two virtual machines are deployed and running, verify the IoT Edge gate
1. Open the IoT Edge gateway device and verify the status of the modules on the **Modules** page. If the IoT Edge runtime started successfully, the status of the **$edgeAgent** and **$edgeHub** modules is **Running**:
- :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub modules running on the IoT Edge gateway." lightbox="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime.png":::
+ :::image type="content" source="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-2.png" alt-text="Screenshot showing the $edgeAgent and $edgeHub version 1.2 modules running on the IoT Edge gateway." lightbox="media/how-to-connect-iot-edge-transparent-gateway/iot-edge-runtime-1-2.png":::
> [!TIP] > You may have to wait for several minutes while the virtual machine starts up and the device is provisioned in your IoT Central application. ++ ## Configure the gateway For your IoT Edge device to function as a transparent gateway, it needs some certificates to prove its identity to any downstream devices. This article uses demo certificates. In a production environment, use certificates from your certificate authority. To generate the demo certificates and install them on your gateway device:
+# [IoT Edge 1.1](#tab/edge1-1)
+ 1. Use SSH to connect to and sign in on your gateway device virtual machine. 1. Run the following commands to clone the IoT Edge repository and generate your demo certificates:
To generate the demo certificates and install them on your gateway device:
# Clone the repo cd ~ git clone https://github.com/Azure/iotedge.git-
+
# Generate the demo certificates mkdir certs cd certs
To generate the demo certificates and install them on your gateway device:
After you run the previous commands, the following files are ready to use in the next steps: - *~/certs/certs/azure-iot-test-only.root.ca.cert.pem* - The root CA certificate used to make all the other demo certificates for testing an IoT Edge scenario.
- - *~/certs/certs/iot-edge-device-mycacert-full-chain.cert.pem* - A device CA certificate that's referenced from the *config.yaml* file. In a gateway scenario, this CA certificate is how the IoT Edge device verifies its identity to downstream devices.
+ - *~/certs/certs/iot-edge-device-mycacert-full-chain.cert.pem* - A device CA certificate that's referenced from the IoT Edge configuration file. In a gateway scenario, this CA certificate is how the IoT Edge device verifies its identity to downstream devices.
   - *~/certs/private/iot-edge-device-mycacert.key.pem* - The private key associated with the device CA certificate.

   To learn more about these demo certificates, see [Create demo certificates to test IoT Edge device features](../../iot-edge/how-to-create-test-certificates.md).
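If you want to confirm what was generated, the following optional check is a minimal sketch (it assumes the default output paths shown above) that prints the subject and validity period of the device CA certificate:

```bash
# Optional sanity check: print the subject and validity period of the generated
# device CA certificate (path taken from the certGen.sh output above).
openssl x509 -in ~/certs/certs/iot-edge-device-mycacert-full-chain.cert.pem -noout -subject -dates
```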
To generate the demo certificates and install them on your gateway device:
   trusted_ca_certs: "file:///home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem"
   ```
- The example shown above assumes you're signed in as **AzureUser** and created a device CA certificated called "mycacert".
+ The example shown above assumes you're signed in as **AzureUser** and created a device CA certificate called "mycacert".
1. Save the changes and restart the IoT Edge runtime:
To generate the demo certificates and install them on your gateway device:
If the IoT Edge runtime starts successfully after your changes, the status of the **$edgeAgent** and **$edgeHub** modules changes to **Running** on the **Modules** page for your gateway device in IoT Central.
-If the runtime doesn't start, check the changes you made in *config.yaml* and see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
+If the runtime doesn't start, check the changes you made in the IoT Edge configuration file and see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
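As a quick diagnostic sketch on the gateway virtual machine (assuming the IoT Edge 1.1 service name `iotedge`), you can check the service and run the built-in checks before digging into the troubleshooting guide:

```bash
# Check the IoT Edge 1.1 daemon, review recent logs, and run the built-in checks.
sudo systemctl status iotedge
sudo journalctl -u iotedge --no-pager --since "10 minutes ago"
sudo iotedge check
```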
+
+Your transparent gateway is now configured and ready to start forwarding telemetry from downstream devices.
+
+# [IoT Edge 1.2](#tab/edge1-2)
+
+1. Use SSH to connect to and sign in on your gateway device virtual machine.
+
+1. Run the following commands to clone the IoT Edge repository and generate your demo certificates:
+
+ ```bash
+ # Clone the repo
+ cd ~
+ git clone https://github.com/Azure/iotedge.git
+
+ # Generate the demo certificates
+ mkdir certs
+ cd certs
+ cp ~/iotedge/tools/CACertificates/*.cnf .
+ cp ~/iotedge/tools/CACertificates/certGen.sh .
+ ./certGen.sh create_root_and_intermediate
+ ./certGen.sh create_edge_device_ca_certificate "mycacert"
+ ```
+
+ After you run the previous commands, the following files are ready to use in the next steps:
+
+ - *~/certs/certs/azure-iot-test-only.root.ca.cert.pem* - The root CA certificate used to make all the other demo certificates for testing an IoT Edge scenario.
+ - *~/certs/certs/iot-edge-device-mycacert-full-chain.cert.pem* - A device CA certificate that's referenced from the IoT Edge configuration file. In a gateway scenario, this CA certificate is how the IoT Edge device verifies its identity to downstream devices.
+ - *~/certs/private/iot-edge-device-mycacert.key.pem* - The private key associated with the device CA certificate.
+
+ To learn more about these demo certificates, see [Create demo certificates to test IoT Edge device features](../../iot-edge/how-to-create-test-certificates.md).
+
+1. Open the *config.toml* file in a text editor. For example:
+
+ ```bash
+ sudo nano /etc/aziot/config.toml
+ ```
+
+1. Locate the certificate settings section in the file and add the certificate settings as follows:
+
+ ```text
+ trust_bundle_cert = "file:///home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem"
+
+ [edge_ca]
+ cert = "file:///home/AzureUser/certs/certs/iot-edge-device-ca-mycacert-full-chain.cert.pem"
+ pk = "file:///home/AzureUser/certs/private/iot-edge-device-ca-mycacert.key.pem"
+ ```
+
+ The example shown above assumes you're signed in as **AzureUser** and created a device CA certificate called "mycacert".
+
+1. Save the changes and restart the IoT Edge runtime:
+
+ ```bash
+ sudo iotedge config apply
+ ```
+
+If the IoT Edge runtime starts successfully after your changes, the status of the **$edgeAgent** and **$edgeHub** modules changes to **Running** on the **Modules** page for your gateway device in IoT Central.
+
+If the runtime doesn't start, check the changes you made in the IoT Edge configuration file and see [Troubleshoot your IoT Edge device](../../iot-edge/troubleshoot.md).
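For IoT Edge 1.2, a similar quick check (a sketch using the 1.2 `iotedge system` commands) looks like this:

```bash
# Check the IoT Edge 1.2 system services, review their logs, and run the built-in checks.
sudo iotedge system status
sudo iotedge system logs
sudo iotedge check
```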
Your transparent gateway is now configured and ready to start forwarding telemetry from downstream devices.

## Provision a downstream device

IoT Central relies on the Device Provisioning Service (DPS) to provision devices in IoT Central. Currently, IoT Edge can't use DPS to provision a downstream device to your IoT Central application. The following steps show you how to provision the `thermostat1` device manually. To complete these steps, you need an environment with Python 3.6 (or higher) installed and internet connectivity. The [Azure Cloud Shell](https://shell.azure.com/) has Python 3.7 pre-installed:
IoT Central relies on the Device Provisioning Service (DPS) to provision devices
1. Run the following command to download the Python script that does the device provisioning:

   ```bash
- wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway/provision_device.py
+ wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-1/provision_device.py
   ```

1. To provision the `thermostat1` downstream device in your IoT Central application, run the following commands, replacing `{your application id scope}` and `{your device primary key}`. You made a note of these values when you added the devices to your IoT Central application:
In your IoT Central application, verify that the **Device status** for the `ther
In the previous section, you configured the `edgegateway` virtual machine with the demo certificates to enable it to run as gateway. The `leafdevice` virtual machine is ready for you to install a thermostat simulator that uses the gateway to connect to IoT Central.
-The `leafdevice` virtual machine needs a copy of the root CA certificate you created on the `edgegateway` virtual machine. Copy the */home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem* file from the `edgegateway` virtual machine to your home directory on the `leafdevice` virtual machine. You can use the **scp** command to copy files between Linux virtual machines.
+The `leafdevice` virtual machine needs a copy of the root CA certificate you created on the `edgegateway` virtual machine. Copy the */home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem* file from the `edgegateway` virtual machine to your home directory on the `leafdevice` virtual machine. You can use the **scp** command to copy files between Linux virtual machines. For example, from the `leafdevice` machine:
+
+```bash
+scp AzureUser@edgegateway:/home/AzureUser/certs/certs/azure-iot-test-only.root.ca.cert.pem .
+```
To learn how to check the connection from the downstream device to the gateway, see [Test the gateway connection](../../iot-edge/how-to-connect-downstream-device.md#test-the-gateway-connection).
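As a hedged example of such a check from the `leafdevice` virtual machine, a TLS handshake test against the gateway's MQTT port with the copied root CA looks roughly like this (the host name and certificate location match the setup above):

```bash
# Verify that the gateway presents a certificate chain that validates against the
# root CA certificate you copied to the downstream device. Port 8883 is MQTT over TLS.
# Look for "Verify return code: 0 (ok)" in the output.
openssl s_client -connect edgegateway:8883 -CAfile ~/azure-iot-test-only.root.ca.cert.pem -showcerts
```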
To run the thermostat simulator on the `leafdevice` virtual machine:
   ```bash
   cd ~
- wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway/simple_thermostat.py
+ wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-1/simple_thermostat.py
   ```

1. Install the Azure IoT device Python module:
iot-central Howto Create Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-analytics.md
Azure IoT Central provides rich analytics capabilities to analyze historical trends and correlate telemetry from your devices. To get started, select **Data explorer** on the left pane.
+> [!NOTE]
+> Only users in a role that have the necessary permissions can view, create, edit, and delete queries. To learn more, see [Manage users and roles in your IoT Central application](howto-manage-users-roles.md).
+
## Understand the data explorer UI

The analytics user interface has three main components:
iot-central Howto Manage Organizations With Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-organizations-with-rest-api.md
The IoT Central REST API lets you:
The REST API lets you create organizations in your IoT Central application. Use the following request to create an organization in your application:

```http
-PUT https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
+PUT https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
```

* organizationId - Unique ID of the organization
The response to this request looks like the following example:
}
```

### Get an organization

Use the following request to retrieve details of an individual organization from your application:

```http
-GET https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
+GET https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
```

The response to this request looks like the following example:
The response to this request looks like the following example:
Use the following request to update details of an organization in your application:

```http
-PATCH https://{subdomain}.{baseDomain}/api/organizations/{organizationId}?api-version=2022-05-31
+PATCH https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
```

The following example shows a request body that updates an organization.
Use the following request to delete an organization:
DELETE https://{your app subdomain}.azureiotcentral.com/api/organizations/{organizationId}?api-version=2022-05-31
```
+## Use organizations
+
+### Manage roles
+
+The REST API lets you list the roles defined in your IoT Central application. Use the following request to retrieve a list of application role and organization role IDs from your application. To learn more, see [How to manage IoT Central organizations](howto-create-organizations.md):
+
+```http
+GET https://{your app subdomain}.azureiotcentral.com/api/roles?api-version=2022-05-31
+```
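For example, you might call this endpoint with curl. This is a hypothetical sketch: the subdomain and API token are placeholders, and it assumes you pass an IoT Central API token in the `Authorization` header:

```bash
# List the roles in an IoT Central application (placeholder subdomain and token).
APP_SUBDOMAIN="myapp"          # replace with your application subdomain
API_TOKEN="<your API token>"   # replace with an API token from your application

curl -sS \
  -H "Authorization: $API_TOKEN" \
  "https://$APP_SUBDOMAIN.azureiotcentral.com/api/roles?api-version=2022-05-31"
```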
+
+The response to this request looks like the following example that includes the application role and organization role IDs.
+
+```json
+{
+ "value": [
+ {
+ "id": "ca310b8d-2f4a-44e0-a36e-957c202cd8d4",
+ "displayName": "Administrator"
+ },
+ {
+ "id": "ae2c9854-393b-4f97-8c42-479d70ce626e",
+ "displayName": "Operator"
+ },
+ {
+ "id": "344138e9-8de4-4497-8c54-5237e96d6aaf",
+ "displayName": "Builder"
+ },
+ {
+ "id": "c495eb57-eb18-489e-9802-62c474e5645c",
+ "displayName": "Org Admin"
+ },
+ {
+ "id": "b4935647-30e4-4ed3-9074-dcac66c2f8ef",
+ "displayName": "Org Operator"
+ },
+ {
+ "id": "84cc62c1-dabe-49d3-b16e-8b291232b285",
+ "displayName": "Org Viewer"
+ }
+ ]
+}
+```
+
+### Create an API token to a node in an organization hierarchy
+
+Use the following request to create an API token that's associated with a node in an organization hierarchy in your application:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/apiTokens/{tokenId}?api-version=2022-05-31
+```
+
+* tokenId - Unique ID of the token
+
+The following example shows a request body that creates an API token for an organization in an IoT Central application.
+
+```json
+{
+ "roles": [
+ {
+ "role": "84cc62c1-dabe-49d3-b16e-8b291232b285",
+ "organization": "seattle"
+ }
+ ]
+}
+```
+
+The request body has some required fields:
+
+|Name|Description|
+|-|--|
+|role|ID of one of the organization roles.|
+|organization|ID of the organization.|
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "token1",
+ "roles": [
+ {
+ "role": "84cc62c1-dabe-49d3-b16e-8b291232b285",
+ "organization": "seattle"
+ }
+ ],
+ "expiry": "2023-07-07T17:05:08.407Z",
+ "token": "SharedAccessSignature sr=8a0617**********************4c0d71c&sig=3RyX69G4%2FBZZnG0LXOjQv*************e8s%3D&skn=token1&se=1688749508407"
+}
+```
+
+### Associate a user with a node in an organization hierarchy
+
+Use the following request to create and associate a user with a node in an organization hierarchy in your application. The ID and email must be unique in the application:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/users/user-001?api-version=2022-05-31
+```
+
+In the following request body, `role` is the ID of one of the organization roles and `organization` is the ID of the organization:
+
+```json
+{
+ "id": "user-001",
+ "type": "email",
+ "roles": [
+ {
+ "role": "84cc62c1-dabe-49d3-b16e-8b291232b285",
+ "organization": "seattle"
+ }
+ ],
+ "email": "user5@contoso.com"
+
+}
+```
+
+The response to this request looks like the following example. The role value identifies which role the user is associated with:
+
+```json
+{
+ "id": "user-001",
+ "type": "email",
+ "roles": [
+ {
+ "role": "84cc62c1-dabe-49d3-b16e-8b291232b285",
+ "organization": "seattle"
+ }
+ ],
+ "email": "user5@contoso.com"
+}
+```
+
+### Add and associate a device to an organization
+
+Use the following request to add a new device and associate it with an organization:
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/devices/{deviceId}?api-version=2022-05-31
+```
+
+The following example shows a request body that adds a device for a device template. You can get the `template` details from the device templates page in the IoT Central application UI.
+
+```json
+{
+ "displayName": "CheckoutThermostat",
+ "template": "dtmi:contoso:Thermostat;1",
+ "simulated": true,
+ "enabled": true,
+ "organizations": [
+ "seattle"
+ ]
+}
+```
+
+The request body has some required fields:
+
+* `displayName`: Display name of the device.
+* `enabled`: Whether the device is enabled and allowed to connect.
+* `etag`: ETag used to prevent conflict in device updates.
+* `simulated`: Whether the device is simulated.
+* `template`: The device template definition for the device.
+* `organizations`: List of organization IDs that the device is a part of. Currently, you can only associate a device with a single organization.
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "thermostat1",
+ "etag": "eyJoZWFkZXIiOiJcIjI0MDAwYTdkLTAwMDAtMDMwMC0wMDAwLTYxYjgxZDIwMDAwMFwiIiwiZGF0YSI6IlwiMzMwMDQ1M2EtMDAwMC0wMzAwLTAwMDAtNjFiODFkMjAwMDAwXCIifQ",
+ "displayName": "CheckoutThermostat",
+ "simulated": true,
+ "provisioned": false,
+ "template": "dtmi:contoso:Thermostat;1",
+ "enabled": true,
+ "organizations": [
+ "seattle"
+ ]
+
+}
+```
+
+### Add and associate a device group to an organization
+
+Use the following request to create and associate a new device group with an organization.
+
+```http
+PUT https://{your app subdomain}.azureiotcentral.com/api/deviceGroups/{deviceGroupId}?api-version=2022-05-31
+```
+
+When you create a device group, you define a `filter` that selects the devices to add to the group. A `filter` identifies a device template and any properties to match. The following example creates a device group that contains all devices associated with the "dtmi:modelDefinition:dtdlv2" template where the `provisioned` property is true.
+
+```json
+{
+ "displayName": "Device group 1",
+ "description": "Custom device group.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true",
+ "organizations": [
+ "seattle"
+ ]
+}
+```
+
+The request body has some required fields:
+
+* `displayName`: Display name of the device group.
+* `filter`: Query defining which devices should be in this group.
+* `description`: Short summary of the device group.
+* `organizations`: List of organization IDs that the device group is a part of. Currently, you can only associate a device group with a single organization.
+
+The response to this request looks like the following example:
+
+```json
+{
+ "id": "group1",
+ "displayName": "Device group 1",
+ "description": "Custom device group.",
+ "filter": "SELECT * FROM devices WHERE $template = \"dtmi:modelDefinition:dtdlv2\" AND $provisioned = true",
+ "organizations": [
+ "seattle"
+ ]
+}
+```
## Next steps

Now that you've learned how to manage organizations with the REST API, a suggested next step is [How to use the IoT Central REST API to manage data exports](howto-manage-data-export-with-rest-api.md).
iot-edge How To Provision Single Device Linux Symmetric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-symmetric.md
Previously updated : 06/06/2022 Last updated : 07/11/2022
Remove the IoT Edge runtime.
::: moniker range="iotedge-2018-06" ```bash
-sudo apt-get remove iotedge
+sudo apt-get autoremove iotedge
``` ::: moniker-end
sudo apt-get remove iotedge
# [Ubuntu / Debian / Raspberry Pi OS](#tab/ubuntu+debian+rpios) ```bash
-sudo apt-get remove aziot-edge
+sudo apt-get autoremove --purge aziot-edge
```
-Use the `--purge` flag if you want to delete all the files associated with IoT Edge, including your configuration files. Leave this flag out if you want to reinstall IoT Edge and use the same configuration information in the future.
+Leave out the `--purge` flag if you plan to reinstall IoT Edge and use the same configuration information in the future. The `--purge` flag deletes all the files associated with IoT Edge, including your configuration files.
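For example, after removing the runtime without `--purge`, you can confirm that the configuration is still in place (a sketch assuming the default configuration location used by aziot-edge):

```bash
# The IoT Edge configuration (config.toml and related files) should still exist
# if you removed the runtime without the --purge flag.
ls /etc/aziot
```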
# [Red Hat Enterprise Linux](#tab/rhel) ```bash
Finally, remove the container runtime from your device.
# [Ubuntu / Debian / Raspberry Pi OS](#tab/ubuntu+debian+rpios) ```bash
-sudo apt-get remove --purge moby-cli
-sudo apt-get remove --purge moby-engine
+sudo apt-get autoremove --purge moby-engine
``` # [Red Hat Enterprise Linux](#tab/rhel)
iot-edge How To Provision Single Device Linux X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-provision-single-device-linux-x509.md
Previously updated : 06/03/2022 Last updated : 07/11/2022
Remove the IoT Edge runtime.
::: moniker range="iotedge-2018-06" ```bash
-sudo apt-get remove iotedge
+sudo apt-get autoremove iotedge
``` ::: moniker-end
sudo apt-get remove iotedge
# [Ubuntu / Debian / Raspberry Pi OS](#tab/ubuntu+debian+rpios) ```bash
-sudo apt-get remove aziot-edge
+sudo apt-get autoremove --purge aziot-edge
```
-Use the `--purge` flag if you want to delete all the files associated with IoT Edge, including your configuration files. Leave this flag out if you want to reinstall IoT Edge and use the same configuration information in the future.
+Leave out the `--purge` flag if you plan to reinstall IoT Edge and use the same configuration information in the future. The `--purge` flag deletes all the files associated with IoT Edge, including your configuration files.
# [Red Hat Enterprise Linux](#tab/rhel) ```bash
sudo docker rm -f <container name>
Finally, remove the container runtime from your device. ```bash
-sudo apt-get remove --purge moby-cli
-sudo apt-get remove --purge moby-engine
+sudo apt-get autoremove --purge moby-engine
``` ## Next steps
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
The systems listed in the following table are considered compatible with Azure I
| - | -- | - | -- | | [CentOS-7](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7) | ![CentOS + AMD64](./media/support/green-check.png) | ![CentOS + ARM32v7](./media/support/green-check.png) | ![CentOS + ARM64](./media/support/green-check.png) | | [Debian 9](https://www.debian.org/releases/stretch/) | ![Debian 9 + AMD64](./media/support/green-check.png) | ![Debian 9 + ARM32v7](./media/support/green-check.png) | ![Debian 9 + ARM64](./media/support/green-check.png) |
-| [Debian 10](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/support/green-check.png) | ![Debian 10 + ARM32v7](./media/support/green-check.png) | ![Debian 10 + ARM64](./media/support/green-check.png) |
+| [Debian 10 <sup>1</sup>](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/support/green-check.png) | ![Debian 10 + ARM32v7](./media/support/green-check.png) | ![Debian 10 + ARM64](./media/support/green-check.png) |
| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) | | [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) | | [Mentor Embedded Linux Omni OS](https://www.mentor.com/embedded-software/linux/mel-omni-os/) | ![Mentor Embedded Linux Omni OS + AMD64](./media/support/green-check.png) | | ![Mentor Embedded Linux Omni OS + ARM64](./media/support/green-check.png) | | [RHEL 7](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7) | ![RHEL 7 + AMD64](./media/support/green-check.png) | ![RHEL 7 + ARM32v7](./media/support/green-check.png) | ![RHEL 7 + ARM64](./media/support/green-check.png) |
-| [Ubuntu 18.04 <sup>1</sup>](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | | ![Ubuntu 18.04 + ARM32v7](./media/support/green-check.png) | |
-| [Ubuntu 20.04 <sup>1</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | |
+| [Ubuntu 18.04 <sup>2</sup>](https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes) | | ![Ubuntu 18.04 + ARM32v7](./media/support/green-check.png) | |
+| [Ubuntu 20.04 <sup>2</sup>](https://wiki.ubuntu.com/FocalFossa/ReleaseNotes) | | ![Ubuntu 20.04 + ARM32v7](./media/support/green-check.png) | |
| [Wind River 8](https://docs.windriver.com/category/os-wind_river_linux) | ![Wind River 8 + AMD64](./media/support/green-check.png) | | | | [Yocto](https://www.yoctoproject.org/) | ![Yocto + AMD64](./media/support/green-check.png) | ![Yocto + ARM32v7](./media/support/green-check.png) | ![Yocto + ARM64](./media/support/green-check.png) | | Raspberry Pi OS Buster | | ![Raspberry Pi OS Buster + ARM32v7](./media/support/green-check.png) | ![Raspberry Pi OS Buster + ARM64](./media/support/green-check.png) |
-<sup>1</sup> Installation packages are made available on the [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). See the installation steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
+<sup>1</sup> With the release of 1.3, there are new system calls that cause crashes on Debian 10. For a workaround, see the [Known issue: Debian 10 (Buster) on ARMv7](https://github.com/Azure/azure-iotedge/releases) section of the 1.3 release notes.
+
+<sup>2</sup> Installation packages are made available on the [Azure IoT Edge releases](https://github.com/Azure/azure-iotedge/releases). See the installation steps in [Offline or specific version installation](how-to-provision-single-device-linux-symmetric.md#offline-or-specific-version-installation-optional).
## Releases
iot-fundamentals Iot Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-fundamentals/iot-support-help.md
Previously updated : 6/10/2020 Last updated : 7/11/2022
iot-hub-device-update Device Update Configuration File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configuration-file.md
Title: Understand Device Update for Azure IoT Hub Configuration File| Microsoft
description: Understand Device Update for Azure IoT Hub Configuration File. Previously updated : 12/13/2021 Last updated : 06/27/2022
-# Device Update for IoT Hub Configuration File
+# Device Update for IoT Hub configuration file
-The "du-config.json" is a file that contains the below configurations for the Device Update agent. The Device Update Agent will then read these values and report them to the Device Update Service.
+The Device Update agent gets its configuration information from the `du-config.json` file on the device. The agent reads these values and reports them to the Device Update service:
* AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["manufacturer"] * AzureDeviceUpdateCore:4.ClientMetadata:4.deviceProperties["model"] * DeviceInformation.manufacturer * DeviceInformation.model
-* connectionData
+* connectionData
* connectionType
-
+ ## File location
-When installing Debian agent on an IoT Device with a Linux OS, modify the '/etc/adu/du-config.json' file to update values. For a Yocto build system, in the partition or disk called 'adu' create a json file called '/adu/du-config.json'.
+When installing the Debian agent on an IoT device with a Linux OS, modify the `/etc/adu/du-config.json` file to update values. For a Yocto build system, create a JSON file called `/adu/du-config.json` in the partition or disk called `adu`.
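For example, on a device that uses the Debian package install path, you can open the file for editing with any text editor (nano is shown here only as an illustration):

```bash
# Edit the Device Update agent configuration file (requires elevated permissions).
sudo nano /etc/adu/du-config.json
```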
## List of fields
-|Name|Description|
+| Name | Description |
|--|--|
-|SchemaVersion|The schema version that maps the current configuration file format version|
-|aduShellTrustedUsers|The list of users that can launch the 'adu-shell' program. Note, 'adu-shell' is a "broker" program that does various Update Actions, as 'root'. The Device Update default content update handlers invoke 'adu-shell' to do tasks that require "super user" privilege. Examples of tasks that require this privilege are "apt-get install" or executing a privileged scripts.|
-|aduc_manufacturer|Reported by the `AzureDeviceUpdateCore:4.ClientMetadata:4` interface to classify the device for targeting the update deployment.|
-|aduc_model|Reported by the `AzureDeviceUpdateCore:4.ClientMetadata:4` interface to classify the device for targeting the update deployment.|
-|connectionType|Possible values "string" when connecting the device to IoT Hub manually for testing purposes. For production scenarios, use value "AIS" when using the IoT Identity Service to connect the device to IoT Hub. See [understand IoT Identity Service configurations](https://azure.github.io/iot-identity-service/configuration.html)|
-|connectionData|If connectionType = "string", add the value from your IoT Device's, device or module connection string here. If connectionType = "AIS", set the connectionData to empty string("connectionData": "").
-|manufacturer|Reported by the Device Update Agent as part of the `DeviceInformation` interface.|
-|model|Reported by the Device Update Agent as part of the `DeviceInformation` interface.|
-
+| SchemaVersion | The schema version that maps the current configuration file format version. |
+| aduShellTrustedUsers | The list of users that can launch the **adu-shell** program. Note, adu-shell is a broker program that does various update actions as 'root'. The Device Update default content update handlers invoke adu-shell to do tasks that require super user privilege. Examples of tasks that require this privilege are `apt-get install` or executing a privileged script. |
+| aduc_manufacturer | Reported by the **AzureDeviceUpdateCore:4.ClientMetadata:4** interface to classify the device for targeting the update deployment. |
+| aduc_model | Reported by the **AzureDeviceUpdateCore:4.ClientMetadata:4** interface to classify the device for targeting the update deployment. |
+| connectionType | Accepted values are `string` or `AIS`. Use `string` when connecting the device to IoT Hub manually for testing purposes. For production scenarios, use `AIS` when using the IoT Identity Service to connect the device to IoT Hub. For more information, see [understand IoT Identity Service configurations](https://azure.github.io/iot-identity-service/configuration.html). |
+| connectionData |If connectionType = "string", add your IoT device's device or module connection string here. If connectionType = "AIS", set the connectionData to empty string (`"connectionData": ""`). |
+| manufacturer | Reported by the Device Update agent as part of the **DeviceInformation** interface. |
+| model | Reported by the Device Update agent as part of the **DeviceInformation** interface. |
## Example "du-config.json" file contents
-```markdown
+```json
{ "schemaVersion": "1.1",
iot-hub-device-update Device Update Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-error-codes.md
Title: Error codes for Device Update for Azure IoT Hub | Microsoft Docs
description: This document provides a table of error codes for various Device Update components. Previously updated : 1/26/2022 Last updated : 06/28/2022
-# Device Update for IoT Hub Error Codes
+# Device Update for IoT Hub error codes
This document provides a table of error codes for various Device Update components. It's meant to be used as a reference for users who want to parse their own error codes to diagnose and troubleshoot issues.
There are two primary client-side components that may throw error codes: the Dev
### ResultCode and ExtendedResultCode
-The Device Update for IoT Hub Core PnP interface reports `ResultCode` and `ExtendedResultCode`, which can be used to diagnose failures. [Learn More](device-update-plug-and-play.md) about the Device Update Core PnP interface.
+The Device Update for IoT Hub Core PnP interface reports `ResultCode` and `ExtendedResultCode`, which can be used to diagnose failures. For more information about the Device Update Core PnP interface, see [Device Update and Plug and Play](device-update-plug-and-play.md).
`ResultCode` is a general status code and `ExtendedResultCode` is an integer with encoded error information.
The DO error code can be obtained by examining the exceptions thrown in response
+ Facility code (4 bits) ```
-Refer to [Device Update Agent result codes and extended result codes](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/device-update-agent-extended-result-codes.md) or [implement a custom Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers) for details on parsing codes.
+For more information about parsing codes, see [Device Update Agent result codes and extended result codes](https://github.com/Azure/iot-hub-device-update/tree/main/docs/agent-reference/device-update-agent-extended-result-codes.md) or [implement a custom Content Handler](https://github.com/Azure/iot-hub-device-update/tree/main/src/content_handlers).
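As a small, hypothetical illustration of the first parsing step, converting a decimal `ExtendedResultCode` value reported by the device into hexadecimal makes the encoded facility and error fields easier to read:

```bash
# Placeholder value for illustration only; use the ExtendedResultCode reported by your device.
EXTENDED_RESULT_CODE=805306372
printf '0x%08X\n' "$EXTENDED_RESULT_CODE"   # prints 0x30000004
```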
## Device Update content service
-The following table lists error codes pertaining to the content service component of the Device Update service. The content service component is responsible for handling importing of update content. More troubleshooting information is also available for [importing proxy updates](device-update-proxy-update-troubleshooting.md).
-
-| Error Code | String Error | Next steps |
-|-|-||
-| "UpdateAlreadyExists" | Update with the same identity already exists. | Make sure you're importing an update that hasnΓÇÖt already been imported into this instance of Device Update for IoT Hub. |
-| "DuplicateContentImport" | Identical content imported simultaneously multiple times. | Same as for UpdateAlreadyExists. |
-| "CannotProcessImportManifest" | Error processing import manifest. | Refer to [import concepts](./import-concepts.md) and [import update](./create-update.md) documentation for proper import manifest formatting. |
-| "CannotDownload" | Cannot download import manifest. | Check to make sure the URL for the import manifest file is still valid. |
-| "CannotParse" | Cannot parse import manifest. | Check your import manifest for accuracy against the schema defined in the [import update](./create-update.md) documentation. |
-| "UnsupportedVersion" | Import manifest schema version is not supported. | Make sure your import manifest is using the latest schema defined in the [import update](./create-update.md) documentation. |
-| Error importing update due to exceeded limit. | Cannot import additional update provider. | You've reached a [limit](device-update-limits.md) on the number of different __Providers__ allowed in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. |
-| Error importing update due to exceeded limit. | Cannot import additional update name for the specified provider. | You've reached a [limit](device-update-limits.md) on the number of different __Names__ allowed under one Provider in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. |
-| Error importing update due to exceeded limit. | Cannot import additional update version for the specified provider and name. | You've reached a [limit](device-update-limits.md) on the number of different __Versions__ allowed under one Provider and Name in your instance of Device Update for IoT Hub. Delete some updates with that Name from your instance and try again. |
-| Error importing update due to exceeded limit. | Cannot import additional update provider with the specified compatibility. | When defining [compatibility properties](import-schema.md#compabilityinfo-object) in an import manifest, keep in mind that Device Update for IoT Hub supports a single Provider and Name combination for a given set of compatibility properties. If you try to use the same compatibility properties with more than one Provider/Name combination, you'll see these errors. To resolve this issue, make sure that all updates for a given device (as defined by compatibility properties) use the same Provider and Name. |
-| Error importing update due to exceeded limit. | Cannot import additional update name with the specified compatibility. | When defining device [compatibility properties](import-schema.md#compabilityinfo-object) in an import manifest, keep in mind that Device Update for IoT Hub supports a single Provider and Name combination for a given set of compatibility properties. If you try to use the same compatibility properties with more than one Provider/Name combination, you'll see these errors. To resolve this issue, make sure that all updates for a given device (as defined by compatibility properties) use the same Provider and Name. |
-| Error importing update due to exceeded limit. | Cannot import additional update version with the specified compatibility. | When defining device [compatibility properties](import-schema.md#compabilityinfo-object) in an import manifest, keep in mind that Device Update for IoT Hub supports a single Provider and Name combination for a given set of compatibility properties. If you try to use the same compatibility properties with more than one Provider/Name combination, you'll see these errors. To resolve this issue, make sure that all updates for a given device (as defined by compatibility properties) use the same Provider and Name. |
-| "CannotProcessUpdateFile" | Error processing source file. | |
-| "ContentFileCannotDownload" | Cannot download source file. | Check to make sure the URL for the update file(s) is still valid. |
-| "SourceFileMalwareDetected" | A known malware signature was detected in a file being imported. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. If a known malware signature is identified, the import will fail and a unique error message will be returned. The error message contains the description of the malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you have removed the malware from any files being imported, you can start the import process again. |
-| "SourceFilePendingMalwareAnalysis" | A signature was detected in a file being imported that may indicate malware is present. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. The import will fail if a scan signature has _characteristics_ of malware, even if there is not an exact match to known malware. When this occurs, a unique error message will be returned. The error message contains the description of the suspected malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you've removed the malware from any files being imported, you can start the import process again. If you're certain your files are free of malware and continue to see this error, use the [Contact Microsoft Support](troubleshoot-device-update.md#contact) process. |
-
-**[Next Step: Troubleshoot issues with Device Update](.\troubleshoot-device-update.md)**
+
+The following table lists error codes pertaining to the content service component of the Device Update service. The content service component is responsible for importing update content. More troubleshooting information is also available for [importing proxy updates](device-update-proxy-update-troubleshooting.md).
+
+| Error code | String error | Next steps |
+|--|--|--|
+| UpdateAlreadyExists | Update with the same identity already exists. | Make sure you're importing an update that hasn't already been imported into this instance of Device Update for IoT Hub. |
+| DuplicateContentImport | Identical content imported simultaneously multiple times. | Make sure you're importing an update that hasn't already been imported into this instance of Device Update for IoT Hub. |
+| CannotProcessImportManifest | Error processing import manifest. | Refer to [import concepts](import-concepts.md) and [import update](create-update.md) documentation for proper import manifest formatting. |
+| CannotDownload | Cannot download import manifest. | Check to make sure the URL for the import manifest file is still valid. |
+| CannotParse | Cannot parse import manifest. | Check your import manifest for accuracy against the schema defined in the [import update](create-update.md) documentation. |
+| UnsupportedVersion | Import manifest schema version is not supported. | Make sure your import manifest is using the latest schema defined in the [import update](create-update.md) documentation. |
+| Error importing update due to exceeded limit. | Cannot import additional update provider. | You've reached a [limit](device-update-limits.md) on the number of different __providers__ allowed in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. |
+| Error importing update due to exceeded limit. | Cannot import additional update name for the specified provider. | You've reached a [limit](device-update-limits.md) on the number of different __names__ allowed under one provider in your instance of Device Update for IoT Hub. Delete some updates from your instance and try again. |
+| Error importing update due to exceeded limit. | Cannot import additional update version for the specified provider and name. | You've reached a [limit](device-update-limits.md) on the number of different __versions__ allowed under one provider and name in your instance of Device Update for IoT Hub. Delete some updates with that name from your instance and try again. |
+| Error importing update due to exceeded limit. | Cannot import additional update provider with the specified compatibility.<br><br>_or_<br><br>Cannot import additional update name with the specified compatibility.<br><br>_or_<br><br>Cannot import additional update version with the specified compatibility. | When defining [compatibility properties](import-schema.md#compatibility-object) in an import manifest, keep in mind that Device Update for IoT Hub supports a single provider and name combination for a given set of compatibility properties. If you try to use the same compatibility properties with more than one provider/name combination, you'll see these errors. To resolve this issue, make sure that all updates for a given device (as defined by compatibility properties) use the same provider and name. |
+| CannotProcessUpdateFile | Error processing source file. | |
+| ContentFileCannotDownload | Cannot download source file. | Check to make sure the URL for the update file(s) is still valid. |
+| SourceFileMalwareDetected | A known malware signature was detected in a file being imported. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. If a known malware signature is identified, the import fails and a unique error message is returned. The error message contains the description of the malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware. <br><br>Once you have removed the malware from any files being imported, you can start the import process again. |
+| SourceFilePendingMalwareAnalysis | A signature was detected in a file being imported that may indicate malware is present. | Content imported into Device Update for IoT Hub is scanned for malware by several different mechanisms. The import fails if a scan signature has characteristics of malware, even if there is not an exact match to known malware. When this occurs, a unique error message is returned. The error message contains the description of the suspected malware signature, and a file hash for each file where the signature was detected. You can use the file hash to find the exact file being flagged, and use the description of the malware signature to check that file for malware.<br><br>Once you've removed the malware from any files being imported, you can start the import process again. If you're certain your files are free of malware and continue to see this error, use the [Contact Microsoft Support](troubleshoot-device-update.md#contact) process. |
+
+## Next steps
+
+[Troubleshoot issues with Device Update](.\troubleshoot-device-update.md)
iot-hub-device-update Import Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-concepts.md
Title: Understand Device Update for IoT Hub importing | Microsoft Docs
description: Key concepts for importing a new update into Device Update for IoT Hub. Previously updated : 2/10/2021 Last updated : 06/27/2022 # Importing updates into Device Update for IoT Hub
-In order to deploy an update to devices from Device Update for IoT Hub, you first have to _import_ that update into the Device Update service. Here is an overview of some important concepts to understand when it comes to importing updates.
+In order to deploy an update to devices from Device Update for IoT Hub, you first have to import that update into the Device Update service. This article provides an overview of some important concepts to understand when it comes to importing updates.
## Import manifest
-An import manifest is a JSON file that defines important information about the update that you are importing. You will submit both your import manifest and associated update file or files (such as a firmware update package) as part of the import process. The metadata that is defined in the import manifest is used to ingest the update. Some of the metadata is also used at deployment time - for example, to validate if an update was installed correctly.
+An import manifest is a JSON file that defines important information about the update that you're importing. You submit both your import manifest and associated update file or files (such as a firmware update package) as part of the import process. The metadata that is defined in the import manifest is used to ingest the update. Some of the metadata is also used at deployment time - for example, to validate if an update was installed correctly.
-**Example**
+For example:
```json {
An import manifest is a JSON file that defines important information about the u
} ```
-The import manifest contains several items which represent important Device Update for IoT Hub concepts. These are outlined in this section. The full schema is documented [here](./import-schema.md).
+The import manifest contains several items that represent important Device Update for IoT Hub concepts. These items are outlined in this section. For information about the full import schema, see [Import manifest JSON schema](./import-schema.md).
-### Update identity (updateId)
+### Update identity
+
+The *update identity* or *updateId* is the unique identifier for an update in Device Update for IoT Hub. It's composed of three parts:
-*Update identity* is the unique identifer for an update in Device Update for IoT Hub. It is composed of three parts:
- **Provider**: entity who is creating or directly responsible for the update. It will often be a company name.
- **Name**: identifier for a class of updates. It will often be a device class or model name.
-- **Version**: a version number distinguishing this update from others that have the same Provider and Name.
+- **Version**: a version number distinguishing this update from others that have the same provider and name.
+
+For example:
+
+```json
+{
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster",
+ "version": "1.0"
+ }
+}
+```
> [!NOTE]
-> UpdateId is used by Device Update for IoT Hub service only, and may be different from identity of actual software component on the device.
+> UpdateId is used by the Device Update service only, and may be different from the identities of actual software components on the device.
### Compatibility
-*Compatibility* defines the criteria of a device that can install the update. It contains device properties - a set of arbitrary key value pairs that are reported from a device. Only devices with matching properties will be eligible for deployment. An update may be compatible with multiple device classes by having more than one set of device properties.
+*Compatibility* defines the criteria of a device that can install the update. It contains device properties, which are a set of arbitrary key-value pairs reported from a device. Only devices with matching properties will be eligible for deployment. An update may be compatible with multiple device classes by having more than one set of device properties.
-Here is an example of an update that can only be deployed to a device that reports *Contoso* and *Toaster* as its device manufacturer and model.
+Here's an example of an update that can only be deployed to a device that reports *Contoso* and *Toaster* as its device manufacturer and model.
```json {
Here is an example of an update that can only be deployed to a device that repor
### Instructions
-The *Instructions* part contains the necessary information or *steps* for device agent to install the update. The simplest update contains a single *inline* step. That step executes the included payload file using a *handler* registered with the device agent:
+The *Instructions* part contains the necessary information or *steps* for the device agent to install the update. The simplest update contains a single inline step. That step executes the included payload file using a *handler* registered with the device agent:
```json {
An update may contain more than one step:
} ```
-An update may contain *reference* step which instructs device agent to install another update with its own import manifest altogether, establishing a *parent* and *child* update relationship. For example an update for a toaster may contain two child updates:
+An update may contain *reference* steps that instruct the device agent to install another update with its own import manifest altogether, establishing a parent and child update relationship. For example, an update for a toaster may contain two child updates:
```json {
An update may contain *reference* step which instructs device agent to install a
``` > [!NOTE]
-> An update may contain any combination of *inline* and *reference* steps.
+> An update may contain any combination of inline and reference steps.
### Files
-The *Files* part contains the metadata of update payload files like their names, sizes, and hash. Device Update for IoT Hub uses this metadata for integrity validation during import process. The same information is then forwarded to device agent to repeat the integrity validation prior to installation.
+The *Files* part contains the metadata of update payload files like their names, sizes, and hashes. Device Update for IoT Hub uses this metadata for integrity validation during the import process. The same information is then forwarded to the device agent to repeat the integrity validation prior to installation.
> [!NOTE]
-> An update that contains *reference* steps only will not have any update payload file in the parent update.
+> An update that contains only reference steps won't have any update payload file in the parent update.
## Create an import manifest

You may use any text editor to create an import manifest JSON file. There are also sample scripts for creating import manifests programmatically in [Azure/iot-hub-device-update](https://github.com/Azure/iot-hub-device-update/tree/main/tools/AduCmdlets) on GitHub.

> [!IMPORTANT]
-> Import manifest JSON filename must end with `.importmanifest.json` when imported through Microsoft Azure portal.
+> An import manifest JSON filename must end with `.importmanifest.json` when imported through Azure portal.
> [!TIP]
> Use [Visual Studio Code](https://code.visualstudio.com) to enable autocomplete and JSON schema validation when creating an import manifest.

## Limits on importing updates
-Certain limits are enforced for each Device Update for IoT Hub instance. If you have not already reviewed them, please see [Device Update limits](./device-update-limits.md).
+Certain limits are enforced for each Device Update for IoT Hub instance. If you haven't already reviewed them, see [Device Update limits](./device-update-limits.md).
## Next steps -- Try out the [Import How-To guide](./create-update.md), which will walk you through the import process step by step.-- Review [Import Manifest Schema](./import-schema.md).
+- To learn more about the import process, see [Prepare an update to import](./create-update.md).
+- Review the [Import manifest schema](./import-schema.md).
iot-hub-device-update Import Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/import-schema.md
Title: Importing updates into Device Update for IoT Hub - schema and other infor
description: Schema and other related information (including objects) that is used when importing updates into Device Update for IoT Hub. Previously updated : 2/25/2021 Last updated : 06/27/2022
-# Importing updates into Device Update for IoT Hub - schema and other information
-If you want to import an update into Device Update for IoT Hub, be sure you've reviewed the [concepts](import-concepts.md) and [How-To guide](import-update.md) first. If you're interested in the details of import manifest schema, or information about API permissions, see below.
+# Importing updates into Device Update for IoT Hub: schema and other information
-## Import manifest JSON schema version 4.0
+If you want to import an update into Device Update for IoT Hub, be sure you've reviewed the [concepts](import-concepts.md) and [how-to guide](import-update.md) first. If you're interested in the details of import manifest schema, or information about API permissions, see below.
-Import manifest JSON schema is hosted at [SchemaStore.org](https://json.schemastore.org/azure-deviceupdate-import-manifest-4.0.json).
+The import manifest JSON schema is hosted at [SchemaStore.org](https://json.schemastore.org/azure-deviceupdate-import-manifest-4.0.json).
-### Schema
+## Schema
-**Properties**
-
-|Name|Type|Description|Required|
+|Property|Type|Description|Required|
|||||
|**$schema**|`string`|JSON schema reference.|No|
|**updateId**|`updateId`|Unique update identifier.|Yes|
-|**description**|`string`|Optional update description.|No|
+|**description**|`string`|Optional update description.<br><br>Maximum length: 512 characters|No|
|**compatibility**|`compatibility`|List of device property sets this update is compatible with.|Yes|
|**instructions**|`instructions`|Update installation instructions.|Yes|
|**files**|`file` `[0-10]`|List of update payload files. Sum of all file sizes may not exceed 2 GB. May be empty or null if all instruction steps are reference steps.|No|
|**manifestVersion**|`string`|Import manifest schema version. Must be 4.0.|Yes|
-|**createdDateTime**|`string`|Date & time import manifest was created in ISO 8601 format.|Yes|
-
-Additional properties are not allowed.
-
-#### $schema
-
-JSON schema reference.
-
-* **Type**: `string`
-* **Required**: No
-
-#### updateId
-
-Unique update identifier.
-
-* **Type**: `updateId`
-* **Required**: Yes
-
-#### description
-
-Optional update description.
-
-* **Type**: `string`
-* **Required**: No
-* **Minimum Length**: `>= 1`
-* **Maximum Length**: `<= 512`
-
-#### compatibility
-
-List of device property sets this update is compatible with.
-
-* **Type**: `compatibility`
-* **Required**: Yes
-
-#### instructions
-
-Update installation instructions.
-
-* **Type**: `instructions`
-* **Required**: Yes
-
-#### files
-
-List of update payload files. Sum of all file sizes may not exceed 2 GB. May be empty or null if all instruction steps are reference steps.
+|**createdDateTime**|`string`|Date & time import manifest was created in ISO 8601 format.<br><br>Example: `"2020-10-02T22:18:04.9446744Z"`|Yes|
-* **Type**: `file` `[0-10]`
-* **Required**: No
+Additional properties aren't allowed.
-#### manifestVersion
+## updateId object
-Import manifest schema version. Must be `4.0`.
+The *updateID* object is a unique identifier for each update.
-* **Type**: `string`
-* **Required**: Yes
-
-#### createdDateTime
-
-Date & time import manifest was created in ISO 8601 format.
-
-* **Type**: `string`
-* **Required**: Yes
-* **Examples**:
- * `"2020-10-02T22:18:04.9446744Z"`
-
-### updateId object
-
-Unique update identifier.
-
-**`Update identity` Properties**
-
-|Name|Type|Description|Required|
+|Property|Type|Description|Required|
|||||
-|**provider**|`string`|Entity who is creating or directly responsible for the update. It can be a company name.|Yes|
-|**name**|`string`|Identifier for a class of update. It can be a device class or model name.|Yes|
-|**version**|`string`|Two to four part dot separated numerical version numbers. Each part must be a number between 0 and 2147483647 and leading zeroes will be dropped.|Yes|
-
-Additional properties are not allowed.
-
-#### updateId.provider
+|**provider**|`string`|Entity who is creating or directly responsible for the update. It can be a company name.<br><br>Pattern: `^[a-zA-Z0-9.-]+$`<br>Maximum length: 64 characters|Yes|
+|**name**|`string`|Identifier for a class of update. It can be a device class or model name.<br><br>Pattern: `^[a-zA-Z0-9.-]+$`<br>Maximum length: 64 characters|Yes|
+|**version**|`string`|Two- to four-part dot-separated numerical version numbers. Each part must be a number between 0 and 2147483647 and leading zeroes will be dropped.<br><br>Pattern: `^\d+(?:\.\d+)+$`<br>Examples: `"1.0"`, `"2021.11.8"`|Yes|
-Entity who is creating or directly responsible for the update. It can be a company name.
+Additional properties aren't allowed.
-* **Type**: `string`
-* **Required**: Yes
-* **Pattern**: `^[a-zA-Z0-9.-]+$`
-* **Minimum Length**: `>= 1`
-* **Maximum Length**: `<= 64`
+For example:
-#### updateId.name
+```json
+{
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster",
+ "version": "1.0"
+ }
+}
+```
-Identifier for a class of update. It can be a device class or model name.
+## compatibility object
-* **Type**: `string`
-* **Required**: Yes
-* **Pattern**: `^[a-zA-Z0-9.-]+$`
-* **Minimum Length**: `>= 1`
-* **Maximum Length**: `<= 64`
-
-#### updateId.version
-
-Two to four part dot separated numerical version numbers. Each part must be a number between 0 and 2147483647 and leading zeroes will be dropped.
-
-* **Type**: `string`
-* **Required**: Yes
-* **Pattern**: `^\d+(?:\.\d+)+$`
-* **Examples**:
- * `"1.0"`
- * `"2021.11.8"`
-
-### compabilityInfo object
-
-Properties of a device this update is compatible with.
+The *compatibility* object describes the properties of a device that this update is compatible with.
* **Type**: `object`
* **Minimum Properties**: `1`
Each property is a name-value pair of type string.
* **Minimum Property Value Length**: `1`
* **Maximum Property Value Length**: `64`
-_Note that the same exact set of compatibility properties cannot be used with more than one Update Provider and Name combination._
+The same exact set of compatibility properties can't be used with more than one update provider and name combination.
-### instructions object
-
-Update installation instructions.
-
-**Properties**
-
-|Name|Type|Description|Required|
-|||||
-|**steps**|`array[1-10]`||Yes|
+For example:
-Additional properties are not allowed.
+```json
+{
+ "compatibility": [
+ {
+ "deviceManufacturer": "Contoso",
+ "deviceModel": "Toaster"
+ }
+ ]
+}
+```
-#### instructions.steps
+## instructions object
-* **Type**: `array[1-10]`
- * Each element in the array must be one of the following values:
- * `inlineStep` object
- * `referenceStep` object
-* **Required**: Yes
+The *instructions* object provides the update installation instructions as a list of steps to perform. Each step is either code to execute or a pointer to another update.
-### inlineStep object
-
-Installation instruction step that performs code execution.
-
-**Properties**
-
-|Name|Type|Description|Required|
+|Property|Type|Description|Required|
|||||
-|**type**|`string`|Instruction step type that performs code execution.|No|
-|**description**|`string`|Optional instruction step description.|No|
-|**handler**|`string`|Identity of handler on device that can execute this step.|Yes|
-|**files**|`string` `[1-10]`|Names of update files that agent will pass to handler.|Yes|
+|**steps**|`array[1-10]`|Each element in the array must be either an [inlineStep object](#inlinestep-object) or a [referenceStep object](#referencestep-object).|Yes|
+
+Additional properties aren't allowed.
+
+For example:
+
+```json
+{
+ "instructions": {
+ "steps": [
+ {
+ "type": "inline",
+ ...
+ },
+ {
+ "type": "reference",
+ ...
+ }
+ ]
+ }
+}
+```
+
+## inlineStep object
+
+An *inline* step object is an installation instruction step that performs code execution.
+
+|Property|Type|Description|Required|
+|||||
+|**type**|`string`|Instruction step type that performs code execution. Must be `inline`.<br><br>Defaults to `inline` if no value is provided.|No|
+|**description**|`string`|Optional instruction step description.<br><br>Maximum length: 64 characters|No|
+|**handler**|`string`|Identity of the handler on the device that can execute this step.<br><br>Pattern: `^\S+/\S+:\d{1,5}$`<br>Minimum length: 5 characters<br>Maximum length: 32 characters<br>Examples: `microsoft/script:1`, `microsoft/swupdate:1`, `microsoft/apt:1` |Yes|
+|**files**|`string` `[1-10]`| Names of update files defined as [file objects](#file-object) that the agent will pass to the handler. Each element in the array must have length between 1 and 255 characters. |Yes|
|**handlerProperties**|`inlineStepHandlerProperties`|JSON object that the agent will pass to the handler as arguments.|No|
-Additional properties are not allowed.
-
-#### inlineStep.type
-
-Instruction step type that performs code execution. Must be `inline`.
-
-* **Type**: `string`
-* **Required**: No
-
-#### inlineStep.description
-
-Optional instruction step description.
-
-* **Type**: `string`
-* **Required**: No
-* **Minimum Length**: `>= 1`
-* **Maximum Length**: `<= 64`
-
-#### inlineStep.handler
-
-Identity of handler on device that can execute this step.
-
-* **Type**: `string`
-* **Required**: Yes
-* **Pattern**: `^\S+/\S+:\d{1,5}$`
-* **Minimum Length**: `>= 5`
-* **Maximum Length**: `<= 32`
-* **Examples**:
- * `microsoft/script:1`
- * `microsoft/swupdate:1`
- * `microsoft/apt:1`
-
-#### inlineStep.files
-
-Names of update files that agent will pass to handler.
-
-* **Type**: `string` `[1-10]`
- * Each element in the array must have length between `1` and `255`.
-* **Required**: Yes
-
-#### inlineStep.handlerProperties
-
-JSON object that agent will pass to handler as arguments.
+Additional properties aren't allowed.
-* **Type**: `object`
-* **Required**: No
+For example:
-### referenceStep object
+```json
+{
+ "steps": [
+ {
+ "description": "pre-install script",
+ "handler": "microsoft/script:1",
+ "handlerProperties": {
+ "arguments": "--pre-install"
+ },
+ "files": [
+ "configure.sh"
+ ]
+ }
+ ]
+}
+```
-Installation instruction step that installs another update.
+## referenceStep object
-**Properties**
+A *reference* step object is an installation instruction step that installs another update.
-|Name|Type|Description|Required|
+|Property|Type|Description|Required|
|||||
-|**type**|`referenceStepType`|Instruction step type that installs another update.|Yes|
-|**description**|`stepDescription`|Optional instruction step description.|No|
+|**type**|`referenceStepType`|Instruction step type that installs another update. Must be `reference`.|Yes|
+|**description**|`stepDescription`|Optional instruction step description.<br><br>Maximum length: 64 characters |No|
|**updateId**|`updateId`|Unique update identifier.|Yes|
-Additional properties are not allowed.
-
-#### referenceStep.type
-
-Instruction step type that installs another update. Must be `reference`.
-
-* **Type**: `string`
-* **Required**: Yes
-
-#### referenceStep.description
-
-Optional instruction step description.
-
-* **Type**: `string`
-* **Required**: No
-* **Minimum Length**: `>= 1`
-* **Maximum Length**: `<= 64`
-
-#### referenceStep.updateId
-
-Unique update identifier.
+Additional properties aren't allowed.
-* **Type**: `updateId`
-* **Required**: Yes
+For example:
-### file object
+```json
+{
+ "steps": [
+ {
+ "type": "reference",
+ "updateId": {
+ "provider": "Contoso",
+ "name": "Toaster.HeatingElement",
+ "version": "1.0"
+ }
+ }
+ ]
+}
+```
-Update payload file, e.g. binary, firmware, script, etc. Must be unique within update.
+## file object
-**Properties**
+A *file* object is an update payload file, such as a binary, firmware image, or script. Each file object must be unique within an update.
-|Name|Type|Description|Required|
+|Property|Type|Description|Required|
|||||
-|**filename**|`string`|Update payload file name.|Yes|
-|**sizeInBytes**|`number`|File size in number of bytes.|Yes|
+|**filename**|`string`|Update payload file name.<br><br>Maximum length: 255 characters|Yes|
+|**sizeInBytes**|`number`|File size in number of bytes.<br><br>Maximum size: 2147483648 bytes|Yes|
|**hashes**|`fileHashes`|Base64-encoded file hashes with the algorithm name as key. At least the SHA-256 algorithm must be specified, and additional algorithms may be specified if supported by the agent. See below for details on how to calculate the hash. |Yes|
-Additional properties are not allowed.
+Additional properties aren't allowed.
-#### file.filename
+For example:
-Update payload file name.
+```json
+{
+ "files": [
+ {
+ "filename": "configure.sh",
+ "sizeInBytes": 7558,
+ "hashes": {...}
+ }
+ ]
+}
+```
-* **Type**: `string`
-* **Required**: Yes
-* **Minimum Length**: `>= 1`
-* **Maximum Length**: `<= 255`
+## fileHashes object
-#### file.sizeInBytes
+The *fileHashes* object contains base64-encoded file hashes with the algorithm name as the key. At least the SHA-256 algorithm must be specified, and other algorithms may be specified if supported by the agent. For an example of how to calculate the hash correctly, see the Get-AduFileHashes function in the [AduUpdate.psm1 script](https://github.com/Azure/iot-hub-device-update/blob/main/tools/AduCmdlets/AduUpdate.psm1).
-File size in number of bytes.
-
-* **Type**: `number`
-* **Required**: Yes
-* **Minimum**: ` >= 1`
-* **Maximum**: ` <= 2147483648`
-
-#### file.hashes
-
-File hashes.
-
-* **Type**: `fileHashes`
-* **Required**: Yes
-* **Type of each property**: `string`
-
-### fileHashes object
-
-Base64-encoded file hashes with algorithm name as key. At least SHA-256 algorithm must be specified, and additional algorithm may be specified if supported by agent. For an example of how to calculate the hash correctly, see the Get-AduFileHashes function in [AduUpdate.psm1 script](https://github.com/Azure/iot-hub-device-update/blob/main/tools/AduCmdlets/AduUpdate.psm1).
-
-**Properties**
-
-|Name|Type|Description|Required|
+|Property|Type|Description|Required|
|||||
|**sha256**|`string`|Base64-encoded file hash value using SHA-256 algorithm.|Yes|

Additional properties are allowed.
-#### fileHashes.sha256
-
-Base64-encoded file hash value using SHA-256 algorithm.
+For example:
-* **Type**: `string`
-* **Required**: Yes
+```json
+{
+ "hashes": {
+ "sha256": "/CD7Sn6fiknWa3NgcFjGlJ+ccA81s1QAXX4oo5GHiFA="
+ }
+}
+```
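
The referenced PowerShell script is the supported way to compute this value. Purely as an illustration, the following minimal Python sketch (reusing the `configure.sh` file name from the earlier example) shows one way to produce an equivalent base64-encoded SHA-256 hash for a payload file:

```python
import base64
import hashlib

def file_hash_sha256(path: str) -> str:
    """Return the base64-encoded SHA-256 digest of a file, as used in fileHashes."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return base64.b64encode(digest.digest()).decode("utf-8")

# Example usage with the payload file name from the earlier file object example
print(file_hash_sha256("configure.sh"))
```
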
## Next steps
iot-hub Iot Hub Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-event-grid.md
For non-telemetry events like DeviceConnected, DeviceDisconnected, DeviceCreated
## Limitations for device connected and device disconnected events
-Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. For IoT Hub to start sending device connection state events, after opening a connection a device must call either the *cloud-to-device receive message* operation or the *device-to-cloud send telemetry* operation. Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
+### Device state events
+Device connection state events are available for devices connecting using either the MQTT or AMQP protocol, or using either of these protocols over WebSockets. Requests made only with HTTPS won't trigger device connection state notifications. A minimal example of triggering a connected event follows this list.
+* Devices connecting using the Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [MQTT protocol](iot-hub-mqtt-support.md) will have connection states sent automatically.
+* For devices connecting using the Java, Node, or Python [Azure IoT SDKs](iot-hub-devguide-sdks.md) with the [AMQP protocol](iot-hub-amqp-support.md), a cloud-to-device link should be created to reduce any delay in accurate connection states.
+* Devices connecting using the .NET [Azure IoT SDK](iot-hub-devguide-sdks.md) with the [MQTT](iot-hub-mqtt-support.md) or [AMQP](iot-hub-amqp-support.md) protocol won't send a device connected event until an initial device-to-cloud or cloud-to-device message is sent or received.
+* Outside of the Azure IoT SDKs, in MQTT these operations equate to SUBSCRIBE or PUBLISH operations on the appropriate messaging [topics](iot-hub-mqtt-support.md). Over AMQP these equate to attaching or transferring a message on the [appropriate link paths](iot-hub-amqp-support.md).
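
As a minimal illustration (this snippet and its connection string are placeholders, not taken from the Azure samples), a device using the Python Azure IoT SDK over MQTT triggers connection state events simply by connecting and sending a device-to-cloud message:

```python
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder connection string for illustration only
CONNECTION_STRING = "HostName=<hub>.azure-devices.net;DeviceId=<device>;SharedAccessKey=<key>"

# Connecting over MQTT and sending a device-to-cloud message is enough for
# IoT Hub to start publishing device connected/disconnected events.
client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STRING)
client.connect()
client.send_message(Message("connectivity check"))
client.shutdown()
```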
+
+### Device state interval
IoT Hub does not report each individual device connect and disconnect, but rather publishes the current connection state taken at a periodic 60 second snapshot. Receiving either the same connection state event with different sequence numbers or different connection state events both mean that there was a change in the device connection state during the 60 second window.
+![image](https://user-images.githubusercontent.com/94493443/178398214-7423f7ca-8dfe-4202-8e9a-46cc70974b5e.png)
+ ## Tips for consuming events
machine-learning Concept Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow-models.md
+
+ Title: From artifacts to models in MLflow
+
+description: Learn about how MLflow uses the concept of models instead of artifacts to represent your trained models and enable a streamlined path to deployment.
+++++ Last updated : 07/8/2022++++
+# From artifacts to models in MLflow
+
+The following article explains the differences between an artifact and a model in MLflow and how to transition from one to the other. It also explains how Azure Machine Learning uses the MLflow models concept to enable streamlined deployment workflows.
+
+## What's the difference between an artifact and a model?
+
+If you are not familiar with MLflow, you may not be aware of the difference between logging artifacts or files vs. logging MLflow models. There are some fundamental differences between the two:
+
+### Artifacts
+
+Any file generated (and captured) from an experiment's run or job is an artifact. It may represent a model serialized as a Pickle file, the weights of a PyTorch or TensorFlow model, or even a text file containing the coefficients of a linear regression. Some artifacts may have nothing to do with the model itself; instead, they can contain configuration to run the model, pre-processing information, sample data, and so on. An artifact can come in any format.
+
+You can log artifacts in MLflow in a similar way to how you log a file with the Azure ML SDK v1:
+
+```python
+filename = 'model.pkl'
+with open(filename, 'wb') as f:
+ pickle.dump(model, f)
+
+mlflow.log_artifact(filename)
+```
+
+### Models
+
+A model in MLflow is also an artifact, as it matches the definition introduced above. However, we make stronger assumptions about this type of artifact. Such assumptions allow us to create a clear contract between the saved artifacts and what they mean. When you log your models as artifacts (simple files), you need to know what the model builder meant for each of them in order to know how to load the model for inference. When you log your models as MLflow models, you should be able to tell how to load and use them based on that contract (see the example after the following list).
+
+Logging models has the following advantages:
+> [!div class="checklist"]
+> * You don't need to provide a scoring script nor an environment for deployment.
+> * Swagger is enabled in endpoints automatically and the __Test__ feature can be used in Azure ML studio.
+> * Models can be used as pipelines inputs directly.
+> * You can use the Responsible AI dashboard.
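
For comparison with the artifact-logging snippet above, here's a minimal sketch (the scikit-learn model and data are illustrative assumptions) that logs a trained model as an MLflow model instead of a plain file:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run():
    # Logged as an MLflow model: MLmodel file, flavors, and environment are captured
    mlflow.sklearn.log_model(model, artifact_path="classifier")
```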
+
+## The MLModel format
+
+MLflow adopts the MLModel format as a way to create a contract between the artifacts and what they represent. The MLModel format stores assets in a folder. Among them, there is a particular file named `MLmodel`. This file is the single source of truth about how a model can be loaded and used.
+
+The following example shows what the `MLmodel` file for a computer vision model trained with `fastai` might look like:
+
+__MLmodel__
+
+```yaml
+artifact_path: classifier
+flavors:
+ fastai:
+ data: model.fastai
+ fastai_version: 2.4.1
+ python_function:
+ data: model.fastai
+ env: conda.yaml
+ loader_module: mlflow.fastai
+ python_version: 3.8.12
+model_uuid: e694c68eba484299976b06ab9058f636
+run_id: e13da8ac-b1e6-45d4-a9b2-6a0a5cfac537
+signature:
+ inputs: '[{"type": "tensor",
+ "tensor-spec":
+ {"dtype": "uint8", "shape": [-1, 300, 300, 3]}
+ }]'
+ outputs: '[{"type": "tensor",
+ "tensor-spec":
+ {"dtype": "float32", "shape": [-1,2]}
+ }]'
+```
+
+### The model's flavors
+
+Considering the variety of machine learning frameworks available, MLflow introduced the concept of flavor as a way to provide a unique contract to work across all of them. A flavor indicates what to expect for a given model created with a specific framework. For instance, TensorFlow has its own flavor, which specifies how a TensorFlow model should be persisted and loaded. Because each model flavor indicates how it wants to persist and load models, the MLModel format doesn't enforce a single serialization mechanism that all models need to support. This decision allows each flavor to use the methods that provide the best performance or best support according to its best practices, without compromising compatibility with the MLModel standard.
+
+The following is an example of the `flavors` section for a `fastai` model.
+
+```yaml
+flavors:
+ fastai:
+ data: model.fastai
+ fastai_version: 2.4.1
+ python_function:
+ data: model.fastai
+ env: conda.yaml
+ loader_module: mlflow.fastai
+ python_version: 3.8.12
+```
+
+### Signatures
+
+[Model signatures in MLflow](https://www.mlflow.org/docs/latest/models.html#model-signature) are an important part of the model specification, as they serve as a data contract between the model and the server running it. They are also important for parsing and enforcing the model's input types at deployment time. [MLflow enforces types when data is submitted to your model if a signature is available](https://www.mlflow.org/docs/latest/models.html#signature-enforcement).
+
+Signatures are indicated when the model gets logged, and they're persisted in the `signature` section of the `MLmodel` file. The **autolog** feature in MLflow automatically infers signatures in a best-effort way. However, you may need to [log the models manually if the inferred signatures are not the ones you need](https://www.mlflow.org/docs/latest/models.html#how-to-log-models-with-signatures). A short sketch of inferring signatures follows the list below.
+
+There are two types of signatures:
+
+* **Column-based signature:** corresponds to signatures that operate on tabular data. Models with this signature can expect to receive `pandas.DataFrame` objects as inputs.
+* **Tensor-based signature:** corresponds to signatures that operate with n-dimensional arrays or tensors. Models with this signature can expect to receive a `numpy.ndarray` as input (or a dictionary of `numpy.ndarray` in the case of named tensors).
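
As an illustrative sketch (the column names, values, and shapes are made up for this example), `mlflow.models.infer_signature` produces a column-based signature for pandas inputs and a tensor-based signature for NumPy inputs:

```python
import numpy as np
import pandas as pd
from mlflow.models import infer_signature

# Column-based signature: tabular inputs as a pandas DataFrame
df_inputs = pd.DataFrame({"age": [42, 35], "income": [50000.0, 62000.0]})
df_outputs = pd.Series([0, 1], name="label")
print(infer_signature(df_inputs, df_outputs))

# Tensor-based signature: n-dimensional arrays as inputs and outputs
tensor_inputs = np.random.randint(0, 255, size=(2, 300, 300, 3), dtype=np.uint8)
tensor_outputs = np.random.rand(2, 2).astype(np.float32)
print(infer_signature(tensor_inputs, tensor_outputs))
```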
+
+The following example corresponds to a computer vision model trained with `fastai`. This model receives a batch of images represented as tensors of shape `(300, 300, 3)` with the RGB representation of them (unsigned integers). It outputs batches of predictions (probabilities) for two classes.
+
+__MLmodel__
+
+```yaml
+signature:
+ inputs: '[{"type": "tensor",
+ "tensor-spec":
+ {"dtype": "uint8", "shape": [-1, 300, 300, 3]}
+ }]'
+ outputs: '[{"type": "tensor",
+ "tensor-spec":
+ {"dtype": "float32", "shape": [-1,2]}
+ }]'
+```
+
+> [!TIP]
+> Azure Machine Learning generates Swagger endpoints for MLflow models with a signature available. This makes it easier to test deployed endpoints using the Azure ML studio.
+
+### Model's environment
+
+Requirements for the model to run are specified in the `conda.yaml` file. Dependencies can be automatically detected by MLflow, or they can be manually indicated when you call the `mlflow.<flavor>.log_model()` method. The latter can be needed when the libraries included in your environment are not the ones you intended to use.
+
+The following is an example of an environment used for a model created with the `fastai` framework:
+
+__conda.yaml__
+
+```yaml
+channels:
+- conda-forge
+dependencies:
+- python=3.8.5
+- pip
+- pip:
+ - mlflow
+ - astunparse==1.6.3
+ - cffi==1.15.0
+ - configparser==3.7.4
+ - defusedxml==0.7.1
+ - fastai==2.4.1
+ - google-api-core==2.7.1
+ - ipython==8.2.0
+ - psutil==5.9.0
+name: mlflow-env
+```
+
+> [!NOTE]
+> MLflow environments and Azure Machine Learning environments are different concepts. While the former operates at the level of the model, the latter operates at the level of the workspace (for registered environments) or jobs/deployments (for anonymous environments). When you deploy MLflow models in Azure Machine Learning, the model's environment is built and used for deployment. Alternatively, you can override this behavior with the [Azure ML CLI v2](concept-v2.md) and deploy MLflow models using a specific Azure Machine Learning environment.
+
+### Model's predict function
+
+All MLflow models contain a `predict` function. This function is the one that is called when a model is deployed using a no-code-deployment experience. What the `predict` function returns (classes, probabilities, a forecast, and so on) depends on the framework (that is, the flavor) used for training. Read each flavor's documentation to learn what it returns.
+
+In some cases, you may need to customize this function to change the way inference is executed. In those cases, you'll need to [log models with a different behavior in the predict method](how-to-log-mlflow-models.md#logging-models-with-a-different-behavior-in-the-predict-method) or [log a custom model's flavor](how-to-log-mlflow-models.md#logging-custom-models).
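
As a minimal sketch of this default behavior (the model URI and input columns are placeholders), any logged MLflow model can be loaded as a generic Python function and queried through `predict`:

```python
import pandas as pd
import mlflow.pyfunc

# Placeholder URI; replace with the run ID and artifact path of a logged model
model = mlflow.pyfunc.load_model("runs:/<run_id>/classifier")

# What predict returns (classes, probabilities, forecasts, ...) depends on the flavor
data = pd.DataFrame({"feature_1": [1.0], "feature_2": [2.0]})
predictions = model.predict(data)
```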
+
+## Start logging models
+
+We recommend starting to take advantage of MLflow models in Azure Machine Learning. There are different ways to start using the models concept with MLflow. Read [How to log MLflow models](how-to-log-mlflow-models.md) for a comprehensive guide.
+
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
With MLflow Tracking you can connect Azure Machine Learning as the backend of yo
+ Azure Machine Learning also supports remote tracking of experiments by configuring MLflow to point to the Azure Machine Learning workspace. By doing so, you can leverage the capabilities of Azure Machine Learning while keeping your experiments where they are. + Lift and shift existing MLflow experiments to Azure Machine Learning. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
-Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiment via the Azure Machine Learning Python SDK, Azure Machine Learning CLI or the Azure Machine Learning studio. Learn more at [Track experiments with MLflow](how-to-use-mlflow-cli-runs.md).
+Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiment via the Azure Machine Learning Python SDK, Azure Machine Learning CLI or the Azure Machine Learning studio. Learn more at [Log & view metrics and log files with MLflow](how-to-log-view-metrics.md).
> [!IMPORTANT] > - MLflow in R support is limited to tracking experiment's metrics and parameters on Azure Machine Learning jobs. RStudio or Jupyter Notebooks with R kernels are not supported. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
machine-learning How To Convert Custom Model To Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-convert-custom-model-to-mlflow.md
With Azure Machine Learning, MLflow models get the added benefits of,
* Portability as an open source standard format * Ability to deploy both locally and on cloud
-MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) (scikit-learn, Keras, Pytorch, and more); however, it might not cover every use case. For example, you may want to create an MLflow model with a framework that MLflow does not natively support or you may want to change the way your model does pre-processing or post-processing when running jobs.
+MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) (scikit-learn, Keras, Pytorch, and more); however, it might not cover every use case. For example, you may want to create an MLflow model with a framework that MLflow does not natively support, or you may want to change the way your model does pre-processing or post-processing when running jobs. To learn more about MLflow models, read [From artifacts to models in MLflow](concept-mlflow-models.md).
If you didn't train your model with MLFlow and want to use Azure Machine Learning's MLflow no-code deployment offering, you need to convert your custom model to MLFLow. Learn more about [custom python models and MLflow](https://mlflow.org/docs/latest/models.html#custom-python-models).
machine-learning How To Deploy Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models.md
For no-code-deployment, Azure Machine Learning
* `pandas` * The scoring script baked into the image.
+> [!IMPORTANT]
+> If you are used to deploying models using scoring scripts and custom environments, and you want to achieve the same functionality using MLflow models, we recommend reading [Using MLflow models for no-code deployment](how-to-log-mlflow-models.md).
+ > [!NOTE] > Consider the following limitations when deploying MLflow models to Azure Machine Learning: > - Spark flavor is not supported at the moment for deployment.
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
+
+ Title: Logging MLflow models
+
+description: Learn how to start logging MLflow models instead of artifacts using MLflow SDK in Azure Machine Learning.
+++++ Last updated : 07/8/2022++++
+# Logging MLflow models
+
+The following article explains how to start logging your trained models (or artifacts) as MLflow models. It explores the different methods to customize the way MLflow packages your models and hence how it runs them.
+
+## Why log models instead of artifacts?
+
+If you are not familiar with MLflow, you may not be aware of the difference between logging artifacts or files vs. logging MLflow models. We recommend reading the article [From artifacts to models in MLflow](concept-mlflow-models.md) for an introduction to the topic.
+
+A model in MLflow is also an artifact, but with a specific structure that serves as a contract between the person that created the model and the person that intends to use it. Such a contract helps bridge the gap between the artifacts themselves and what they mean.
+
+Logging models has the following advantages:
+> [!div class="checklist"]
+> * You don't need to provide a scoring script nor an environment for deployment.
+> * Swagger is enabled in endpoints automatically and the __Test__ feature can be used in Azure ML studio.
+> * Models can be used as pipelines inputs directly.
+> * You can use the Responsible AI dashboard.
+
+There are different ways to start using the models concept in Azure Machine Learning with MLflow, as explained in the following sections:
+
+## Logging models using autolog
+
+One of the simplest ways to start using this approach is MLflow's autolog functionality. Autolog instructs the framework you are using to log all the metrics, parameters, artifacts, and models that the framework considers relevant. By default, most models are logged if autolog is enabled. Some flavors may decide not to in specific situations. For instance, the PySpark flavor won't log models if they exceed a certain size.
+
+You can turn on autologging by using either `mlflow.autolog()` or `mlflow.<flavor>.autolog()`. The following example uses `autolog()` for logging a classifier model trained with XGBoost:
+
+```python
+import mlflow
+from xgboost import XGBClassifier
+from sklearn.metrics import accuracy_score
+
+with mlflow.start_run():
+ mlflow.autolog()
+
+ model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+ model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+ y_pred = model.predict(X_test)
+
+ accuracy = accuracy_score(y_test, y_pred)
+```
+
+> [!TIP]
+> If you are using Machine Learning pipelines, for instance [Scikit-Learn pipelines](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html), use the `autolog` functionality of that flavor for logging models. Models are automatically logged when the `fit()` method is called on the pipeline object. The notebook [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/notebooks/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb) demonstrates how to log a model with preprocessing using pipelines.
+
+## Logging models with a custom signature, environment or samples
+
+You can log models manually using the method `mlflow.<flavor>.log_model` in MLflow. This workflow gives you control over different aspects of how the model is logged.
+
+Use this method when:
+> [!div class="checklist"]
+> * You want to indicate pip packages or a conda environment different from the ones that are automatically detected.
+> * You want to include input examples.
+> * You want to include specific artifacts that will be needed in the package.
+> * Your signature is not correctly inferred by `autolog`. This is especially important when you deal with tensor inputs, where the signature needs specific shapes.
+> * The default behavior of autolog doesn't fit your purpose.
+
+The following example code logs a model for an XGBoost classifier:
+
+```python
+import mlflow
+from xgboost import XGBClassifier
+from sklearn.metrics import accuracy_score
+from mlflow.models import infer_signature
+from mlflow.utils.environment import _mlflow_conda_env
+
+with mlflow.start_run():
+ mlflow.autolog(log_models=False)
+
+ model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+ model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+ y_pred = model.predict(X_test)
+
+ accuracy = accuracy_score(y_test, y_pred)
+
+ # Signature
+ signature = infer_signature(X_test, y_test)
+
+ # Conda environment
+ custom_env =_mlflow_conda_env(
+ additional_conda_deps=None,
+ additional_pip_deps=["xgboost==1.5.2"],
+ additional_conda_channels=None,
+ )
+
+ # Sample
+ input_example = X_train.sample(n=1)
+
+ # Log the model manually
+ mlflow.xgboost.log_model(model,
+ artifact_path="classifier",
+ conda_env=custom_env,
+ signature=signature,
+ input_example=input_example)
+```
+
+> [!NOTE]
+> * `log_models=False` is configured in `autolog`. This prevents MLflow from automatically logging the model, because it's logged manually later.
+> * `infer_signature` is a convenient method to try to infer the signature directly from inputs and outputs.
+> * `mlflow.utils.environment._mlflow_conda_env` is a private method in the MLflow SDK and it may change in the future. This example uses it just for the sake of simplicity; use it with caution, or generate the YAML definition manually as a Python dictionary.
+
+## Logging models with a different behavior in the predict method
+
+When you log a model using either `mlflow.autolog` or `mlflow.<flavor>.log_model`, the flavor used for the model decides how inference should be executed and what gets returned by the model. MLflow doesn't enforce any specific behavior in how `predict` generates results. There are scenarios where you probably want to do some pre-processing or post-processing before and after your model is executed.
+
+A solution to this scenario is to implement machine learning pipelines that move from inputs to outputs directly, as the sketch below illustrates. Although this is possible (and sometimes encouraged for performance reasons), it may be challenging to achieve. In those cases, you probably want to [customize how your model does inference using a custom model](#logging-custom-models), as explained in the following section.
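
As an illustrative sketch (the scikit-learn model and data are assumptions, not part of the original article), a `Pipeline` bundles pre-processing with the estimator so that the logged model goes from raw inputs to outputs directly:

```python
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# The pipeline packages scaling and classification as a single model
pipeline = Pipeline([
    ("scaler", StandardScaler()),
    ("classifier", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X, y)

with mlflow.start_run():
    mlflow.sklearn.log_model(pipeline, artifact_path="classifier")
```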
+
+## Logging custom models
+
+MLflow provides support for a variety of [machine learning frameworks](https://mlflow.org/docs/latest/models.html#built-in-model-flavors) including FastAI, MXNet Gluon, PyTorch, TensorFlow, XGBoost, CatBoost, h2o, Keras, LightGBM, MLeap, ONNX, Prophet, spaCy, Spark MLLib, Scikit-Learn, and statsmodels. However, there may be times when you need to change how a flavor works, log a model not natively supported by MLflow, or even log a model that uses multiple elements from different frameworks. For those cases, you may need to create a custom model flavor.
+
+For this type of model, MLflow introduces a flavor called `pyfunc` (short for Python function). Basically, this flavor allows you to log any object you want as a model, as long as it satisfies two conditions:
+
+* You implement the method `predict` (at least).
+* The Python object inherits from `mlflow.pyfunc.PythonModel`.
+
+> [!TIP]
+> Serializable models that implement the Scikit-learn API can use the Scikit-learn flavor to log the model, regardless of whether the model was built with Scikit-learn. If your model can be persisted in Pickle format and the object has the methods `predict()` and `predict_proba()` (at least), then you can use `mlflow.sklearn.log_model()` to log it inside an MLflow run.
+
+# [Using a model wrapper](#tab/wrapper)
+
+The simplest way to create a custom model flavor is to create a wrapper around your existing model object. MLflow will serialize it and package it for you. Python objects are serializable when the object can be stored in the file system as a file (generally in Pickle format). During runtime, the object can be materialized from that file, and all the values, properties, and methods available when it was saved will be restored.
+
+Use this method when:
+> [!div class="checklist"]
+> * Your model can be serialized in Pickle format.
+> * You want to retain the model's state as it was just after training.
+> * You want to customize the way the `predict` function works.
+
+The following sample wraps a model created with XGBoost to make it behave differently from the default implementation of the XGBoost flavor (it returns probabilities instead of classes):
+
+```python
+from mlflow.pyfunc import PythonModel, PythonModelContext
+
+class ModelWrapper(PythonModel):
+ def __init__(self, model):
+ self._model = model
+
+ def predict(self, context: PythonModelContext, data):
+ # You don't have to keep the semantic meaning of `predict`. You can use here model.recommend(), model.forecast(), etc
+ return self._model.predict_proba(data)
+
+ # You can even add extra functions if you need to. Since the model is serialized,
+ # all of them will be available when you load your model back.
+ def predict_batch(self, data):
+ pass
+```
+
+Then, a custom model can be logged in the run like this:
+
+```python
+import mlflow
+from xgboost import XGBClassifier
+from sklearn.metrics import accuracy_score
+from mlflow.models import infer_signature
+
+with mlflow.start_run():
+ mlflow.xgboost.autolog(log_models=False)
+
+ model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+ model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+ y_probs = model.predict_proba(X_test)
+
+ accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
+ mlflow.log_metric("accuracy", accuracy)
+
+ signature = infer_signature(X_test, y_probs)
+ mlflow.pyfunc.log_model("classifier",
+ python_model=ModelWrapper(model),
+ signature=signature)
+```
+
+> [!TIP]
+> Note how the `infer_signature` method now uses `y_probs` to infer the signature. Our target column has the target class, but our model now returns the two probabilities for each class.
++
+# [Using artifacts](#tab/artifacts)
+
+Wrapping your model may be simple, but sometimes your model needs multiple pieces to be loaded, or it can't simply be serialized as a Pickle file. In those cases, `PythonModel` supports indicating an arbitrary list of **artifacts**. Each artifact will be packaged along with your model.
+
+Use this method when:
+> [!div class="checklist"]
+> * Your model can't be serialized in Pickle format, or there is a better format available for that.
+> * Your model has one or more artifacts that need to be referenced in order to load it.
+> * You may want to persist some inference configuration properties (for example, the number of items to recommend).
+> * You want to customize the way the model is loaded and how the `predict` function works.
+
+To log a custom model using artifacts, you can do something like the following:
+
+```python
+encoder_path = 'encoder.pkl'
+joblib.dump(encoder, encoder_path)
+
+model_path = 'xgb.model'
+model.save_model(model_path)
+
+mlflow.pyfunc.log_model("classifier",
+ python_model=ModelWrapper(),
+ artifacts={
+ 'encoder': encoder_path,
+ 'model': model_path
+ },
+ signature=signature)
+```
+
+> [!NOTE]
+> * The model was saved using the save method of the framework used (it's not saved as a pickle).
+> * `ModelWrapper()` is the model wrapper, but the model is not passed as a parameter to the constructor.
+> * A new parameter, `artifacts`, is indicated. It's a dictionary whose keys are the artifact names and whose values are the paths in the local file system where the artifacts are stored.
+
+The corresponding model wrapper then would look as follows:
+
+```python
+from mlflow.pyfunc import PythonModel, PythonModelContext
+
+class ModelWrapper(PythonModel):
+ def load_context(self, context: PythonModelContext):
+        import joblib
+        from xgboost import XGBClassifier
+
+        # context.artifacts contains the local paths of the packaged files
+        self._encoder = joblib.load(context.artifacts["encoder"])
+        self._model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+        self._model.load_model(context.artifacts["model"])
+
+ def predict(self, context: PythonModelContext, data):
+ return self._model.predict_proba(data)
+```
+
+The complete training routine would look as follows:
+
+```python
+import joblib
+import mlflow
+from xgboost import XGBClassifier
+from sklearn.preprocessing import OrdinalEncoder
+from sklearn.metrics import accuracy_score
+from mlflow.models import infer_signature
+
+with mlflow.start_run():
+    mlflow.xgboost.autolog(log_models=False)
+
+    # OrdinalEncoder requires 2D inputs; map unknown categories to -1
+    encoder = OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1)
+    X_train[['thal']] = encoder.fit_transform(X_train[['thal']])
+    X_test[['thal']] = encoder.transform(X_test[['thal']])
+
+ model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+ model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+ y_probs = model.predict_proba(X_test)
+
+ accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
+ mlflow.log_metric("accuracy", accuracy)
+
+ encoder_path = 'encoder.pkl'
+ joblib.dump(encoder, encoder_path)
+ model_path = "xgb.model"
+ model.save_model(model_path)
+
+    signature = infer_signature(X_test, y_probs)
+ mlflow.pyfunc.log_model("classifier",
+ python_model=ModelWrapper(),
+ artifacts={
+ 'encoder': encoder_path,
+ 'model': model_path
+ },
+ signature=signature)
+```
+
+# [Using a model loader](#tab/loader)
+
+Sometimes your model logic is complex and there are several source code files being used to make your model work. This is the case, for instance, when you have a Python library for your model. In this scenario, you want to package the library along with your model so it can move as a single piece.
+
+Use this method when:
+> [!div class="checklist"]
+> * Your model can't be serialized in Pickle format or there is a better format available for that.
+> * Your model can be stored in a folder where all the required artifacts are placed.
+> * Your model's logic is complex and it requires multiple source files. Potentially, there is a library that supports your model.
+> * You want to customize the way the model is loaded and how the `predict` function works.
+
+MLflow supports this kind of model too, by allowing you to specify any arbitrary source code to package along with the model, as long as it has a *loader module*. Loader modules can be specified in the `log_model()` instruction using the `loader_module` argument, which indicates the Python namespace where the loader is implemented. The `code_path` argument is also required; it indicates the source files where the `loader_module` is defined. In this namespace, you're required to implement a function called `_load_pyfunc(data_path: str)` that receives the path of the artifacts and returns an object with (at least) a `predict` method.
+
+```python
+model_path = 'xgb.model'
+model.save_model(model_path)
+
+mlflow.pyfunc.log_model("classifier",
+ data_path=model_path,
+ code_path=['loader_module.py'],
+                        loader_module='loader_module',
+                        signature=signature)
+```
+
+> [!NOTE]
+> * The model was saved using the save method of the framework used (it's not saved as a pickle).
+> * A new parameter, `data_path`, was added pointing to the folder where the model's artifacts are located. This can be a folder or a file. Whatever is on that folder or file, it will be packaged with the model.
+> * A new parameter, `code_path`, was added pointing to the location where the source code is placed. This can be a path or a single file. Whatever is on that folder or file, it will be packaged with the model.
+
+The corresponding `loader_module.py` implementation would be:
+
+__loader_module.py__
+
+```python
+class MyModel():
+ def __init__(self, model):
+ self._model = model
+
+ def predict(self, data):
+ return self._model.predict_proba(data)
+
+def _load_pyfunc(data_path: str):
+    import os
+    from xgboost import XGBClassifier
+
+ model = XGBClassifier(use_label_encoder=False, eval_metric='logloss')
+ model.load_model(os.path.abspath(data_path))
+
+ return MyModel(model)
+```
+
+> [!NOTE]
+> * The class `MyModel` doesn't inherit from `PythonModel` as we did before, but it has a `predict` function.
+> * The model's source code is in a file. This can be any source code you want. If your project has a `src` folder, it's a great candidate.
+> * We added a function `_load_pyfunc` which returns an instance of the model's class.
+
+The complete training code would look as follows:
+
+```python
+import mlflow
+from xgboost import XGBClassifier
+from sklearn.metrics import accuracy_score
+from mlflow.models import infer_signature
+
+with mlflow.start_run():
+ mlflow.xgboost.autolog(log_models=False)
+
+ model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+ model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+ y_probs = model.predict_proba(X_test)
+
+ accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
+ mlflow.log_metric("accuracy", accuracy)
+
+ model_path = "xgb.model"
+ model.save_model(model_path)
+
+ signature = infer_signature(X_test, y_probs)
+ mlflow.pyfunc.log_model("classifier",
+ data_path=model_path,
+ code_path=["loader_module.py"],
+ loader_module="loader_module",
+ signature=signature)
+```
+++
+## Next steps
+
+* [Deploy MLflow models](how-to-deploy-mlflow-models.md)
machine-learning How To Log View Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-view-metrics.md
MLflow supports two ways of logging images:
## Logging models
-MLflow introduces the concept of "models" as a way to package all the artifacts required for a given model to function. Models in MLflow are always a folder with an arbitrary number of files, depending on the framework used to generate the model. Logging models has the advantage of tracking all the elements of the model as a single entity that can be __registered__ and then __deployed__. On top of that, MLflow models enjoy the benefit of [no-code deployment](how-to-deploy-mlflow-models.md) and can be used with the [Responsible AI dashboard](how-to-responsible-ai-dashboard.md) in studio.
+MLflow introduces the concept of "models" as a way to package all the artifacts required for a given model to function. Models in MLflow are always a folder with an arbitrary number of files, depending on the framework used to generate the model. Logging models has the advantage of tracking all the elements of the model as a single entity that can be __registered__ and then __deployed__. On top of that, MLflow models enjoy the benefit of [no-code deployment](how-to-deploy-mlflow-models.md) and can be used with the [Responsible AI dashboard](how-to-responsible-ai-dashboard.md) in studio. Read the article [From artifacts to models in MLflow](concept-mlflow-models.md) for more information.
-To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For frameworks that MLflow doesn't support, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
+To save the model from a training run, use the `log_model()` API for the framework you're working with. For example, [mlflow.sklearn.log_model()](https://mlflow.org/docs/latest/python_api/mlflow.sklearn.html#mlflow.sklearn.log_model). For more details about how to log MLflow models, see [Logging MLflow models](how-to-log-mlflow-models.md). For migrating existing models to MLflow, see [Convert custom models to MLflow](how-to-convert-custom-model-to-mlflow.md).
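
As a minimal sketch (the model, data, and names are illustrative assumptions), logging a scikit-learn model and registering a version of it in the same call could look like this:

```python
import mlflow
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
reg = Ridge().fit(X, y)

with mlflow.start_run():
    # Logs the model in MLflow format and registers a version in the model registry
    mlflow.sklearn.log_model(reg, artifact_path="regressor",
                             registered_model_name="diabetes-regressor")
```
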
## Automatic logging
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
The MLflow client exposes several methods to retrieve and manage models. The fol
| Registering models in MLflow format | **&check;** | **&check;** | **&check;** | **&check;** | | Registering models not in MLflow format | | | **&check;** | **&check;** | | Registering models from runs outputs/artifacts | **&check;** | **&check;**<sup>1</sup> | **&check;**<sup>2</sup> | **&check;** |
+| Registering models from runs outputs/artifacts in a different tracking server/workspace | **&check;** | | | |
| Listing registered models | **&check;** | **&check;** | **&check;** | **&check;** | | Retrieving details of registered model's versions | **&check;** | **&check;** | **&check;** | **&check;** | | Editing registered model's versions description | **&check;** | **&check;** | **&check;** | **&check;** |
The MLflow client exposes several methods to retrieve and manage models. The fol
> - <sup>3</sup> Registered models are immutable objects in Azure ML. > - <sup>4</sup> Use search box in Azure ML Studio. Partial match supported.
+### Prerequisites
+
+* Install the `azureml-mlflow` package.
+* If you are running outside an Azure ML compute, configure the MLflow tracking URI or MLflow's registry URI to point to the workspace you are working on. For more information, see [Track runs using MLflow with Azure Machine Learning](how-to-use-mlflow-cli-runs.md#set-up-tracking-environment). A minimal sketch of setting the URIs follows this list.
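
As a minimal sketch (the URI below is a placeholder for your workspace's MLflow tracking URI), pointing both the tracking URI and the registry URI at the workspace looks like this:

```python
import mlflow

# Placeholder: replace with the MLflow tracking URI of your Azure ML workspace
azureml_mlflow_uri = "azureml://<region>.api.azureml.ms/mlflow/v1.0/subscriptions/<subscription_id>/resourceGroups/<resource_group>/providers/Microsoft.MachineLearningServices/workspaces/<workspace_name>"

mlflow.set_tracking_uri(azureml_mlflow_uri)
mlflow.set_registry_uri(azureml_mlflow_uri)
```
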
+ ## Registering new models in the registry ### Creating models from an existing run
If you have an MLflow model logged inside of a run and you want to register it i
mlflow.register_model(f"runs:/{run_id}/{artifact_path}", model_name) ```
+> [!NOTE]
+> Models can only be registered to the registry in the same workspace where the run was tracked. Cross-workspace operations are not supported at the moment in Azure Machine Learning.
+ ### Creating models from assets If you have a folder with an MLModel MLflow model, then you can register it directly. There's no need for the model to be always in the context of a run. To do that you can use the URI schema `file://path/to/model` to register MLflow models stored in the local file system. Let's create a simple model using `Scikit-Learn` and save it in MLflow format in the local storage:
mlflow.sklearn.save_model(reg, "./regressor")
``` > [!TIP]
-> The method `save_model` works in the same way as `log_model`. While the latter requires an MLflow run to be active so the model can be logged there, the former uses the local file system for the stage of the model's artifacts.
+> The method `save_model()` works in the same way as `log_model()`. While `log_model()` saves the model inside an active run, `save_model()` uses the local file system for saving the model.
You can now register the model from the local path:
client.get_model_version_stages(model_name, version="latest")
You can see what model's version is on each stage by getting the model from the registry. The following example gets the model's version currently in the stage `Staging`.
-> [!WARNING]
-> Stage names are case sensitive.
- ```python client.get_latest_versions(model_name, stages=["Staging"]) ```
client.get_latest_versions(model_name, stages=["Staging"])
> [!NOTE] > Multiple versions can be in the same stage at the same time in Mlflow, however, this method returns the latest version (greater version) among all of them.
+> [!WARNING]
+> Stage names are case sensitive.
+ ### Transitioning models Transitioning a model's version to a particular stage can be done using the MLflow client.
machine-learning How To Train Mlflow Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md
# Train ML models with MLflow Projects and Azure Machine Learning (preview) -- [!INCLUDE [preview disclaimer](../../includes/machine-learning-preview-generic-disclaimer.md)] In this article, learn how to enable MLflow's tracking URI and logging API, collectively known as [MLflow Tracking](https://mlflow.org/docs/latest/quickstart.html#using-the-tracking-api), to submit training jobs with [MLflow Projects](https://www.mlflow.org/docs/latest/projects.html) and Azure Machine Learning backend support. You can submit jobs locally with Azure Machine Learning tracking or migrate your runs to the cloud like via an [Azure Machine Learning Compute](./how-to-create-attach-compute-cluster.md).
In this article, learn how to enable MLflow's tracking URI and logging API, coll
## Prerequisites * Install the `azureml-mlflow` package.
- * This package automatically brings in `azureml-core` of the [The Azure Machine Learning Python SDK](/python/api/overview/azure/ml/install), which provides the connectivity for MLflow to access your workspace.
* [Create an Azure Machine Learning Workspace](quickstart-create-resources.md). * See which [access permissions you need to perform your MLflow operations with your workspace](how-to-assign-roles.md#mlflow-operations).
+ * Configure MLflow for tracking in Azure Machine Learning, as explained in the next section.
-## Train MLflow Projects on local compute
+### Set up tracking environment
+
+To configure MLflow for working with Azure Machine Learning, you need to point your MLflow environment to the Azure Machine Learning MLflow Tracking URI.
+
+> [!NOTE]
+> When running on Azure Compute (Azure Notebooks, Jupyter Notebooks hosted on Azure Compute Instances or Compute Clusters) you don't have to configure the tracking URI. It's automatically configured for you.
+
+# [Using the Azure ML SDK v2](#tab/azuremlsdk)
++
+You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the compute you are using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the method [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) points the MLflow tracking URI to that URI.
+
+1. Using the workspace configuration file:
+
+ ```Python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ import mlflow
-This example shows how to submit MLflow projects locally with Azure Machine Learning tracking.
+    ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+ azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
-Install the `azureml-mlflow` package to use MLflow Tracking with Azure Machine Learning on your experiments locally. Your experiments can run via a Jupyter Notebook or code editor.
+ > [!TIP]
+ > You can download the workspace configuration file by:
+ > 1. Navigate to [Azure ML studio](https://ml.azure.com)
+ > 2. Click on the upper-right corner of the page -> Download config file.
+ > 3. Save the file `config.json` in the same directory where you are working.
-```shell
-pip install azureml-mlflow
+1. Using the subscription ID, resource group name and workspace name:
+
+ ```Python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ import mlflow
+
+ #Enter details of your AzureML workspace
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<AZUREML_WORKSPACE_NAME>'
+
+ ml_client = MLClient(credential=DefaultAzureCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=resource_group)
+
+ azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ > [!IMPORTANT]
+ > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in `azure.identity` package.
+
+# [Using an environment variable](#tab/environ)
++
+Another option is to set one of the MLflow environment variables [MLFLOW_TRACKING_URI](https://mlflow.org/docs/latest/tracking.html#logging-to-a-tracking-server) directly in your terminal.
+
+```Azure CLI
+export MLFLOW_TRACKING_URI=$(az ml workspace show --query mlflow_tracking_uri | sed 's/"//g')
```
-Import the `mlflow` and [`Workspace`](/python/api/azureml-core/azureml.core.workspace%28class%29) classes to access MLflow's tracking URI and configure your workspace.
+>[!IMPORTANT]
+> Make sure you are logged in to your Azure account on your local machine, otherwise the tracking URI returns an empty string. If you are using any Azure ML compute, the tracking environment and experiment name are already configured.
+
+# [Building the MLflow tracking URI](#tab/build)
-```Python
+The Azure Machine Learning Tracking URI can be constructed using the subscription ID, region of where the resource is deployed, resource group name and workspace name. The following code sample shows how:
+
+```python
import mlflow
-from azureml.core import Workspace
-ws = Workspace.from_config()
+region = ""
+subscription_id = ""
+resource_group = ""
+workspace_name = ""
-mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
+azureml_mlflow_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
+mlflow.set_tracking_uri(azureml_mlflow_uri)
```
-Set the MLflow experiment name with `set_experiment()` and start your training run with `start_run()`. Then, use `log_metric()` to activate the MLflow logging API and begin logging your training run metrics.
+> [!NOTE]
+> You can also get this URL by:
+> 1. Navigate to [Azure ML studio](https://ml.azure.com)
+> 2. Click on the upper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
+> 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
-```Python
-experiment_name = 'experiment-with-mlflow-projects'
-mlflow.set_experiment(experiment_name)
-```
++
+## Train MLflow Projects on local compute
+
+This example shows how to submit MLflow projects locally with Azure Machine Learning.
Create the backend configuration object to store necessary information for the integration such as, the compute target and which type of managed environment to use.
local_env_run = mlflow.projects.run(uri=".",
This example shows how to submit MLflow projects on a remote compute with Azure Machine Learning tracking.
-Install the `azureml-mlflow` package to use MLflow Tracking with Azure Machine Learning on your experiments locally. Your experiments can run via a Jupyter Notebook or code editor.
-
-```shell
-pip install azureml-mlflow
-```
-
-Import the `mlflow` and [`Workspace`](/python/api/azureml-core/azureml.core.workspace%28class%29) classes to access MLflow's tracking URI and configure your workspace.
-
-```Python
-import mlflow
-from azureml.core import Workspace
-
-ws = Workspace.from_config()
-
-mlflow.set_tracking_uri(ws.get_mlflow_tracking_uri())
-```
-
-Set the MLflow experiment name with `set_experiment()` and start your training run with `start_run()`. Then, use `log_metric()` to activate the MLflow logging API and begin logging your training run metrics.
-
-```Python
-experiment_name = 'train-mlflow-project-amlcompute'
-mlflow.set_experiment(experiment_name)
-```
- Create the backend configuration object to store necessary information for the integration such as, the compute target and which type of managed environment to use. The integration accepts "COMPUTE" and "USE_CONDA" as parameters where "COMPUTE" is set to the name of your remote compute cluster and "USE_CONDA" which creates a new environment for the project from the environment configuration file. If "COMPUTE" is present in the object, the project will be automatically submitted to the remote compute and ignore "USE_CONDA". MLflow accepts a dictionary object or a JSON file.
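
As an illustrative sketch (the compute cluster name and project parameters are placeholders, not from the original article), the backend configuration and a remote submission could look like the following:

```python
import mlflow

# "COMPUTE" names an existing Azure ML compute cluster; when it's present,
# "USE_CONDA" is ignored because the environment is built on the remote compute.
backend_config = {"COMPUTE": "cpu-cluster", "USE_CONDA": False}

remote_run = mlflow.projects.run(
    uri=".",                     # MLflow project in the current directory
    parameters={"alpha": 0.3},   # placeholder project parameter
    backend="azureml",
    backend_config=backend_config,
)
```
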
machine-learning How To Use Mlflow Azure Databricks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-databricks.md
If you prefer to manage your tracked experiments in a centralized location, you
You have to configure the MLflow tracking URI to point exclusively to Azure Machine Learning, as it is demonstrated in the following example:
- # [Using the Azure ML SDK v2](#tab/sdkv2)
+ # [Using the Azure ML SDK v2](#tab/azuremlsdk)
- [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-
- You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you're using:
-
- ```python
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+ You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) method points MLflow to that URI.
+
+ a. Using the workspace configuration file:
+
+ ```Python
from azure.ai.ml import MLClient
- from azure.identity import DeviceCodeCredential
-
- subscription_id = ""
- aml_resource_group = ""
- aml_workspace_name = ""
-
- ml_client = MLClient(credential=DeviceCodeCredential(),
+ from azure.identity import DefaultAzureCredential
+ import mlflow
+
+ ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+ azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ > [!TIP]
+ > You can download the workspace configuration file by:
+ > 1. Navigate to [Azure ML studio](https://ml.azure.com).
+ > 2. Click on the upper-right corner of the page -> Download config file.
+ > 3. Save the `config.json` file in the directory where you are working.
+
+ b. Using the subscription ID, resource group name and workspace name:
+
+ ```Python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ import mlflow
+
+ #Enter details of your AzureML workspace
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<AZUREML_WORKSPACE_NAME>'
+
+ ml_client = MLClient(credential=DefaultAzureCredential(),
subscription_id=subscription_id,
- resource_group_name=aml_resource_group)
-
- azureml_mlflow_uri = ml_client.workspaces.get(aml_workspace_name).mlflow_tracking_uri
+ resource_group_name=resource_group)
+
+ azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_mlflow_uri) ```
-
- # [Building the MLflow tracking URI](#tab/custom)
-
+
+ > [!IMPORTANT]
+ > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance by signing in interactively through the web browser, you can use `InteractiveBrowserCredential` or any other credential class available in the `azure.identity` package.
+
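For example, here's a minimal sketch of the interactive alternative; the placeholder subscription, resource group, and workspace values are assumptions, not values from the article.

```python
from azure.ai.ml import MLClient
from azure.identity import InteractiveBrowserCredential
import mlflow

# Opens a browser window for an interactive sign-in instead of relying on
# the ambient credentials that DefaultAzureCredential would pick up.
ml_client = MLClient(credential=InteractiveBrowserCredential(),
                     subscription_id="<SUBSCRIPTION_ID>",
                     resource_group_name="<RESOURCE_GROUP>")

azureml_mlflow_uri = ml_client.workspaces.get("<AZUREML_WORKSPACE_NAME>").mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_mlflow_uri)
```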
+ # [Building the MLflow tracking URI](#tab/build)
+ The Azure Machine Learning tracking URI can be constructed using the subscription ID, the region where the resource is deployed, the resource group name, and the workspace name. The following code sample shows how:
-
+ ```python import mlflow
- aml_region = ""
+ region = ""
subscription_id = ""
- aml_resource_group = ""
- aml_workspace_name = ""
+ resource_group = ""
+ workspace_name = ""
- azureml_mlflow_uri = f"azureml://{aml_region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{aml_resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{aml_workspace_name}"
+ azureml_mlflow_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
mlflow.set_tracking_uri(azureml_mlflow_uri) ```
-
+ > [!NOTE] > You can also get this URL by:
- > 1. Navigate to the [Azure ML Studio web portal](https://ml.azure.com).
+ > 1. Navigate to [Azure ML studio](https://ml.azure.com).
> 2. Click on the upper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI. > 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
Azure Synapse Analytics can be configured to track experiments using MLflow to A
To use Azure Machine Learning as your centralized repository for experiments, you can leverage MLflow. On each notebook where you are working on, you have to configure the tracking URI to point to the workspace you will be using. The following example shows how it can be done:
- # [Using the Azure ML SDK v2](#tab/sdkv2)
+ # [Using the Azure ML SDK v2](#tab/azuremlsdk)
- [!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-
- You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you're using:
-
- ```python
+ [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
+
+ You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) method points MLflow to that URI.
+
+ a. Using the workspace configuration file:
+
+ ```Python
from azure.ai.ml import MLClient
- from azure.identity import DeviceCodeCredential
-
- subscription_id = ""
- aml_resource_group = ""
- aml_workspace_name = ""
-
- ml_client = MLClient(credential=DeviceCodeCredential(),
+ from azure.identity import DefaultAzureCredential
+ import mlflow
+
+ ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+ azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ > [!TIP]
+ > You can download the workspace configuration file by:
+ > 1. Navigate to [Azure ML studio](https://ml.azure.com).
+ > 2. Click on the upper-right corner of the page -> Download config file.
+ > 3. Save the `config.json` file in the directory where you are working.
+
+ b. Using the subscription ID, resource group name and workspace name:
+
+ ```Python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ import mlflow
+
+ #Enter details of your AzureML workspace
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<AZUREML_WORKSPACE_NAME>'
+
+ ml_client = MLClient(credential=DefaultAzureCredential(),
subscription_id=subscription_id,
- resource_group_name=aml_resource_group)
-
- azureml_mlflow_uri = ml_client.workspaces.get(aml_workspace_name).mlflow_tracking_uri
+ resource_group_name=resource_group)
+
+ azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
mlflow.set_tracking_uri(azureml_mlflow_uri) ```
-
- # [Building the MLflow tracking URI](#tab/custom)
-
+
+ > [!IMPORTANT]
+ > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance by signing in interactively through the web browser, you can use `InteractiveBrowserCredential` or any other credential class available in the `azure.identity` package.
+
+ # [Building the MLflow tracking URI](#tab/build)
+ The Azure Machine Learning tracking URI can be constructed using the subscription ID, the region where the resource is deployed, the resource group name, and the workspace name. The following code sample shows how:
-
+ ```python import mlflow
- aml_region = ""
+ region = ""
subscription_id = ""
- aml_resource_group = ""
- aml_workspace_name = ""
+ resource_group = ""
+ workspace_name = ""
- azureml_mlflow_uri = f"azureml://{aml_region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{aml_resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{aml_workspace_name}"
+ azureml_mlflow_uri = f"azureml://{region}.api.azureml.ms/mlflow/v1.0/subscriptions/{subscription_id}/resourceGroups/{resource_group}/providers/Microsoft.MachineLearningServices/workspaces/{workspace_name}"
mlflow.set_tracking_uri(azureml_mlflow_uri) ```
-
+ > [!NOTE] > You can also get this URL by:
+ > 1. Navigate to [Azure ML studio](https://ml.azure.com).
> 2. Click on the upper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI. > 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
> 2. Click on the uper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI. > 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
To track a run that is not running on Azure Machine Learning compute (from now o
> [!NOTE] > When running on Azure Compute (Azure Notebooks, Jupyter Notebooks hosted on Azure Compute Instances or Compute Clusters) you don't have to configure the tracking URI. It's automatically configured for you.-
+
# [Using the Azure ML SDK v2](#tab/azuremlsdk) You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning SDK v2 for Python](concept-v2.md). Ensure you have the library `azure-ai-ml` installed in the cluster you are using. The following sample gets the unique MLflow tracking URI associated with your workspace. Then the [`set_tracking_uri()`](https://mlflow.org/docs/latest/python_api/mlflow.html#mlflow.set_tracking_uri) method points MLflow to that URI.
-```Python
-from azure.ai.ml import MLClient
-from azure.identity import DefaultAzureCredential
-import mlflow
+1. Using the workspace configuration file:
-#Enter details of your AzureML workspace
-subscription_id = '<SUBSCRIPTION_ID>'
-resource_group = '<RESOURCE_GROUP>'
-workspace_name = '<AZUREML_WORKSPACE_NAME>'
+ ```Python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ import mlflow
-ml_client = MLClient(credential=DefaultAzureCredential(),
- subscription_id=subscription_id,
- resource_group_name=resource_group)
-
-azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
-mlflow.set_tracking_uri(azureml_mlflow_uri)
-```
+ ml_client = MLClient.from_config(credential=DefaultAzureCredential())
+ azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
->[!IMPORTANT]
-> `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance using the web browser in an interactive way, you can use `InteractiveBrowserCredential` or any other method available in `azure.identity` package.
+ > [!TIP]
+ > You can download the workspace configuration file by:
+ > 1. Navigate to [Azure ML studio](https://ml.azure.com).
+ > 2. Click on the upper-right corner of the page -> Download config file.
+ > 3. Save the `config.json` file in the directory where you are working.
+
+1. Using the subscription ID, resource group name and workspace name:
+
+ ```Python
+ from azure.ai.ml import MLClient
+ from azure.identity import DefaultAzureCredential
+ import mlflow
+
+ #Enter details of your AzureML workspace
+ subscription_id = '<SUBSCRIPTION_ID>'
+ resource_group = '<RESOURCE_GROUP>'
+ workspace_name = '<AZUREML_WORKSPACE_NAME>'
+
+ ml_client = MLClient(credential=DefaultAzureCredential(),
+ subscription_id=subscription_id,
+ resource_group_name=resource_group)
+
+ azureml_mlflow_uri = ml_client.workspaces.get(workspace_name).mlflow_tracking_uri
+ mlflow.set_tracking_uri(azureml_mlflow_uri)
+ ```
+
+ > [!IMPORTANT]
+ > `DefaultAzureCredential` will try to pull the credentials from the available context. If you want to specify credentials in a different way, for instance by signing in interactively through the web browser, you can use `InteractiveBrowserCredential` or any other credential class available in the `azure.identity` package.
# [Using an environment variable](#tab/environ)
mlflow.set_tracking_uri(azureml_mlflow_uri)
``` > [!NOTE]
-> You can also get this URL by: Navigate to [Azure ML studio](https://ml.azure.com) -> Click on the uper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
+> You can also get this URL by:
+> 1. Navigate to [Azure ML studio](https://ml.azure.com).
+> 2. Click on the upper-right corner of the page -> View all properties in Azure Portal -> MLflow tracking URI.
+> 3. Copy the URI and use it with the method `mlflow.set_tracking_uri`.
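If you prefer the environment-variable route, a minimal sketch follows; it assumes MLflow's standard `MLFLOW_TRACKING_URI` variable and uses the URI format shown above with placeholder values.

```python
import os
import mlflow

# MLflow resolves the tracking URI from this variable, so no explicit call
# to mlflow.set_tracking_uri() is required.
os.environ["MLFLOW_TRACKING_URI"] = (
    "azureml://<REGION>.api.azureml.ms/mlflow/v1.0/subscriptions/<SUBSCRIPTION_ID>"
    "/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.MachineLearningServices"
    "/workspaces/<WORKSPACE_NAME>"
)

print(mlflow.get_tracking_uri())   # confirms the URI MLflow will use
```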
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
This quickstart demonstrates how to use the Azure CLI commands to configure a hy
> ```azurecli-interactive > az managed-cassandra cluster update --cluster-name --resource-group--repair-enabled false > ```
- > Then run `nodetool repair --full` on all the nodes in your existing cluster's data center. You should run this only after all of the above steps have been taken. This should ensure that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. If you have a very large amount of data in your existing cluster, it may be necessary to run the repairs at the keyspace or even table level - see [here](https://cassandra.apache.org/doc/latest/cassandra/operating/repair.html) for more details on running repairs in Cassandra. Prior to changing the replication settings, you should also make sure that any application code that connects to your existing Cassandra cluster is using LOCAL_QUORUM. You should leave it at this setting during the migration (it can be switched back afterwards if required). After everyhting is done and the old datacenter decommissioned you can enable automatic repair again).
+ > Then run `nodetool repair --full` on all the nodes in your existing cluster's data center. You should run this **only after all of the prior steps have been taken**. This should ensure that all historical data is replicated to your new data centers in Azure Managed Instance for Apache Cassandra. If you have a very large amount of data in your existing cluster, it may be necessary to run the repairs at the keyspace or even table level - see [here](https://cassandra.apache.org/doc/latest/cassandra/operating/repair.html) for more details on running repairs in Cassandra. Prior to changing the replication settings, you should also make sure that any application code that connects to your existing Cassandra cluster is using LOCAL_QUORUM. You should leave it at this setting during the migration (it can be switched back afterwards if required). After the migration is completed, you can enable automatic repair again, and point your application code to the new Cassandra Managed Instance data center's seed nodes (and revert the quorum settings if preferred).
+ >
+ > Finally, to decommission your old data center:
+ >
+ > - Run `ALTER KEYSPACE` for each keyspace, removing the old data center.
+ > - We recommend running `nodetool repair` for each keyspace as well, before running the decommission step below.
+ > - Run [nodetool decommission](https://cassandra.apache.org/doc/latest/cassandra/operating/topo_changes.html#removing-nodes) for each on-premises data center node.
> [!NOTE] > To speed up repairs we advise (if system load permits it) to increase both stream throughput and compaction throughput as in the example below:
migrate Concepts Migration Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/concepts-migration-webapps.md
Support | Details
- Learn how to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md). - Once you have successfully completed migration, you may explore the following steps based on web app specific requirement(s):
- - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain.md).
- - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings.md).
+ - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
+ - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
- [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview). - [Deployment best practices](/azure/app-service/deploy-best-practices). - [Security recommendations](/azure/app-service/security-recommendations). - [Networking features](/azure/app-service/networking-features). - [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service). - [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).-- [Review best practices](/azure/app-service/deploy-best-practices.md) for deploying to Azure App service.
+- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App Service.
migrate Troubleshoot Webapps Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/troubleshoot-webapps-migration.md
UnableToConnectToServer | Connecting to the remote server failed. | Check error
- Continue to [perform at-scale agentless migration of ASP.NET web apps to Azure App Service](./tutorial-migrate-webapps.md). - Once you have successfully completed migration, you may explore the following steps based on web app specific requirement(s):
- - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain.md).
- - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings.md).
+ - [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
+ - [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
- [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview). - [Deployment best practices](/azure/app-service/deploy-best-practices). - [Security recommendations](/azure/app-service/security-recommendations). - [Networking features](/azure/app-service/networking-features). - [Monitor App Service with Azure Monitor](/azure/app-service/monitor-app-service). - [Configure Azure AD authentication](/azure/app-service/configure-authentication-provider-aad).-- [Review best practices](/azure/app-service/deploy-best-practices.md) for deploying to Azure App service.
+- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App Service.
migrate Tutorial Migrate Webapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-webapps.md
Once the migration is initiated, you can track the status using the Azure Resour
Once you have successfully completed migration, you may explore the following steps based on web app specific requirement(s): -- [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain.md).-- [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings.md).
+- [Map existing custom DNS name](/azure/app-service/app-service-web-tutorial-custom-domain).
+- [Secure a custom DNS with a TLS/SSL binding](/azure/app-service/configure-ssl-bindings).
- [Securely connect to Azure resources](/azure/app-service/tutorial-connect-overview) - [Deployment best practices](/azure/app-service/deploy-best-practices). - [Security recommendations](/azure/app-service/security-recommendations).
Once you have successfully completed migration, you may explore the following st
## Next steps - Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.-- [Review best practices](/azure/app-service/deploy-best-practices.md) for deploying to Azure App service.
+- [Review best practices](/azure/app-service/deploy-best-practices) for deploying to Azure App Service.
postgresql Concepts Intelligent Tuning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-intelligent-tuning.md
Allow up to 35 minutes for the first batch of data to persist in the *azure_sys*
## Information about intelligent tuning
-Intelligent tuning operates around three main parameters for the given time: `checkpoint_completion_target`, `min_wal_size`, and `bgwriter_delay`.
+Intelligent tuning currently operates around three main parameters: `checkpoint_completion_target`, `max_wal_size`, and `bgwriter_delay`.
These three parameters mostly affect:
search Search Indexer Howto Access Private https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-indexer-howto-access-private.md
To create a private endpoint that an indexer can use, use the Azure portal or th
## Terminology
-Private endpoints created through Azure Cognitive Search APIs are referred to as *shared private links* or *managed outbound private endpoints*. The concept of a "shared private link" is that an Azure PaaS resource already has a private endpoint through [Azure Private Link service](https://azure.microsoft.com/services/private-link/), and Azure Cognitive Search is sharing access. Although access is shared, a shared private link creates its own private connection. The shared private link is the mechanism by which Azure Cognitive Search makes the connection to resources in a private network.
+Private endpoints created through Azure Cognitive Search APIs are referred to as "shared private links" or "managed private endpoints". The concept of a shared private link is that an Azure PaaS resource already has a private endpoint through [Azure Private Link service](https://azure.microsoft.com/services/private-link/), and Azure Cognitive Search is sharing access. Although access is shared, a shared private link creates its own private connection that's used exclusively by Azure Cognitive Search. The shared private link is the mechanism by which Azure Cognitive Search makes the connection to resources in a private network.
## Prerequisites + The Azure resource that provides content or code must be previously registered with the [Azure Private Link service](https://azure.microsoft.com/services/private-link/).
-+ The search service must be Basic tier or higher. If you're using [AI enrichment](cognitive-search-concept-intro.md) and skillsets, the tier must be Standard 2 (S2) or higher. For more information, see [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits).
++ The search service must be Basic tier or higher. If you're using [AI enrichment](cognitive-search-concept-intro.md) and skillsets, the tier must be Standard 2 (S2) or higher. See [Service limits](search-limits-quotas-capacity.md#shared-private-link-resource-limits). + If you're connecting to a preview data source, such as Azure Database for MySQL or Azure Functions, use a preview version of the Management REST API to create the shared private link. Preview versions that support a shared private link include `2020-08-01-preview` or `2021-04-01-preview`. + Connections from the search client should be programmatic, either REST APIs or an Azure SDK, rather than through the Azure portal. The device must connect using an authorized IP in the Azure PaaS resource's firewall rules.
-+ Indexer execution must use the private execution environment that's specific to your search service. Private endpoint connections aren't supported from the multi-tenant environment.
-
-+ If you're using the [Azure portal](https://portal.azure.com/), make sure that access to all public networks is enabled in the data source resource firewall while going through the instructions below. Otherwise, you need to enable access to all public networks during this setup and then disable it again, or instead, you must use REST API from a device with an authorized IP in the firewall rules, to perform these operations. If the supported data source resource has public networks access disabled, there will be errors when connecting from the portal to it.
++ Indexer execution must use the private execution environment that's specific to your search service. Private endpoint connections aren't supported from the multi-tenant environment. Instructions for this requirement are provided in this article. ++ If you're using [Azure portal](https://portal.azure.com/) to create a shared private link, make sure that access to all public networks is enabled in the data source resource firewall, or there'll be errors when you try to create the object. You only need to enable access while setting up the shared private link. If you can't enable access for this task, use the REST API from a device with an authorized IP in the firewall rules. > [!NOTE]
-> When using Private Link for data sources, Azure portal access (from Cognitive Search to your content) - such as through the [Import data](search-import-data-portal.md) wizard - is not supported.
+> [Import data wizard](search-import-data-portal.md) or invoking indexer-based indexing from the portal over a shared private link is not supported.
<a name="group-ids"></a>
search Search Performance Optimization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-performance-optimization.md
No SLA is provided for the Free tier. For more information, see [SLA for Azure C
## Data residency
-Azure Cognitive Search won't store data outside of your specified region without your authorization. Specifically, the following features write to an Azure Storage resource: [enrichment cache](cognitive-search-incremental-indexing-conceptual.md), [debug session](cognitive-search-debug-session.md), [knowledge store](knowledge-store-concept-intro.md).
+Azure Cognitive Search won't store data outside of your specified region without your authorization. Specifically, the following features write to an Azure Storage resource: [enrichment cache](cognitive-search-incremental-indexing-conceptual.md), [debug session](cognitive-search-debug-session.md), [knowledge store](knowledge-store-concept-intro.md). The storage account is one that you provide, and it could be in any region.
-The storage account is one that you provide, and it could be in any region. If you put storage and search in the same region, and you also need network security, be aware of the [IP firewall restrictions](search-indexer-howto-access-ip-restricted.md) that prevent service connections in this scenario. When network security is a requirement, consider using the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) as a firewall alternative.
+If both the storage account and the search service are in the same region, network traffic between search and storage uses a private IP address and occurs over the Microsoft backbone network. Because private IP addresses are used, you can't configure IP firewalls or a private endpoint for network security. Instead, use the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) as an alternative when both services are in the same region.
<a name="availability-zones"></a>
search Search Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-overview.md
In Azure Cognitive Search, Resource Manager is used to create or delete the serv
## Data residency
-Azure Cognitive Search won't store data outside of your specified region without your authorization. Specifically, the following features write to an Azure Storage resource: [enrichment cache](cognitive-search-incremental-indexing-conceptual.md), [debug session](cognitive-search-debug-session.md), [knowledge store](knowledge-store-concept-intro.md).
+Azure Cognitive Search won't store data outside of your specified region without your authorization. Specifically, the following features write to an Azure Storage resource: [enrichment cache](cognitive-search-incremental-indexing-conceptual.md), [debug session](cognitive-search-debug-session.md), [knowledge store](knowledge-store-concept-intro.md). The storage account is one that you provide, and it could be in any region.
-The storage account is one that you provide, and it could be in any region. If you put storage and search in the same region, and you also need network security, be aware of the [IP firewall restrictions](search-indexer-howto-access-ip-restricted.md) that prevent service connections in this scenario. When network security is a requirement, consider using the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) as a firewall alternative.
+If both the storage account and the search service are in the same region, network traffic between search and storage uses a private IP address and occurs over the Microsoft backbone network. Because private IP addresses are used, you can't configure IP firewalls or a private endpoint for network security. Instead, use the [trusted service exception](search-indexer-howto-access-trusted-service-exception.md) as an alternative when both services are in the same region.
<a name="encryption"></a>
sentinel Connect Microsoft 365 Defender https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-365-defender.md
## Background
-Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**.
+Microsoft Sentinel's [Microsoft 365 Defender](/microsoft-365/security/mtp/microsoft-threat-protection) connector with incident integration allows you to stream all Microsoft 365 Defender incidents and alerts into Microsoft Sentinel, and keeps the incidents synchronized between both portals. Microsoft 365 Defender incidents include all their alerts, entities, and other relevant information, and they group together, and are enriched by, alerts from Microsoft 365 Defender's component services **Microsoft Defender for Endpoint**, **Microsoft Defender for Identity**, **Microsoft Defender for Office 365**, and **Microsoft Defender for Cloud Apps**, as well as alerts from other services such as **Microsoft Purview Data Loss Prevention (DLP)**.
The connector also lets you stream **advanced hunting** events from *all* of the above components into Microsoft Sentinel, allowing you to copy those Defender components' advanced hunting queries into Microsoft Sentinel, enrich Sentinel alerts with the Defender components' raw event data to provide additional insights, and store the logs with increased retention in Log Analytics.
In the **Next steps** tab, youΓÇÖll find some useful workbooks, sample queries,
In this document, you learned how to integrate Microsoft 365 Defender incidents, and advanced hunting event data from Microsoft Defender for Endpoint and Defender for Office 365, into Microsoft Sentinel, using the Microsoft 365 Defender connector. To learn more about Microsoft Sentinel, see the following articles: - Learn how to [get visibility into your data, and potential threats](get-visibility.md).-- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
+- Get started [detecting threats with Microsoft Sentinel](./detect-threats-built-in.md).
service-connector Tutorial Django Webapp Postgres Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-django-webapp-postgres-cli.md
The resource group, app name, db name are drawn from the cached values. You need
::: zone pivot="postgres-flexible-server" ```azurecli
-az webapp connection create postgres --client-type django
+az webapp connection create postgres-flexible --client-type django
``` The resource group, app name, db name are drawn from the cached values. You need to provide admin password of your postgres database during the execution of this command.
site-recovery Site Recovery Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/site-recovery-whats-new.md
For Site Recovery components, we support N-4 versions, where N is the latest rel
**Update** | **Unified Setup** | **Configuration server/Replication appliance** | **Mobility service agent** | **Site Recovery Provider** | **Recovery Services agent** | | | | |
+[Rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) | 9.49.6395.1 | 5.1.7418.0 | 9.49.6395.1 | 5.1.7418.0 | 2.0.9248.0
[Rollup 61](https://support.microsoft.com/topic/update-rollup-61-for-azure-site-recovery-kb5012960-a1cc029b-03ad-446f-9365-a00b41025d39) | 9.48.6349.1 | 5.1.7387.0 | 9.48.6349.1 | 5.1.7387.0 | 2.0.9245.0 [Rollup 60](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) | 9.47.6219.1 | 5.1.7127.0 | 9.47.6219.1 | 5.1.7127.0 | 2.0.9241.0 [Rollup 59](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e) | 9.46.6149.1 | 5.1.7029.0 | 9.46.6149.1 | 5.1.7030.0 | 2.0.9239.0 [Rollup 58](https://support.microsoft.com/topic/update-rollup-58-for-azure-site-recovery-kb5007075-37ba21c3-47d9-4ea9-9130-a7d64f517d5d) | 9.45.6096.1 | 5.1.6952.0 | 9.45.6096.1 | 5.1.6952.0 | 2.0.9237.0
-[Rollup 57](https://support.microsoft.com/topic/update-rollup-57-for-azure-site-recovery-kb5006172-9fccc879-6e0c-4dc8-9fec-e0600cf94094) | 9.44.6068.1 | 5.1.6899.0 | 9.44.6068.1 | 5.1.6899.0 | 2.0.9236.0
[Learn more](service-updates-how-to.md) about update installation and support.
+## Updates (July 2022)
+
+### Update Rollup 62
+
+[Update rollup 62](https://support.microsoft.com/topic/update-rollup-62-for-azure-site-recovery-e7aff36f-b6ad-4705-901c-f662c00c402b) provides the following updates:
+
+**Update** | **Details**
+ |
+**Providers and agents** | Updates to Site Recovery agents and providers as detailed in the rollup KB article.
+**Issue fixes/improvements** | A number of fixes and improvements as detailed in the rollup KB article.
+**Azure VM disaster recovery** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros.
+**VMware VM/physical disaster recovery to Azure** | Added support for RHEL 8.6 and CentOS 8.6 Linux distros.<br/><br/> Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints.<br/><br/> Added fixes related to various security issues present in the classic experience.
+**Hyper-V disaster recovery to Azure** | Added support for configuring proxy bypass rules for VMware and Hyper-V replications, using private endpoints.
+ ## Updates (March 2022) ### Update Rollup 61
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Windows 7 with SP1 64-bit | Supported from [Update rollup 36](https://support.mi
**Operating system** | **Details** | Linux | Only 64-bit system is supported. 32-bit system isn't supported.<br/><br/>Every Linux server should have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) installed. It is required to boot the server in Azure after test failover/failover. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br/><br/> Site Recovery orchestrates failover to run Linux servers in Azure. However Linux vendors might limit support to only distribution versions that haven't reached end-of-life.<br/><br/> On Linux distributions, only the stock kernels that are part of the distribution minor version release/update are supported.<br/><br/> Upgrading protected machines across major Linux distribution versions isn't supported. To upgrade, disable replication, upgrade the operating system, and then enable replication again.<br/><br/> [Learn more](https://support.microsoft.com/help/2941892/support-for-linux-and-open-source-technology-in-azure) about support for Linux and open-source technology in Azure.<br/><br/> Chained IO is not supported by Site Recovery.
-Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher) <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
-Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure.
+Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9 Beta version](https://support.microsoft.com/help/4578241/), [7.9](https://support.microsoft.com/help/4590304/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or higher), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or higher) <br/> Few older kernels on servers running Red Hat Enterprise Linux 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br/><br/>Note: RHEL 5 is not supported for modernized VMware protection experience (in Preview).
+Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), 8.4, 8.5 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 do not have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. <br/><br/>Note: CentOS 5 is not supported for modernized VMware protection experience (in Preview).
Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions); Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 is not supported.), Debian 10 [(Review supported kernel versions)](#debian-kernel-versions) SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 is not supported. To upgrade, disable replication and re-enable after the upgrade. <br/>|
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
For a step-by-step tutorial that leads you through the process of encrypting blo
The Azure Blob Storage client library uses [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in order to encrypt user data. There are two versions of client-side encryption available in the client library: -- Version 2.x uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES.-- Version 1.x uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
+- Version 2 uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES.
+- Version 1 uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
> [!WARNING]
-> Using version 1.x of client-side encryption is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using version 1.x, we recommend that you update your application to use version 2.x and migrate your data. See the following section, [Mitigate the security vulnerability in your applications](#mitigate-the-security-vulnerability-in-your-applications), for further guidance.
+> Using version 1 of client-side encryption is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using version 1, we recommend that you update your application to use version 2 and migrate your data. See the following section, [Mitigate the security vulnerability in your applications](#mitigate-the-security-vulnerability-in-your-applications), for further guidance.
-## Mitigate the security vulnerability in your applications
+### Mitigate the security vulnerability in your applications
Due to a security vulnerability discovered in the Blob Storage client library's implementation of CBC mode, Microsoft recommends that you take one or more of the following actions immediately:
Due to a security vulnerability discovered in the Blob Storage client library's
- If you need to use client-side encryption, then migrate your applications from client-side encryption v1 to client-side encryption v2.
- Client-side encryption v2 is available only in version 12.x and later of the Azure Blob Storage client libraries. If your application is using an earlier version of the client library, you must first upgrade your code to version 12.x or later, and then decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use version 12.x side-by-side with an earlier version of the client library while you are migrating your code. For code examples, see [Example: Encrypting and decrypting a blob with client-side encryption v2](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2).
- The following table summarizes the steps you'll need to take if you choose to migrate your applications to client-side encryption v2: | Client-side encryption status | Recommended actions | |||
-| Application is using client-side encryption with Azure Blob Storage SDK version 11.x or earlier | Update your application to use Blob Storage SDK version 12.x or later. If necessary, you can use version 12.x side-by-side with an earlier version of the client library while you are migrating your code. [Learn more...](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md)<br/><br/>Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. [Learn more...](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2) |
-| Application is using client-side encryption with Azure Blob Storage SDK version 12.x or later | Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. [Learn more...](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2) |
+| Application is using client-side encryption with a version of the client library that supports only client-side encryption v1. | Update your application to use a version of the client library that supports client-side encryption v2. See [SDK support matrix for client-side encryption](#sdk-support-matrix-for-client-side-encryption) for a list of supported versions. [Learn more...](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md)<br/><br/>Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. [Learn more...](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2) |
+| Application is using client-side encryption with a version of the client library that supports client-side encryption v2. | Update your code to use client-side encryption v2. [Learn more...](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2)<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. [Learn more...](#reencrypt-previously-encrypted-data-with-client-side-encryption-v2) |
Additionally, Microsoft recommends that you take the following steps to help secure your data: - Configure your storage accounts to use private endpoints to secure all traffic between your virtual network (VNet) and your storage account over a private link. For more information, see [Use private endpoints for Azure Storage](../common/storage-private-endpoints.md). - Limit network access to specific networks only.
+### SDK support matrix for client-side encryption
+
+The following table shows which versions of the client libraries for .NET, Java, and Python support which versions of client-side encryption:
+
+| | .NET | Java | Python |
+|--|--|--|--|
+| **Client-side encryption v2 and v1** | [Versions 12.13.0 and later](https://www.nuget.org/packages/Azure.Storage.Blobs) | [Versions 12.18.0 and later](https://search.maven.org/artifact/com.azure/azure-storage-blob) | [Versions 12.13.0 and later](https://pypi.org/project/azure-storage-blob) |
+| **Client-side encryption v1 only** | Versions 12.12.0 and earlier | Versions 12.17.0 and earlier | Versions 12.12.0 and earlier |
+
+If your application is using client-side encryption with an earlier version of the .NET, Java, or Python client library, you must first upgrade your code to a version that supports client-side encryption v2. Next, you must decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use a version of the client library that supports client-side encryption v2 side-by-side with an earlier version of the client library while you are migrating your code. For code examples, see [Example: Encrypting and decrypting a blob with client-side encryption v2](#example-encrypting-and-decrypting-a-blob-with-client-side-encryption-v2).
+ ## How client-side encryption works The Azure Blob Storage client libraries use envelope encryption to encrypt and decrypt your data on the client side. Envelope encryption encrypts a key with one or more additional keys.
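To make the envelope-encryption flow concrete, here's a minimal, illustrative sketch using the Python `azure-storage-blob` package (12.13.0 or later); it isn't the article's own sample. The no-op key wrapper below exists only so the snippet is self-contained: in practice the key encryption key would be backed by Azure Key Vault or another key-management system, and the connection string, container, and blob names are placeholders.

```python
from azure.storage.blob import BlobClient

class DemoKeyWrapper:
    """Illustrative key encryption key (KEK). Not secure: it returns the
    content encryption key unchanged instead of wrapping it."""
    def get_kid(self):
        return "local:demo-kek"          # identifier stored in blob metadata
    def get_key_wrap_algorithm(self):
        return "none"                    # algorithm label stored in metadata
    def wrap_key(self, key, algorithm=None):
        return key                       # a real KEK would encrypt the CEK here
    def unwrap_key(self, key, algorithm):
        return key                       # ...and decrypt it here

blob = BlobClient.from_connection_string(
    "<CONNECTION_STRING>", container_name="<CONTAINER>", blob_name="demo.txt")
blob.require_encryption = True           # refuse to send or read plaintext
blob.key_encryption_key = DemoKeyWrapper()
blob.encryption_version = "2.0"          # opt in to client-side encryption v2

blob.upload_blob(b"hello, client-side encryption", overwrite=True)
print(blob.download_blob().readall())    # decrypted transparently on download
```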
After you update your code to use client-side encryption v2, make sure that you
### [Java](#tab/java)
-To use client-side encryption from your Java code, reference the [Blob Storage client library](/jav).
+To use client-side encryption from your Java code, reference the [Blob Storage client library](/jav).
For sample code that shows how to use client-side encryption v2 from Java, see [ClientSideEncryptionV2Uploader.java](https://github.com/wastore/azure-storage-samples-for-java/blob/f1621c807a4b2be8b6e04e226cbf0a288468d7b4/ClientSideEncryptionMigration/src/main/java/ClientSideEncryptionV2Uploader.java).
storage Network File System Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/network-file-system-protocol-known-issues.md
This article describes limitations and known issues of Network File System (NFS)
- GRS, GZRS, and RA-GRS redundancy options aren't supported when you create an NFS 3.0 storage account. -- NFS 3.0 and SSH File Transfer Protocol (SFTP) can't be enabled on the same storage account.- ## NFS 3.0 features The following NFS 3.0 features aren't yet supported.
storage Storage Quickstart Blobs Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-dotnet.md
To see Blob storage sample apps, continue to:
> [Azure Blob Storage SDK v12 .NET samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/storage/Azure.Storage.Blobs/samples) - For tutorials, samples, quick starts and other documentation, visit [Azure for .NET and .NET Core developers](/dotnet/azure/).-- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
+- To learn more about .NET Core, see [Get started with .NET in 10 minutes](https://dotnet.microsoft.com/learn/dotnet/hello-world-tutorial/intro).
storage Storage Service Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-service-encryption.md
Previously updated : 07/11/2022 Last updated : 07/12/2022
The Azure Blob Storage client libraries for .NET, Java, and Python support encry
The Blob Storage and Queue Storage client libraries uses [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in order to encrypt user data. There are two versions of client-side encryption available in the client libraries: -- Version 2.x uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES.-- Version 1.x uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
+- Version 2 uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES.
+- Version 1 uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
> [!WARNING]
-> Using version 1.x of client-side encryption is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using version 1.x, we recommend that you update your application to use version 2.x and migrate your data.
+> Using version 1 of client-side encryption is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using version 1, we recommend that you update your application to use version 2 and migrate your data.
+>
+> The Azure Table Storage SDK supports only version 1 of client-side encryption. Using client-side encryption with Table Storage is not recommended.
The following table shows which client libraries support which versions of client-side encryption and provides guidelines for migrating to client-side encryption v2. | Client library | Version of client-side encryption supported | Recommended migration | Additional guidance | |--|--|--|--|
-| Blob Storage client libraries for .NET, Java, and Python, version 12.x and above | 2.0<br/><br/>1.0 (for backward compatibility only) | Update your code to use client-side encryption v2.<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. | [Client-side encryption for blobs](../blobs/client-side-encryption.md) |
-| Blob Storage client library for .NET, Java, and Python, version 11.x and below | 1.0 (not recommended) | Update your application to use Blob Storage SDK version 12.x or later.<br/><br/>Update your code to use client-side encryption v2.<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. | [Client-side encryption for blobs](../blobs/client-side-encryption.md) |
-| Queue Storage client library for .NET and Python, version 12.x and above | 2.0<br/><br/>1.0 (for backward compatibility only) | Update your code to use client-side encryption v2. | [Client-side encryption for queues](../queues/client-side-encryption.md) |
-| Queue Storage client library for .NET and Python, version 11.x and below | 1.0 (not recommended) | Update your application to use Blob Storage SDK version 12.x or later.<br/><br/>Update your code to use client-side encryption v2. | [Client-side encryption for queues](../queues/client-side-encryption.md) |
+| Blob Storage client libraries for .NET (version 12.13.0 and above), Java (version 12.18.0 and above), and Python (version 12.13.0 and above) | 2.0<br/><br/>1.0 (for backward compatibility only) | Update your code to use client-side encryption v2.<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. | [Client-side encryption for blobs](../blobs/client-side-encryption.md) |
+| Blob Storage client library for .NET (version 12.12.0 and below), Java (version 12.17.0 and below), and Python (version 12.12.0 and below) | 1.0 (not recommended) | Update your application to use a version of the Blob Storage SDK that supports client-side encryption v2. See [SDK support matrix for client-side encryption](../blobs/client-side-encryption.md#sdk-support-matrix-for-client-side-encryption) for details.<br/><br/>Update your code to use client-side encryption v2.<br/><br/>Download any encrypted data to decrypt it, then reencrypt it with client-side encryption v2. | [Client-side encryption for blobs](../blobs/client-side-encryption.md) |
+| Queue Storage client library for .NET (version 12.11.0 and above) and Python (version 12.4.0 and above) | 2.0<br/><br/>1.0 (for backward compatibility only) | Update your code to use client-side encryption v2. | [Client-side encryption for queues](../queues/client-side-encryption.md) |
+| Queue Storage client library for .NET (version 12.10.0 and below) and Python (version 12.3.0 and below) | 1.0 (not recommended) | Update your application to use a version of the Queue Storage SDK that supports client-side encryption v2. See [SDK support matrix for client-side encryption](../queues/client-side-encryption.md#sdk-support-matrix-for-client-side-encryption) for details.<br/><br/>Update your code to use client-side encryption v2. | [Client-side encryption for queues](../queues/client-side-encryption.md) |
| Table Storage client library for .NET, Java, and Python | 1.0 (not recommended) | Not available. | N/A | ## Next steps
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/client-side-encryption.md
The Azure Queue Storage client libraries for .NET and Python support encrypting
The Azure Queue Storage client library uses [AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) in order to encrypt user data. There are two versions of client-side encryption available in the client library: -- Version 2.x uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES.-- Version 1.x uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
+- Version 2 uses [Galois/Counter Mode (GCM)](https://en.wikipedia.org/wiki/Galois/Counter_Mode) mode with AES.
+- Version 1 uses [Cipher Block Chaining (CBC)](https://en.wikipedia.org/wiki/Block_cipher_mode_of_operation#Cipher-block_chaining_.28CBC.29) mode with AES.
> [!WARNING]
-> Using version 1.x of client-side encryption is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using version 1.x, we recommend that you update your application to use version 2.x and migrate your data. See the following section, [Mitigate the security vulnerability in your applications](#mitigate-the-security-vulnerability-in-your-applications), for further guidance.
+> Using version 1 of client-side encryption is no longer recommended due to a security vulnerability in the client library's implementation of CBC mode. For more information about this security vulnerability, see [Azure Storage updating client-side encryption in SDK to address security vulnerability](https://aka.ms/azstorageclientencryptionblog). If you are currently using version 1, we recommend that you update your application to use version 2 and migrate your data. See the following section, [Mitigate the security vulnerability in your applications](#mitigate-the-security-vulnerability-in-your-applications), for further guidance.
-## Mitigate the security vulnerability in your applications
+### Mitigate the security vulnerability in your applications
Due to a security vulnerability discovered in the Queue Storage client library's implementation of CBC mode, Microsoft recommends that you take one or more of the following actions immediately:
Due to a security vulnerability discovered in the Queue Storage client library's
- If you need to use client-side encryption, then migrate your applications from client-side encryption v1 to client-side encryption v2.
- Client-side encryption v2 is available only in version 12.x and later of the Azure Queue Storage client libraries for .NET and Python. If your application is using an earlier version of the client library, you must first upgrade your code to version 12.x or later, and then decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use version 12.x side-by-side with an earlier version of the client library while you are migrating your code.
- The following table summarizes the steps you'll need to take if you choose to migrate your applications to client-side encryption v2: | Client-side encryption status | Recommended actions | |||
-| Application is using client-side encryption with Azure Queue Storage SDK version 11.x or earlier | Update your application to use Queue Storage SDK version 12.x or later. If necessary, you can use version 12.x side-by-side with an earlier version of the client library while you are migrating your code.<br/><br/>Update your code to use client-side encryption v2. |
-| Application is using client-side encryption with Azure Queue Storage SDK version 12.x or later | Update your code to use client-side encryption v2. |
+| Application is using client-side encryption with a version of the client library that supports only client-side encryption v1. | Update your application to use a version of the client library that supports client-side encryption v2. See [SDK support matrix for client-side encryption](#sdk-support-matrix-for-client-side-encryption) for a list of supported versions. <br/><br/>Update your code to use client-side encryption v2. |
+| Application is using client-side encryption with a version of the client library that supports client-side encryption v2. | Update your code to use client-side encryption v2. |
Additionally, Microsoft recommends that you take the following steps to help secure your data: - Configure your storage accounts to use private endpoints to secure all traffic between your virtual network (VNet) and your storage account over a private link. For more information, see [Use private endpoints for Azure Storage](../common/storage-private-endpoints.md). - Limit network access to specific networks only.
+### SDK support matrix for client-side encryption
+
+The following table shows which versions of the client libraries for .NET and Python support which versions of client-side encryption:
+
+| | .NET | Python |
+|--|--|--|
+| **Client-side encryption v2 and v1** | [Versions 12.11.0 and later](https://www.nuget.org/packages/Azure.Storage.Queues) | [Versions 12.4.0 and later](https://pypi.org/project/azure-storage-queue) |
+| **Client-side encryption v1 only** | Versions 12.10.0 and earlier | Versions 12.3.0 and earlier |
+
+If your application is using client-side encryption with an earlier version of the .NET or Python client library, you must first upgrade your code to a version that supports client-side encryption v2. Next, you must decrypt and re-encrypt your data with client-side encryption v2. If necessary, you can use a version of the client library that supports client-side encryption v2 side-by-side with an earlier version of the client library while you are migrating your code.
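As an illustration only, here is a minimal Python sketch of opting a `QueueClient` into client-side encryption v2 after upgrading. The `encryption_version` attribute name is an assumption based on the client library's client-side encryption model, and `LocalKeyWrapper` is a hypothetical stand-in for a real key encryption key (for example, one backed by Azure Key Vault); verify the details against the SDK version you install.

```python
from azure.storage.queue import QueueClient

class LocalKeyWrapper:
    """Hypothetical key encryption key (KEK). The Azure Storage client-side
    encryption model expects wrap_key, unwrap_key, get_key_wrap_algorithm,
    and get_kid; replace this demo wrapper with a production KEK."""
    def __init__(self, kid: str, secret: bytes):
        self.kid, self.secret = kid, secret

    def wrap_key(self, key: bytes) -> bytes:
        # Demo-only XOR "wrapping"; never use this in production.
        return bytes(k ^ s for k, s in zip(key, self.secret))

    def unwrap_key(self, wrapped_key: bytes, algorithm: str) -> bytes:
        return bytes(k ^ s for k, s in zip(wrapped_key, self.secret))

    def get_key_wrap_algorithm(self) -> str:
        return "demo-xor"

    def get_kid(self) -> str:
        return self.kid

queue_client = QueueClient.from_connection_string("<connection-string>", "my-queue")
queue_client.key_encryption_key = LocalKeyWrapper("demo-kid", b"0" * 64)
queue_client.encryption_version = "2.0"   # assumption: opts in to client-side encryption v2
queue_client.send_message("message encrypted with client-side encryption v2")
```

Depending on the SDK version, you may also need to attach a key resolver (such as a `key_resolver_function`) so that existing v1-encrypted messages can still be decrypted while you migrate.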
+ ## How client-side encryption works The Azure Queue Storage client libraries use envelope encryption to encrypt and decrypt your data on the client side. Envelope encryption encrypts a key with one or more additional keys.
synapse-analytics Quickstart Read From Gen2 To Pandas Dataframe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-read-from-gen2-to-pandas-dataframe.md
Previously updated : 03/23/2021 Last updated : 07/11/2022
In this quickstart, you'll learn how to easily use Python to read data from an A
From a Synapse Studio notebook, you'll: -- connect to a container in Data Lake Storage Gen2 that is linked to your Azure Synapse Analytics workspace-- read the data from a PySpark Notebook using `spark.read.load`-- convert the data to a Pandas dataframe using `.toPandas()`
+- Connect to a container in Azure Data Lake Storage (ADLS) Gen2 that is linked to your Azure Synapse Analytics workspace.
+- Read the data from a PySpark Notebook using `spark.read.load`.
+- Convert the data to a Pandas dataframe using `.toPandas()`.
## Prerequisites - Azure subscription - [Create one for free](https://azure.microsoft.com/free/).-- Synapse Analytics workspace with Data Lake Storage Gen2 configured as the default storage - You need to be the **Storage Blob Data Contributor** of the Data Lake Storage Gen2 filesystem that you work with. For details on how to create a workspace, see [Creating a Synapse workspace](get-started-create-workspace.md).
+- Synapse Analytics workspace with ADLS Gen2 configured as the default storage - You need to be the **Storage Blob Data Contributor** of the ADLS Gen2 filesystem that you work with. For details on how to create a workspace, see [Creating a Synapse workspace](get-started-create-workspace.md).
- Apache Spark pool in your workspace - See [Create a serverless Apache Spark pool](get-started-analyze-spark.md#create-a-serverless-apache-spark-pool). ## Sign in to the Azure portal
Sign in to the [Azure portal](https://portal.azure.com/).
## Upload sample data to ADLS Gen2
-1. In the Azure portal, create a container in the same Data Lake Storage Gen2 used by Synapse Studio. You can skip this step if you want to use the default linked storage account in your Azure Synapse Analytics workspace.
+1. In the Azure portal, create a container in the same ADLS Gen2 used by Synapse Studio. You can skip this step if you want to use the default linked storage account in your Azure Synapse Analytics workspace.
-1. In Synapse Studio, click **Data**, select the **Linked** tab, and select the container under **Azure Data Lake Storage Gen2**.
+1. In Synapse Studio, select **Data**, select the **Linked** tab, and select the container under **Azure Data Lake Storage Gen2**.
1. Download the sample file [RetailSales.csv](https://github.com/Azure-Samples/Synapse/blob/main/Notebooks/PySpark/Synapse%20Link%20for%20Cosmos%20DB%20samples/Retail/RetailData/RetailSales.csv) and upload it to the container.
-1. Select the uploaded file, click **Properties**, and copy the **ABFSS Path** value.
+1. Select the uploaded file, select **Properties**, and copy the **ABFSS Path** value.
## Read data from ADLS Gen2 into a Pandas dataframe
-1. In the left pane, click **Develop**.
+1. In the left pane, select **Develop**.
-1. Click **+** and select "Notebook" to create a new notebook.
+1. Select **+** and then select **Notebook** to create a new notebook.
-1. In **Attach to**, select your Apache Spark Pool. If you don't have one, click **Create Apache Spark pool**.
+1. In **Attach to**, select your Apache Spark Pool. If you don't have one, select **Create Apache Spark pool**.
1. In the notebook code cell, paste the following Python code, inserting the ABFSS path you copied earlier:
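   As a rough sketch (the ABFSS path below is a placeholder; substitute the value you copied), the cell typically looks like this:

```python
# Runs in a Synapse PySpark notebook cell, where a `spark` session is available.
# The ABFSS path is a placeholder: replace it with the ABFSS Path value you copied.
abfss_path = 'abfss://<container-name>@<storage-account-name>.dfs.core.windows.net/RetailSales.csv'

# Read the CSV into a Spark DataFrame, then convert it to a Pandas dataframe
df = spark.read.load(abfss_path, format='csv', header=True)
pandas_df = df.toPandas()
print(pandas_df.head())
```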
Sign in to the [Azure portal](https://portal.azure.com/).
1. Run the cell.
-After a few minutes the text displayed should look similar to the following.
+After a few minutes, the text displayed should look similar to the following.
```text Command executed in 25s 324ms by gary on 03-23-2021 17:40:23.481 -07:00
Converting to Pandas.
- [What is Azure Synapse Analytics?](overview-what-is.md) - [Get Started with Azure Synapse Analytics](get-started.md) - [Create a serverless Apache Spark pool](get-started-analyze-spark.md#create-a-serverless-apache-spark-pool)
+- [How to use file mount/unmount API in Synapse](spark/synapse-file-mount-api.md)
+- [Azure Architecture Center: Explore data in Azure Blob storage with the pandas Python package](/azure/architecture/data-science-process/explore-data-blob)
+- [Tutorial: Use Pandas to read/write Azure Data Lake Storage Gen2 data in serverless Apache Spark pool in Synapse Analytics](spark/tutorial-use-pandas-spark-pool.md)
synapse-analytics Synapse File Mount Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/synapse-file-mount-api.md
Title: Introduction to file mount in synapse
-description: "Tutorial: How to use file mount/unmount API in Synapse"
+ Title: Introduction to file mount in Azure Synapse Analytics
+description: "Tutorial: How to use file mount/unmount API in Azure Synapse Analytics"
Previously updated : 11/18/2021 Last updated : 07/12/2022 -+ # How to use file mount/unmount API in Synapse
-Synapse studio team built two new mount/unmount APIs in mssparkutils package, you can use mount to attach remote storage (Blob, Gen2) to all working nodes (driver node and worker nodes), after that, you can access data in storage as if they were one the local file system with local file API.
+The Synapse Studio team built two new mount/unmount APIs in the Microsoft Spark Utilities (MSSparkUtils) package. You can use mount to attach remote storage (Azure Blob Storage or Azure Data Lake Storage (ADLS) Gen2) to all working nodes (driver node and worker nodes). Once in place, you can access data in the storage as if it were on the local file system by using the local file API. For more information, see [Introduction to Microsoft Spark Utilities](microsoft-spark-utilities.md).
-The document will show you How to use mount/unmount API in your workspace, mainly includes below sections:
+This document shows you how to use the mount/unmount API in your workspace. It covers the following sections:
-+ How to mount ADLS Gen2 Storage or Azure Blob Storage
++ How to mount Azure Data Lake Storage (ADLS) Gen2 or Azure Blob Storage + How to access files under mount point via local file system API
-+ How to access files under mount point using mssparktuils fs API
++ How to access files under mount point using `mssparktuils` fs API + How to access files under mount point using Spark Read API + How to unmount the mount point > [!WARNING]
-> + Azure Fileshare mount is temporarily disabled, you can use Gen2/blob mount following the [How to mount Gen2/blob Storage](#How-to-mount-Gen2/blob-Storage).
+> + Azure Fileshare mount is temporarily disabled. You can use ADLS Gen2/Blob mount by following [How to mount Gen2/blob Storage](#How-to-mount-Gen2/blob-Storage).
>
-> + Azure Gen1 storage is not supported, you can migrate to Gen2 following the [Migration gudiance](../../storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md) before using mount APIs.
+> + Azure Data Lake Storage Gen1 is not supported. You can migrate to ADLS Gen2 by following the [migration guidance](../../storage/blobs/data-lake-storage-migrate-gen1-to-gen2-azure-portal.md) before using the mount APIs.
-<h2 id="How-to-mount-Gen2/blob-Storage">How to mount Gen2/blob Storage</h2>
+<a id="How-to-mount-Gen2/blob-Storage"></a>
+## How to mount storage
-Here we will illustrate how to mount gen2 storage account step by step as an example, mounting blob storage works similarly.
+Here we illustrate how to mount an Azure Data Lake Storage (ADLS) Gen2 account step by step as an example; mounting Azure Blob Storage works similarly.
-Assuming you have one gen2 storage account named **storegen2** and the account has one container name **mycontainer**, and you want to mount the **mycontainer** to **/test** of your Spark pool.
+Assume you have an ADLS Gen2 account named `storegen2`. The account has one container named `mycontainer`, and you want to mount `mycontainer` to `/test` of your Spark pool.
-![Screenshot of gen2 storage account](./media/synapse-file-mount-api/gen2-storage-account.png)
+![Screenshot of ADLS Gen2 storage account](./media/synapse-file-mount-api/gen2-storage-account.png)
-To mount container **mycontainer**, mssparkutils need to check whether you have the permission to access the container at first, currently we support three authentication methods to trigger mount operation, **LinkedService**, **accountKey**, and **sastoken**.
+To mount the container `mycontainer`, `mssparkutils` first needs to check whether you have permission to access the container. Currently, three authentication methods are supported to trigger the mount operation: **LinkedService**, **accountKey**, and **sastoken**.
### Via Linked Service (recommended):
-Trigger mount via linked Service is our recommend way, there wonΓÇÖt have any security leak issue by this way, since mssparkutils doesnΓÇÖt store any secret/auth value itself, and it will always fetch auth value from linked service to request blob data from the remote storage.
+Triggering a mount via a linked service is the recommended way. This approach avoids security leaks, because `mssparkutils` doesn't store any secret or authorization value itself; it always fetches the authorization value from the linked service to request blob data from the remote storage.
![Screenshot of link services](./media/synapse-file-mount-api/synapse-link-service.png)
-You can create linked service for ADLS gen2 or blob storage. Currently, two authentication methods are supported when created linked service, one is using account key, another is using managed identity.
+You can create a linked service for ADLS Gen2 or Blob Storage. Currently, two authentication methods are supported when creating a linked service: account key and managed identity.
+ **Create linked service using account key** ![Screenshot of link services using account key](./media/synapse-file-mount-api/synapse-link-service-using-account-key.png)
You can create linked service for ADLS gen2 or blob storage. Currently, two auth
> + If you create a linked service using managed identity as the authentication method, make sure that the workspace MSI has the Storage Blob Data Contributor role on the mounted container. > + Always check the linked service connection to verify that the linked service was created successfully.
-After you create linked service successfully, you can easily mount the container to your Spark pool with below code.
+After you create the linked service successfully, you can mount the container to your Spark pool with the following Python code.
```python mssparkutils.fs.mount(
mssparkutils.fs.mount(
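For reference, a complete call might look like the following minimal sketch; the linked service name `LS_storegen2` is a hypothetical placeholder for the linked service you created:

```python
from notebookutils import mssparkutils

# Mount the container via the linked service; no secrets appear in code
mssparkutils.fs.mount(
    "abfss://mycontainer@storegen2.dfs.core.windows.net",
    "/test",
    {"linkedService": "LS_storegen2"}
)
```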
**Notice**:
-+ You may need to import mssparkutils if it not available.
++ You may need to import `mssparkutils` if it's not available. ```python from notebookutils import mssparkutils ```
-+ ItΓÇÖs not recommended to mount a root folder, no matter which authentication method is used.
++ It's not recommended to mount a root folder, no matter which authentication method is used. ### Via Shared Access Signature Token or Account Key
-In addition to mount with linked Service, mssparkutils also support explicitly passing account key or [SAS (shared access signature)](/samples/azure-samples/storage-dotnet-sas-getting-started/storage-dotnet-sas-getting-started/) token as parameter to mount the target.
-
-If you want to use account key or SAS token directly to mount container. A more secure way is to store account key or SAS token in Azure Key Vaults (as the below example figure shows), then retrieving them with `mssparkutil.credentials.getSecret` API.
+In addition to mounting with a linked service, `mssparkutils` also supports explicitly passing an account key or a [SAS (shared access signature)](/samples/azure-samples/storage-dotnet-sas-getting-started/storage-dotnet-sas-getting-started/) token as a parameter to mount the target.
+For security reasons, we recommend storing the account key or SAS token in Azure Key Vault (as the following example figure shows) and then retrieving it with the `mssparkutils.credentials.getSecret` API. For more information, see [Manage storage account keys with Key Vault and the Azure CLI (legacy)](../../key-vault/secrets/overview-storage-keys.md).
![Screenshot of key vaults](./media/synapse-file-mount-api/key-vaults.png)
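For example, a minimal sketch that retrieves an account key from a Key Vault named `MountKV` (the vault and secret names are placeholders) and uses it to mount the container:

```python
from notebookutils import mssparkutils

# Retrieve the storage account key from Azure Key Vault instead of hard-coding it
account_key = mssparkutils.credentials.getSecret("MountKV", "storegen2-account-key")

# Mount the container using the retrieved account key
mssparkutils.fs.mount(
    "abfss://mycontainer@storegen2.dfs.core.windows.net",
    "/test",
    {"accountKey": account_key}
)
```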
mssparkutils.fs.mount(
``` > [!Note]
-> We do not recommend writing credentials in code.
+> For security reasons, do not store credentials in code.
<! ## How to mount Azure File Shares > [!WARNING] > Fileshare mount is temporarily disable due to tech limitation issue. Please using blob/gen2 mount following above steps as workaround.
-Assuming you have a gen2 storage account named **storegen2** and the account has one file share named **myfileshare**, and you want to mount the **myfileshare** to **/test** of your spark pool.
+Assume you have an ADLS Gen2 storage account named `storegen2`. The account has one file share named `myfileshare`, and you want to mount `myfileshare` to `/test` of your Spark pool.
![Screenshot of file share](./media/synapse-file-mount-api/file-share.png)
-Mount azure file share only supports the account key authentication method, below is the code sample to mount **myfileshare** to **/test** and we reuse the Azure Key Value settings of `MountKV` here:
+Mounting an Azure file share supports only the account key authentication method. Below is the code sample to mount `myfileshare` to `/test`, reusing the Azure Key Vault settings of `MountKV`:
```python from notebookutils import mssparkutils
mssparkutils.fs.mount(
) ```
-In the above example, we pre-defined the schema format of source URL for the file share to: `https://<filesharename>@<accountname>.file.core.windows.net`, and we stored the account key in AKV, and retrieving them with mssparkutil.credentials.getSecret API instead of explicitly passing it to the mount API.
+In the above example, we predefined the schema format of the source URL for the file share as `https://<filesharename>@<accountname>.file.core.windows.net`, stored the account key in Azure Key Vault (AKV), and retrieved it with the `mssparkutils.credentials.getSecret` API instead of explicitly passing it to the mount API.
In the above example, we pre-defined the schema format of source URL for the fil
Once the mount runs successfully, you can access the data via the local file system API. Currently, the mount point is always created under the **/synfs** folder of the node and is scoped to the job/session level.
-So, for example if you mount **mycontainer** to **/test** folder, the created local mount point is `/synfs/{jobid}/test`, that means if you want to access mount point via local fs APIs after a successful mount, the local path used should be `/synfs/{jobid}/test`
+So, for example, if you mount `mycontainer` to the `/test` folder, the created local mount point is `/synfs/{jobid}/test`. This means that if you want to access the mount point via local file system APIs after a successful mount, the local path used should be `/synfs/{jobid}/test`.
Below is an example to show how it works.
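A minimal sketch, assuming the container was mounted to `/test` and contains a file named `myFile.txt` (a placeholder):

```python
from notebookutils import mssparkutils

# Resolve the job-scoped local mount path from the current Spark job ID
job_id = mssparkutils.env.getJobId()

with open(f"/synfs/{job_id}/test/myFile.txt", "r") as f:
    print(f.read())
```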
f.close()
## How to access files under mount point using mssparkutils fs API
-The main purpose of the mount operation is to let customer access the data stored in remote storage account using local file system API, you can also access data using mssparkutils fs API with mounted path as a parameter. While the path format used here is a little different.
+The main purpose of the mount operation is to let customers access data stored in a remote storage account by using the local file system API. You can also access the data by using the `mssparkutils fs` API with the mounted path as a parameter. The path format used here is a little different.
-Assuming you mounted to the ADLS gen2 container **mycontainer** to **/test** using mount API.
+Assume you mounted the ADLS Gen2 container `mycontainer` to `/test` using the mount API.
When you access the data using the local file system API, as described in the previous section, the path format is `/synfs/{jobId}/test/{filename}`
-While when you want to access the data with mssparkutils fs API, the path format is like:
+When you want to access the data with the `mssparkutils fs` API, the path format is:
`synfs:/{jobId}/test/{filename}`
-You can see the **synfs** is used as schema in this case instead of a part of the mounted path.
+You can see that `synfs` is used as the schema in this case, instead of being a part of the mounted path.
-Below are three examples to show how to access file with mount point path using mssparkutils fs, while **49** is a Spark job ID we got from calling mssparkutils.env.getJobId().
+Below are three examples that show how to access a file with the mount point path using `mssparkutils fs`, where **49** is a Spark job ID obtained by calling `mssparkutils.env.getJobId()`.
+ List dirs:
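  A minimal sketch, assuming job ID **49** (use the value returned by `mssparkutils.env.getJobId()` in your session):

```python
from notebookutils import mssparkutils

# List the contents of the mounted directory through the synfs schema
mssparkutils.fs.ls("synfs:/49/test/")
```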
Below are three examples to show how to access file with mount point path using
## How to access files under mount point using Spark Read API
-You can also use Spark read API with mounted path as parameter to access the data after mount as well, the path format here is same with the format of using mssparkutils fs API:
+You can also use the Spark read API with the mounted path as a parameter to access the data after mounting. The path format here is the same as the format used with the `mssparkutils fs` API:
`synfs:/{jobId}/test/{filename} `
-Below are two code examples, one is for a mounted gen2 storage, another is for a mounted blob storage.
+Below are two code examples: one for a mounted ADLS Gen2 storage account and another for a mounted Blob Storage account.
+
+<a id="read-file-from-a-mounted-gen2-storage-account"></a>
+### Read file from a mounted ADLS Gen2 storage account
-### Read file from a mounted gen2 storage account
+The below example assumes that an ADLS Gen2 storage account was already mounted, and then reads a file by using the mount path.
```python %%pyspark
-# Assume a gen2 storage was already mounted then read file using mount path
+# Assume an ADLS Gen2 storage account was already mounted, then read the file using the mount path
df = spark.read.load("synfs:/49/test/myFile.csv", format='csv') df.show()
df.show()
### Read file from a mounted blob storage account
-Notice that if you mounted a blob storage account then want to access it using **mssparkutils or Spark API**, you need to explicitly configure the sas token via spark configuration at first before try to mount container using mount API.
+Notice that if you mount a Blob Storage account and then want to access it by using `mssparkutils` or the Spark API, you need to explicitly configure the SAS token via the Spark configuration first, before you try to mount the container using the mount API.
+
+1. Update the Spark configuration as shown in the code example below if you want to access the storage by using `mssparkutils` or the Spark API after triggering the mount. You can skip this step if you only want to access the storage by using the local file API after mounting:
-1. Update Spark configuration as below code example if you want to access it using **mssparkutils or Spark API** after trigger mount, you can bypass this step if you only want to access it using local file api after mount:
```python blob_sas_token = mssparkutils.credentials.getConnectionStringOrCreds("myblobstorageaccount") spark.conf.set('fs.azure.sas.mycontainer.<blobStorageAccountName>.blob.core.windows.net', blob_sas_token) ```
-2. Create link service **myblobstorageaccount**, mount blob storage account with link service
+2. Create a linked service `myblobstorageaccount` and mount the Blob Storage account with the linked service:
```python %%spark
Notice that if you mounted a blob storage account then want to access it using *
) ```
-3. Read data from mounted blob storage through local file API.
+3. Mount the Blob Storage container, and then read the file by using the mount path through the local file API:
+ ```python # mount blob storage container and then read file using mount path with open("/synfs/64/test/myFile.txt") as f: print(f.read()) ```
-4. Read data from mounted blob storage through Spark read API.
+4. Read data from the mounted Blob Storage through the Spark read API:
+ ```python %%spark // mount blob storage container and then read file using mount path
Notice that if you mounted a blob storage account then want to access it using *
## How to unmount the mount point
-Unmount with your mount point, **/test** in our example:
+Unmount your mount point (`/test` in our example):
+ ```python mssparkutils.fs.unmount("/test") ``` ## Known limitations
-+ The mssparkutils fs help function hasnΓÇÖt added the description about mount/unmount part yet.
++ The `mssparkutils fs help` function doesn't include a description of the mount/unmount functionality yet. + In the future, we'll support an auto-unmount mechanism to remove the mount point when the application run finishes; currently, it isn't implemented. If you want to unmount the mount point to release disk space, you need to explicitly call the unmount API in your code. Otherwise, the mount point will still exist on the node even after the application run finishes. + Mounting an ADLS Gen1 storage account is not supported for now.
-
+## Next steps
+
+- [Get Started with Azure Synapse Analytics](../get-started.md)
+- [Monitor your Synapse Workspace](../get-started-monitor.md)
+- [Introduction to Microsoft Spark Utilities](microsoft-spark-utilities.md)
virtual-machine-scale-sets Disk Encryption Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-azure-resource-manager.md
Then follow these steps:
3. Click **Purchase** to deploy the template. > [!NOTE]
-> Virtual machine scale set encryption is supported with API version `2017-03-30` onwards. If you are using templates to enable scale set encryption, update the API version for virtual machine scale sets and the ADE extension inside the template. See this [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/201-encrypt-running-vmss-windows/azuredeploy.json) for more information.
+> Virtual machine scale set encryption is supported with API version `2017-03-30` onwards. If you are using templates to enable scale set encryption, update the API version for virtual machine scale sets and the ADE extension inside the template. See this [sample template](https://github.com/Azure/azure-quickstart-templates/blob/master/quickstarts/microsoft.compute/encrypt-running-vmss-windows/azuredeploy.json) for more information.
## Next steps
virtual-machine-scale-sets Disk Encryption Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-key-vault.md
Previously updated : 10/10/2019 Last updated : 05/23/2022 ms.devlang: azurecli
Creating and configuring a key vault for use with Azure Disk Encryption involves
2. Creating a key vault. 3. Setting key vault advanced access policies.
-These steps are illustrated in the following quickstarts:
- You may also, if you wish, generate or import a key encryption key (KEK). ## Install tools and connect to Azure
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md
The following platform SKUs are currently supported (and more are added periodic
| Publisher | OS Offer | Sku | |-||--|
-| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
+| Canonical | UbuntuServer | 18.04-LTS |
+| Canonical | UbuntuServer | 18.04-LTS-Gen2 |
+| Canonical | UbuntuServer | 20.04-LTS |
+| Canonical | UbuntuServer | 20.04-LTS-Gen2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-1 |
+| MicrosoftCblMariner | Cbl-Mariner | 1-Gen2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2 |
+| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2-Gen2 |
+| MicrosoftWindowsServer | WindowsServer | 2012-R2-Datacenter |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter |
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-gensecond |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-gs |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-gs |
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-containers |
+| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-Containers |
| MicrosoftWindowsServer | WindowsServer | 2016-Datacenter-with-containers-gs | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-core |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-core-with-containers |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-Core-with-Containers |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gensecond | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-gs | | MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-smalldisk |
-| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-containers |
+| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-Containers |
| MicrosoftWindowsServer | WindowsServer | 2019-Datacenter-with-containers-gs | | MicrosoftWindowsServer | WindowsServer | 2022-Datacenter | | MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-smalldisk |
The following platform SKUs are currently supported (and more are added periodic
| MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-core-smalldisk | | MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-g2 | | MicrosoftWindowsServer | WindowsServer | 2022-Datacenter-smalldisk-g2 |
-| Canonical | UbuntuServer | 20.04-LTS |
-| Canonical | UbuntuServer | 20.04-LTS-Gen2 |
-| Canonical | UbuntuServer | 18.04-LTS |
-| Canonical | UbuntuServer | 18.04-LTS-Gen2 |
-| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-1 |
-| MicrosoftCblMariner | Cbl-Mariner | 1-Gen2 |
-| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2 |
-| MicrosoftCblMariner | Cbl-Mariner | cbl-mariner-2-Gen2 |
## Requirements for configuring automatic OS image upgrade
virtual-machine-scale-sets Virtual Machine Scale Sets Orchestration Modes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md
The following table compares the Flexible orchestration mode, Uniform orchestrat
| Assign VM to a Specific Fault Domain | Yes | No | No | | Update Domains | Deprecated (platform maintenance performed FD by FD) | 5 update domains | Up to 20 update domains | | Perform Maintenance | Trigger maintenance on each instance using VM API | Yes | N/A |
+| [Capacity Reservation](../virtual-machines/capacity-reservation-overview.md) | Yes | Yes | Yes |
### Networking 
virtual-machine-scale-sets Virtual Machine Scale Sets Upgrade Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-upgrade-scale-set.md
Let's say you have a scale set with an Azure Load Balancer, and you want to repl
$vmss=Get-AzVmss -ResourceGroupName "myResourceGroup" -Name "myScaleSet" # Create a local PowerShell object for the new desired IP configuration, which includes the reference to the application gateway
- $ipconf = New-AzVmssIPConfig "myNic" -ApplicationGatewayBackendAddressPoolsId /subscriptions/{subscriptionId}/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/backendAddressPools/{applicationGatewayBackendAddressPoolName} -SubnetId $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].Subnet.Id ΓÇôName $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].Name
+ $ipconf = New-AzVmssIPConfig -ApplicationGatewayBackendAddressPoolsId /subscriptions/{subscriptionId}/resourceGroups/myResourceGroup/providers/Microsoft.Network/applicationGateways/{applicationGatewayName}/backendAddressPools/{applicationGatewayBackendAddressPoolName} -SubnetId $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].Subnet.Id -Name $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0].Name
# Replace the existing IP configuration in the local PowerShell object (which contains the references to the current Azure Load Balancer) with the new IP configuration $vmss.VirtualMachineProfile.NetworkProfile.NetworkInterfaceConfigurations[0].IpConfigurations[0] = $ipconf
virtual-machines Capacity Reservation Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-create.md
# Create a Capacity Reservation
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
+ Capacity Reservation is always created as part of a Capacity Reservation group. The first step is to create a group if a suitable one doesn't exist already, then create reservations. Once successfully created, reservations are immediately available for use with virtual machines. The capacity is reserved for your use as long as the reservation is not deleted. A well-formed request for Capacity Reservation group should always succeed as it does not reserve any capacity. It just acts as a container for reservations. However, a request for Capacity Reservation could fail if you do not have the required quota for the VM series or if Azure doesn't have enough capacity to fulfill the request. Either request more quota or try a different VM size, location, or zone combination.
virtual-machines Capacity Reservation Modify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-modify.md
# Modify a Capacity Reservation (preview)
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
+ After creating a Capacity Reservation group and Capacity Reservation, you may want to modify your reservations. This article explains how to do the following actions using API, Azure portal, and PowerShell. > [!div class="checklist"]
virtual-machines Capacity Reservation Overallocate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overallocate.md
# Overallocating Capacity Reservation
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
+ Azure permits association of extra VMs beyond the reserved count of a Capacity Reservation to facilitate burst and other scale-out scenarios, without the overhead of managing around the limits of reserved capacity. The only difference is that the count of VMs beyond the quantity reserved does not receive the capacity availability SLA benefit. As long as Azure has available capacity that meets the virtual machine requirements, the extra allocations will succeed. The Instance View of a Capacity Reservation group provides a snapshot of usage for each member Capacity Reservation. You can use the Instance View to see how overallocation works.
virtual-machines Capacity Reservation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-overview.md
# On-demand Capacity Reservation
+**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
+ On-demand Capacity Reservation enables you to reserve Compute capacity in an Azure region or an Availability Zone for any duration of time. Unlike [Reserved Instances](https://azure.microsoft.com/pricing/reserved-vm-instances/), you do not have to sign up for a 1-year or a 3-year term commitment. Create and delete reservations at any time and have full control over how you want to manage your reservations. Once the Capacity Reservation is created, the capacity is available immediately and is exclusively reserved for your use until the reservation is deleted.
In the previous image, the VM Reserved Instance discount is applied to VM 0, whi
## Next steps
-Create a Capacity Reservation and start reserving Compute capacity in an Azure region or an Availability Zone.
-
-> [!div class="nextstepaction"]
-> [Create a Capacity Reservation](capacity-reservation-create.md)
+Get started reserving Compute capacity. Check out our other related Capacity Reservation articles:
+- [Create a capacity reservation](capacity-reservation-create.md)
+- [Overallocating capacity reservation](capacity-reservation-overallocate.md)
+- [Modify a capacity reservation](capacity-reservation-modify.md)
+- [Associate a VM](capacity-reservation-associate-vm.md)
+- [Remove a VM](capacity-reservation-remove-vm.md)
+- [Associate a VM scale set - Flexible](capacity-reservation-associate-virtual-machine-scale-set-flex.md)
+- [Associate a VM scale set - Uniform](capacity-reservation-associate-virtual-machine-scale-set.md)
+- [Remove a VM scale set](capacity-reservation-remove-virtual-machine-scale-set.md)
virtual-machines Capacity Reservation Remove Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/capacity-reservation-remove-virtual-machine-scale-set.md
# Remove a virtual machine scale set association from a Capacity Reservation group
-**Applies to:** :heavy_check_mark: Uniform scale set
+**Applies to:** :heavy_check_mark: Uniform scale set :heavy_check_mark: Flexible scale sets
This article walks you through removing a virtual machine scale set association from a Capacity Reservation group. To learn more about capacity reservations, see the [overview article](capacity-reservation-overview.md).
virtual-machines Agent Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/agent-windows.md
When logged in to a Windows VM, Task Manager can be used to examine running proc
The Azure VM Agent for Windows is automatically upgraded on images deployed from the Azure Marketplace. The new versions are stored in Azure Storage, so please ensure you don't have firewalls blocking access. As new VMs are deployed to Azure, they receive the latest VM agent at VM provision time. If you have installed the agent manually or are deploying custom VM images you will need to manually update to include the new VM agent at image creation time. ## Windows Guest Agent Automatic Logs Collection
-Windows Guest Agent has a feature to automatically collect some logs. This feature is controller by the CollectGuestLogs.exe process.
+Windows Guest Agent has a feature to automatically collect some logs. This feature is controlled by the CollectGuestLogs.exe process.
It exists for both PaaS Cloud Services and IaaS Virtual Machines and its goal is to quickly & automatically collect some diagnostics logs from a VM - so they can be used for offline analysis. The collected logs are Event Logs, OS Logs, Azure Logs and some registry keys. It produces a ZIP file that is transferred to the VM's Host. This ZIP file can then be looked at by Engineering Teams and Support professionals to investigate issues on request of the customer owning the VM.
virtual-machines Custom Script Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/custom-script-windows.md
The Custom Script Extension for Windows will run on these supported operating sy
* Windows Server 2016 Core * Windows Server 2019 * Windows Server 2019 Core
+* Windows 11
### Script location
virtual-machines Mitigate Se https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/mitigate-se.md
Previously updated : 06/14/2022 Last updated : 07/12/2022
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/overview.md
The resources in this table are used by the VM and need to exist or be created w
| Resource | Required | Description | | | | | | [Resource group](../../azure-resource-manager/management/overview.md) |Yes |The VM must be contained in a resource group. |
-| [Storage account](../../storage/common/storage-account-create.md) |Yes |The VM needs the storage account to store its virtual hard disks. |
+| [OS disk](../managed-disks-overview.md) |Yes |The VM needs a disk to store the OS in most cases. |
| [Virtual network](../../virtual-network/virtual-networks-overview.md) |Yes |The VM must be a member of a virtual network. | | [Public IP address](../../virtual-network/ip-services/public-ip-addresses.md) |No |The VM can have a public IP address assigned to it to remotely access it. | | [Network interface](../../virtual-network/virtual-network-network-interface.md) |Yes |The VM needs the network interface to communicate in the network. |
virtual-network Monitor Public Ip Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/monitor-public-ip-reference.md
For more information on the schema of Activity Log entries, see [Activity Log sc
- See [Monitoring Azure Public IP Address](monitor-public-ip.md) for a description of monitoring Azure Public IP addresses. -- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/insights/monitor-azure-resources) for details on monitoring Azure resources.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
virtual-network Public Ip Address Prefix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-address-prefix.md
Resource|Scenario|Steps|
## Limitations -- You can't specify the IP addresses for the prefix. Azure gives the IP addresses for the prefix, based on the size that you specify. Additionally, all public IP addresses created from the prefix must exist in the same Azure region and subscription as the prefix. Addresses must be assigned to resources in the same region and subscription.
+- You can't specify the set of IP addresses for the prefix (though you can specify which IP you want from the prefix). Azure gives the IP addresses for the prefix, based on the size that you specify. Additionally, all public IP addresses created from the prefix must exist in the same Azure region and subscription as the prefix. Addresses must be assigned to resources in the same region and subscription.
- You can create a prefix of up to 16 IP addresses. Review [Network limits increase requests](../../azure-portal/supportability/networking-quota-requests.md) and [Azure limits](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) for more information. - The size of the range cannot be modified after the prefix has been created. - Only static public IP addresses created with the standard SKU can be assigned from the prefix's range. To learn more about public IP address SKUs, see [public IP address](public-ip-addresses.md#public-ip-addresses).
virtual-network Manage Virtual Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-virtual-network.md
Complete the following tasks before completing steps in any section of this arti
## Create a virtual network 1. Select **+ Create a resource** > **Networking** > **Virtual network**.
-2. Enter or select values for the following settings, then select **Create**:
- - **Name**: The name must be unique in the [resource group](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) that you select to create the virtual network in. You can't change the name after the virtual network is created. You can create multiple virtual networks over time. For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks.
+2. In **Create virtual network**, enter or select values for the following settings on the *Basics* tab:
+ | **Setting** | **Description** |
+ | | |
+ | **Project details** | |
+ | **Subscription** | Select a [subscription](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription). You cannot use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions with [virtual network peering](virtual-network-peering-overview.md). Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network.|
+ |**Resource group**|Select an existing [resource group](../azure-resource-manager/management/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-groups) or create a new one. An Azure resource that you connect to the virtual network can be in the same resource group as the virtual network or in a different resource group.|
+ | **Instance details** | |
+ | **Name** |The name must be unique in the [resource group](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-group) that you select to create the virtual network in. You cannot change the name after the virtual network is created. You can create multiple virtual networks over time. For naming suggestions, see [Naming conventions](/azure/cloud-adoption-framework/ready/azure-best-practices/naming-and-tagging#naming-and-tagging-resources). Following a naming convention can help make it easier to manage multiple virtual networks.|
+ | **Region** | Select an Azure [region](https://azure.microsoft.com/regions/). A virtual network can be in only one Azure region. However, you can connect a virtual network in one region to a virtual network in another region by using a VPN gateway. Any Azure resource that you connect to the virtual network must be in the same region as the virtual network.|
+1. Select the **IP Addresses** tab or select **Next: IP Addresses >**, and enter the following IP address information:
- **Address space**: The address space for a virtual network is composed of one or more non-overlapping address ranges that are specified in CIDR notation. The address range you define can be public or private (RFC 1918). Whether you define the address range as public or private, the address range is reachable only from within the virtual network, from interconnected virtual networks, and from any on-premises networks that you've connected to the virtual network. You can't add the following address ranges: - 224.0.0.0/4 (Multicast) - 255.255.255.255/32 (Broadcast)
Complete the following tasks before completing steps in any section of this arti
> If a virtual network has address ranges that overlap with another virtual network or on-premises network, the two networks can't be connected. Before you define an address range, consider whether you might want to connect the virtual network to other virtual networks or on-premises networks in the future. Microsoft recommends configuring virtual network address ranges with private address space or public address space owned by your organization. >
+ - **Add IPv6 address space**
+ - **Subnet name**: The subnet name must be unique within the virtual network. You can't change the subnet name after the subnet is created. The portal requires that you define one subnet when you create a virtual network, even though a virtual network isn't required to have any subnets. In the portal, you can define one or more subnets when you create a virtual network. You can add more subnets to the virtual network later, after the virtual network is created. To add a subnet to a virtual network, see [Manage subnets](virtual-network-manage-subnet.md). You can create a virtual network that has multiple subnets by using Azure CLI or PowerShell. >[!TIP]
Complete the following tasks before completing steps in any section of this arti
> - **Subnet address range**: The range must be within the address space you entered for the virtual network. The smallest range you can specify is /29, which provides eight IP addresses for the subnet. Azure reserves the first and last address in each subnet for protocol conformance. Three more addresses are reserved for Azure service usage. As a result, a virtual network with a subnet address range of /29 has only three usable IP addresses. If you plan to connect a virtual network to a VPN gateway, you must create a gateway subnet. Learn more about [specific address range considerations for gateway subnets](../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md?toc=%2fazure%2fvirtual-network%2ftoc.json#gwsub). You can change the address range after the subnet is created, under specific conditions. To learn how to change a subnet address range, see [Manage subnets](virtual-network-manage-subnet.md).
- - **Subscription**: Select a [subscription](../azure-glossary-cloud-terminology.md?toc=%2fazure%2fvirtual-network%2ftoc.json#subscription). You can't use the same virtual network in more than one Azure subscription. However, you can connect a virtual network in one subscription to virtual networks in other subscriptions with [virtual network peering](virtual-network-peering-overview.md). Any Azure resource that you connect to the virtual network must be in the same subscription as the virtual network.
- - **Resource group**: Select an existing [resource group](../azure-resource-manager/management/overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#resource-groups) or create a new one. An Azure resource that you connect to the virtual network can be in the same resource group as the virtual network or in a different resource group.
- - **Location**: Select an Azure [location](https://azure.microsoft.com/regions/), also known as a region. A virtual network can be in only one Azure location. However, you can connect a virtual network in one location to a virtual network in another location by using a VPN gateway. Any Azure resource that you connect to the virtual network must be in the same location as the virtual network.
**Commands**
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureBotService** | Azure Bot Service. | Outbound | No | No | | **AzureCloud** | All [datacenter public IP addresses](https://www.microsoft.com/download/details.aspx?id=56519). | Outbound | Yes | Yes | | **AzureCognitiveSearch** | Azure Cognitive Search. <br/><br/>This tag or the IP addresses covered by this tag can be used to grant indexers secure access to data sources. For more information about indexers, see [indexer connection documentation](../search/search-indexer-troubleshooting.md#connection-errors). <br/><br/> **Note**: The IP of the search service isn't included in the list of IP ranges for this service tag and **also needs to be added** to the IP firewall of data sources. | Inbound | No | No |
-| **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Inbound / Outbound | Yes | Yes |
+| **AzureConnectors** | This tag represents the IP addresses used for managed connectors that make inbound webhook callbacks to the Azure Logic Apps service and outbound calls to their respective services, for example, Azure Storage or Azure Event Hubs. | Both | Yes | Yes |
| **AzureContainerRegistry** | Azure Container Registry. | Outbound | Yes | Yes | | **AzureCosmosDB** | Azure Cosmos DB. | Outbound | Yes | Yes | | **AzureDatabricks** | Azure Databricks. | Both | No | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureDataLake** | Azure Data Lake Storage Gen1. | Outbound | No | Yes | | **AzureDeviceUpdate** | Device Update for IoT Hub. | Both | No | Yes | | **AzureDevSpaces** | Azure Dev Spaces. | Outbound | No | No |
-| **AzureDevOps** | Azure Dev Ops. | Inbound | No | Yes |
+| **AzureDevOps** | Azure DevOps. | Inbound | No | Yes |
| **AzureDigitalTwins** | Azure Digital Twins.<br/><br/>**Note**: This tag or the IP addresses covered by this tag can be used to restrict access to endpoints configured for event routes. | Inbound | No | Yes | | **AzureEventGrid** | Azure Event Grid. | Both | No | No | | **AzureFrontDoor.Frontend** <br/> **AzureFrontDoor.Backend** <br/> **AzureFrontDoor.FirstParty** | Azure Front Door. | Both | No | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureIoTHub** | Azure IoT Hub. | Outbound | Yes | No |
| **AzureKeyVault** | Azure Key Vault.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory** tag. | Outbound | Yes | Yes |
| **AzureLoadBalancer** | The Azure infrastructure load balancer. The tag translates to the [virtual IP address of the host](./network-security-groups-overview.md#azure-platform-considerations) (168.63.129.16) where the Azure health probes originate. This only includes probe traffic, not real traffic to your backend resource. If you're not using Azure Load Balancer, you can override this rule. | Both | No | No |
+| **AzureLoadTestingInstanceManagement** | This service tag is used for inbound connectivity from the Azure Load Testing service to the load generation instances injected into your virtual network in the private load testing scenario. <br/><br/>**Note:** This tag is intended to be used in Azure Firewall, NSG, UDR, and all other gateways for inbound connectivity. | Inbound | No | Yes |
| **AzureMachineLearning** | Azure Machine Learning. | Both | No | Yes |
| **AzureMonitor** | Log Analytics, Application Insights, AzMon, and custom metrics (GiG endpoints).<br/><br/>**Note**: For Log Analytics, the **Storage** tag is also required. If Linux agents are used, the **GuestAndHybridManagement** tag is also required. | Outbound | No | Yes |
| **AzureOpenDatasets** | Azure Open Datasets.<br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.Frontend** and **Storage** tags. | Outbound | No | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **MicrosoftContainerRegistry** | Container registry for Microsoft container images. <br/><br/>**Note**: This tag has a dependency on the **AzureFrontDoor.FirstParty** tag. | Outbound | Yes | Yes |
| **PowerBI** | Power BI. | Both | No | No |
| **PowerPlatformInfra** | This tag represents the IP addresses used by the infrastructure to host Power Platform services. | Outbound | Yes | Yes |
+| **PowerPlatformPlex** | This tag represents the IP addresses used by the infrastructure to host Power Platform extension execution on behalf of the customer. | Inbound | Yes | Yes |
| **PowerQueryOnline** | Power Query Online. | Both | No | No |
| **ServiceBus** | Azure Service Bus traffic that uses the Premium service tier. | Outbound | Yes | Yes |
-| **ServiceFabric** | Azure Service Fabric.<br/><br/>**Note**: This tag represents the Service Fabric service endpoint for control plane per region. This enables customers to perform management operations for their Service Fabric clusters from their VNET (endpoint eg. https:// westus.servicefabric.azure.com). | Both | No | No |
+| **ServiceFabric** | Azure Service Fabric.<br/><br/>**Note**: This tag represents the Service Fabric service endpoint for the control plane per region. It enables customers to perform management operations for their Service Fabric clusters from their VNET (for example, the endpoint `https://westus.servicefabric.azure.com`). | Both | No | No |
| **Sql** | Azure SQL Database, Azure Database for MySQL, Azure Database for PostgreSQL, Azure Database for MariaDB, and Azure Synapse Analytics.<br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure SQL Database service, but not a specific SQL database or server. This tag doesn't apply to SQL managed instance. | Outbound | Yes | Yes |
| **SqlManagement** | Management traffic for SQL-dedicated deployments. | Both | No | Yes |
| **Storage** | Azure Storage. <br/><br/>**Note**: This tag represents the service, but not specific instances of the service. For example, the tag represents the Azure Storage service, but not a specific Azure Storage account. | Outbound | Yes | Yes |
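To make the table above concrete, the following Azure PowerShell sketch (an illustration, not part of the original article) uses the **Storage** service tag as the destination of an outbound network security group rule. The resource names and region are hypothetical.

```azurepowershell
# A minimal sketch, assuming the Az PowerShell module and a signed-in session.
# "myNsg", "myResourceGroup", and "eastus" are hypothetical values for illustration only.

# Allow outbound HTTPS from the virtual network to the Azure Storage service tag.
$rule = New-AzNetworkSecurityRuleConfig `
  -Name "Allow-Storage-Outbound" `
  -Description "Allow outbound HTTPS to the Storage service tag" `
  -Access Allow `
  -Protocol Tcp `
  -Direction Outbound `
  -Priority 100 `
  -SourceAddressPrefix VirtualNetwork `
  -SourcePortRange "*" `
  -DestinationAddressPrefix Storage `
  -DestinationPortRange 443

# Create a network security group that contains the rule.
New-AzNetworkSecurityGroup `
  -Name "myNsg" `
  -ResourceGroupName "myResourceGroup" `
  -Location "eastus" `
  -SecurityRules $rule
```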
virtual-wan Azure Vpn Client Optional Configurations Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/azure-vpn-client-optional-configurations-windows.md
description: Learn how to configure the Azure VPN Client optional configuration
Previously updated : 07/06/2022 Last updated : 07/12/2022
Modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<d
</azvpnprofile>
```
-### <a name="forced-tunneling"></a>Direct all traffic to the VPN tunnel (force tunnel)
+### <a name="forced-tunneling"></a>Direct all traffic to the VPN tunnel (forced tunneling)
-You can include 0/0 if you're using the Azure VPN Client version 2.1900:39.0 or higher.
+You can include 0/0 if you're using the Azure VPN Client version 2.1900:39.0 or higher. Modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags. Make sure to update the version number to **2**.
-Modify the downloaded profile xml file and add the **\<includeroutes>\<route>\<destination>\<mask> \</destination>\</mask>\</route>\</includeroutes>** tags. Make sure to update the version number to **2**.
+For more information about configuring forced tunneling, including additional configuration options, see [How to configure forced tunneling](how-to-forced-tunnel.md).
```xml
<azvpnprofile>
virtual-wan How To Forced Tunnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-forced-tunnel.md
Title: 'Configure forced tunneling for Virtual WAN Point-to-site VPN'
description: Learn to configure forced tunneling for P2S VPN in Virtual WAN. - Previously updated : 3/25/2022 Last updated : 07/12/2022
Forced tunneling allows you to send **all** traffic (including Internet-bound traffic) from remote users to Azure. In Virtual WAN, forced tunneling for Point-to-site VPN remote users signifies that the 0.0.0.0/0 default route is advertised to remote VPN users.
-## Creating a Virtual WAN hub
+## Create a Virtual WAN hub
The steps in this article assume that you've already deployed a virtual WAN with one or more hubs.
To create a new virtual WAN and a new hub, use the steps in the following articl
* [Create a virtual WAN](virtual-wan-site-to-site-portal.md#openvwan)
* [Create a virtual hub](virtual-wan-site-to-site-portal.md#hub)
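If you'd rather script the prerequisite than follow the portal articles linked above, a hedged Azure PowerShell sketch might look like the following. The names and hub address space are hypothetical; the linked articles remain the authoritative steps.

```azurepowershell
# A minimal sketch, assuming the Az PowerShell module and a signed-in session.
# "sampleRG", "sampleVWAN", "sampleHub", and the address prefix are hypothetical values.

# Create the virtual WAN.
$vwan = New-AzVirtualWan -ResourceGroupName "sampleRG" -Name "sampleVWAN" -Location "eastus"

# Create a virtual hub in that virtual WAN (the hub needs its own address space, at least a /24).
New-AzVirtualHub `
  -ResourceGroupName "sampleRG" `
  -Name "sampleHub" `
  -VirtualWan $vwan `
  -AddressPrefix "10.10.0.0/24" `
  -Location "eastus"
```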
-## Setting up Point-to-site VPN
+## Set up Point-to-site VPN
The steps in this article also assume that you've already deployed a Point-to-site VPN gateway in the Virtual WAN hub and created the Point-to-site VPN profiles to assign to it. To create the Point-to-site VPN gateway and related profiles, see [Create a Point-to-site VPN gateway](virtual-wan-point-to-site-portal.md).
-## Advertising default route to clients
+## Advertise default route to clients
There are a couple of ways to configure forced tunneling and advertise the default route (0.0.0.0/0) to your remote user VPN clients connected to Virtual WAN.
To turn on the EnableInternetSecurity flag, use the following PowerShell command
Update-AzP2sVpnGateway -ResourceGroupName "sampleRG" -Name "p2sgwsamplename" -EnableInternetSecurityFlag
```
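As a hedged follow-up (not part of the original article), you can retrieve the gateway afterwards with the same resource names and inspect its point-to-site connection configuration to confirm the flag took effect:

```azurepowershell
# A minimal sketch using the same resource names as the command above.
# Inspect the P2S connection configuration in the output to confirm internet security is enabled.
Get-AzP2sVpnGateway -ResourceGroupName "sampleRG" -Name "p2sgwsamplename"
```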
-## Downloading the Point-to-site VPN profile
+## Download the Point-to-site VPN profile
To download the Point-to-site VPN profile, see [global and hub profiles](global-hub-profile.md). The information in the zip file downloaded from the Azure portal is critical to properly configuring your clients.
-## Configuring forced-tunneling for Azure VPN clients (OpenVPN)
+## Configure forced-tunneling for Azure VPN clients (OpenVPN)
The steps to configure forced tunneling differ depending on the operating system of the end-user device.
-## Windows clients
+### Windows clients
> [!NOTE] > For Windows clients, forced tunneling with the Azure VPN client is only available with software version 2:1900:39.0 or newer.
The steps to configure forced-tunneling are different, depending on the operatin
1. Connect to the newly added connection. You are now force-tunneling all traffic to Azure Virtual WAN.
-## MacOS clients
+### MacOS clients
Once a macOS client learns the default route from Azure, forced tunneling is automatically configured on the client device. There are no extra steps to take. For instructions on how to use the macOS Azure VPN client to connect to the Virtual WAN Point-to-site VPN gateway, see the [macOS Configuration Guide](openvpn-azure-ad-client-mac.md).
virtual-wan Hub Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/hub-settings.md
description: This article answers common questions about virtual hub settings an
Previously updated : 05/30/2022 Last updated : 07/12/2022
Adjust the virtual hub capacity when you need to support additional virtual mach
To add virtual hub capacity, go to the virtual hub in the Azure portal. On the **Overview** page, select **Edit virtual hub**. Adjust the **Virtual hub capacity** using the dropdown, and then select **Confirm**.
-> [!NOTE]
-> When you edit virtual hub capacity, there will be data path disruption if the change in scale units has resulted in an underlying VPN GW SKU change.
->
-
### Routing infrastructure unit table

For pricing information, see [Azure Virtual WAN pricing](https://azure.microsoft.com/pricing/details/virtual-wan/).