Updates from: 02/10/2023 02:44:29
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
Title: Set up a sign-up and sign-in flow
description: Learn how to set up a sign-up and sign-in flow in Azure Active Directory B2C. -+ Previously updated : 10/21/2021- Last updated : 02/09/2023+ zone_pivot_groups: b2c-policy-type
Watch this video to learn how the user sign-up and sign-in policy works.
## Prerequisites
-If you haven't already done so, [register a web application in Azure Active Directory B2C](tutorial-register-applications.md).
::: zone pivot="b2c-user-flow"
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
When applications are made up of multiple individual web applications using diffe
The following figure shows an example of a complex application domain structure.
-![Diagram of domain structure for a complex application showing resource sharing between primary and secondary application.](./media/application-proxy-configure-complex-application/complex-app-structure.png)
With [Azure AD Application Proxy](application-proxy.md), you can address this issue by using complex application publishing that is made up of multiple URLs across various domains.
-![Diagram of a Complex application with multiple application segments definition.](./media/application-proxy-configure-complex-application/complex-app-flow.png)
A complex app has multiple app segments, with each app segment being a pair of an internal and external URL. One Conditional Access policy is associated with the app, and access to any of the external URLs works with pre-authentication under the same set of policies enforced for all.
This article provides you with the information you need to configure wildcard ap
## Characteristics of application segments for a complex application
1. Application segments can be configured only for a wildcard application.
2. The external and alternate URLs should match the wildcard external and alternate URL domains of the application, respectively.
-3. Application segment URL's (internal and external) need to maintain uniqueness across complex applications.
+3. Application segment URLs (internal and external) need to maintain uniqueness across complex applications.
4. CORS rules (optional) can be configured per application segment.
5. Access is granted only to the defined application segments of a complex application.
   - Note: If all application segments are deleted, a complex application behaves as a wildcard application, opening access to all valid URLs in the specified domain.
Before you get started with Application Proxy Complex application scenario apps,
## Configure application segments for a complex application
-To configure (and update) Application Segments for a complex app using the API, you first [create a wildcard application](application-proxy-wildcard.md#create-a-wildcard-application), and then update the application's onPremisesPublishing property to configure the application segments and respective CORS settings.
- > [!NOTE]
-> 2 application segment per complex application are supported for [Microsoft Azure AD premium subscription](https://azure.microsoft.com/pricing/details/active-directory). Licence requirement for more than 2 application segments per complex application to be announced soon.
-
-If successful, this method returns a `204 No Content` response code and does not return anything in the response body.
-## Example
-
-##### Request
-Here is an example of the request.
-
-```http
-PATCH https://graph.microsoft.com/beta/applications/{<object-id-of--the-complex-app-under-APP-Registrations}
-Content-type: application/json
-
-{
- "onPremisesPublishing": {
- "onPremisesApplicationSegments": [
- {
- "externalUrl": "https://home.contoso.net/",
- "internalUrl": "https://home.test.com/",
- "alternateUrl": "",
- "corsConfigurations": []
- },
- {
- "externalUrl": "https://assets.constoso.net/",
- "internalUrl": "https://assets.test.com",
- "alternateUrl": "",
- "corsConfigurations": [
- {
- "resource": "/",
- "allowedOrigins": [
- "https://home.contoso.net/"
- ],
- "allowedHeaders": [
- "*"
- ],
- "allowedMethods": [
- "*"
- ],
- "maxAgeInSeconds": 0
- }
- ]
- }
- ]
- }
-}
-
-```
-##### Response
-
-```http
-HTTP/1.1 204 No Content
-```
+> Two application segments per complex distributed application are supported with a [Microsoft Azure AD Premium subscription](https://azure.microsoft.com/pricing/details/active-directory). The license requirement for more than two application segments per complex application will be announced soon.
+
+To publish a complex distributed app through Application Proxy with application segments:
+
+1. [Create a wildcard application.](application-proxy-wildcard.md#create-a-wildcard-application)
+
+1. On the Application Proxy Basic settings page, select "Add application segments".
+
+ :::image type="content" source="./media/application-proxy-configure-complex-application/add-application-segments.png" alt-text="Screenshot of link to add an application segment.":::
+
+3. On the Manage and configure application segments page, select "+ Add app segment".
+
+    :::image type="content" source="./media/application-proxy-configure-complex-application/add-application-segment-1.png" alt-text="Screenshot of the Manage and configure application segments blade.":::
+
+4. In the Internal Url field, enter the internal URL for your app.
+
+5. In the External Url field, select the custom domain you want to use from the dropdown list.
+
+6. Add CORS rules (optional). For more information, see [Configuring CORS Rules](https://learn.microsoft.com/graph/api/resources/corsconfiguration_v2?view=graph-rest-beta).
+
+7. Select Create.
+
+ :::image type="content" source="./media/application-proxy-configure-complex-application/create-app-segment.png" alt-text="Screenshot of add or edit application segment context plane.":::
+
+Your application is now set up to use the configured application segments. Be sure to assign users to your application before you test or release it.
+
+To edit or update an application segment, select the respective application segment from the list on the Manage and configure application segments page. Upload a certificate for the updated domain, if necessary, and update the DNS record.
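
If you prefer to script the same segment configuration, a minimal PowerShell sketch using the Microsoft Graph beta endpoint is shown below. The object ID and URLs are placeholders, the Microsoft.Graph PowerShell module and an appropriate permission scope are assumed, and the payload shape mirrors the `onPremisesPublishing` example earlier in this change:

```PowerShell
# Sign in with a scope that can update applications (Directory.ReadWrite.All is assumed here)
Connect-MgGraph -Scopes "Directory.ReadWrite.All"

# Build the application segment payload (URLs are placeholders)
$body = @{
    onPremisesPublishing = @{
        onPremisesApplicationSegments = @(
            @{
                externalUrl        = "https://home.contoso.net/"
                internalUrl        = "https://home.test.com/"
                alternateUrl       = ""
                corsConfigurations = @()
            }
        )
    }
} | ConvertTo-Json -Depth 10

# PATCH the application object on the beta endpoint (replace the object ID placeholder)
Invoke-MgGraphRequest -Method PATCH -Uri "https://graph.microsoft.com/beta/applications/{application-object-id}" -Body $body -ContentType "application/json"
```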
+
+## DNS updates
+
+When using custom domains, you need to create a DNS entry with a CNAME record for the external URL (for example, `*.adventure-works.com`) pointing to the external URL of the application proxy endpoint. For wildcard applications, the CNAME record needs to point to the relevant external URL:
+
+> `<yourAADTenantId>.tenant.runtime.msappproxy.net`
+
+Alternatively, a DNS entry with a CNAME record for every individual application segment can be created as follows:
+
+> `<External URL of application segment>` points to `<External URL without domain>-<tenantname>.msappproxy.net` <br>
+For example, in the above instance, `home.contoso.ashcorp.us` points to `home-ashcorp1.msappproxy.net`.
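
To confirm that a CNAME record resolves as expected, one possible check from Windows PowerShell, using the example hostname above (`Resolve-DnsName` comes with the built-in DnsClient module on Windows):

```PowerShell
# Verify that the application segment's external URL resolves to the expected msappproxy.net endpoint
Resolve-DnsName -Name "home.contoso.ashcorp.us" -Type CNAME
```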
+
+For more detailed instructions for Application Proxy, see [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md).
## See also - [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure Active Directory](../app-proxy/application-proxy-add-on-premises-application.md)
active-directory Concept Certificate Based Authentication Smartcard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-certificate-based-authentication-smartcard.md
The Windows smart card sign-in works with the latest preview build of Windows 11
|&#x2705; | &#x2705; | &#x2705; |&#x2705; | >[!NOTE]
->Azure AD CBA supports both certificates on-device as well as external storage like security keys on Windows.
+>Azure AD CBA supports both certificates on-device as well as external storage like security keys on Windows.
+
+## Windows Out of the box experience (OOBE)
+
+Windows OOBE should allow the user to sign in using an external smart card reader and authenticate against Azure AD CBA. The necessary smart card drivers should either be present in Windows OOBE by default or be added to the Windows image before OOBE setup.
## Restrictions and caveats
active-directory Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/onboard-aws.md
This article describes how to onboard an Amazon Web Services (AWS) account on Permissions Management. > [!NOTE]
-> A *global administrator* or *super admin* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
+> A *global administrator* or *root user* (an admin for all authorization system types) can perform the tasks in this article after the global administrator has initially completed the steps provided in [Enable Permissions Management on your Azure Active Directory tenant](onboard-enable-tenant.md).
## Explanation
active-directory Permissions Management Trial User Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/permissions-management-trial-user-guide.md
Use the **Activity triggers** dashboard to view information and set alerts and t
- See data for **identity governance** to ensure inactive users are decommissioned because they left the company or to remove vendor accounts that have been left behind, old consultant accounts, or users who as parts of the Joiner/Mover/Leaver process have moved onto another role and are no longer using their access. Consider this a fail-safe to ensure dormant accounts are removed. - Identify over-permissioned access to later use the Remediation to pursue **Zero Trust and least privileges.**
- **Example of** [**Permissions Management Report**](https://microsoft.sharepoint.com/:v:/t/MicrosoftEntraPermissionsManagementAssets/EQWmUsMsdkZEnFVv-M9ZoagBd4B6JUQ2o7zRTupYrfxbGA)
+ **Example of Permissions Management Analytics Report**
+
+ > [!div class="mx-imgBorder"]
+ > :::image type="content" source="media/permissions-management-trial-user-guide/permissions-management-report-example.png" alt-text="Example of Permissions Management Analytics Report." lightbox="media/permissions-management-trial-user-guide/permissions-management-report-example.png":::
**Actions to try** - [View system reports in the Reports dashboard](../cloud-infrastructure-entitlement-management/product-reports.md)
active-directory Usage Analytics Active Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/usage-analytics-active-tasks.md
Previously updated : 02/23/2022 Last updated : 02/08/2023
When you select **Active Tasks**, the **Analytics** dashboard provides a high-le
The **Active Tasks** table displays the results of your query. - **Task Name**: Provides the name of the task.
- - To view details about the task, select the down arrow in the table.
+ - To view details about the task, select the down arrow next to the task in the table.
- - A **Normal Task** icon displays to the left of the task name if the task is normal (that is, not risky).
- - A **Deleted Task** icon displays to the left of the task name if the task involved deleting data.
- - A **High-Risk Task** icon displays to the left of the task name if the task is high-risk.
+ - An icon (![Image of task icon](media/usage-analytics-active-tasks/normal-task.png)) displays to the left of the task name if the task is a **Normal Task** (that is, not risky).
+ - A highlighted icon (![Image of highlighted task icon](mediash; or if the task is a **High-Risk Task**.
- **Performed on (resources)**: The number of resources on which the task was used.
active-directory Migrate Azure Ad Connect To Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/migrate-azure-ad-connect-to-cloud-sync.md
+
+ Title: 'Migrate Azure AD Connect to Azure AD Connect cloud sync| Microsoft Docs'
+description: Describes steps to migrate Azure AD Connect to Azure AD Connect cloud sync.
++++++ Last updated : 01/17/2023++++++
+# Migrating from Azure AD Connect to Azure AD Connect cloud sync
+
+Azure AD Connect cloud sync is the future for accomplishing your hybrid identity goals for synchronization of users, groups, and contacts to Azure AD. It uses the Azure AD cloud provisioning agent instead of the Azure AD Connect application. If you're currently using Azure AD Connect and wish to move to cloud sync, the following document provides guidance.
+
+## Steps for migrating from Azure AD Connect to cloud sync
+++
+|Step|Description|
+|--|--|
+|Choose the best sync tool|Before moving to cloud sync, you should verify that cloud sync is currently the best synchronization tool for you. You can do this task by going through the wizard [here](https://setup.microsoft.com/azure/add-or-sync-users-to-microsoft-365).|
+|Verify the prerequisites for migrating|The following guidance is only for users who have installed Azure AD Connect using the Express settings and aren't synchronizing devices. Also, verify the cloud sync [prerequisites](how-to-prerequisites.md).|
+|Back up your Azure AD Connect configuration|Before making any changes, you should back up your Azure AD Connect configuration. This way, you can roll back. For more information, see [Import and export Azure AD Connect configuration settings](../hybrid/how-to-connect-import-export-config.md).|
+|Review the migration tutorial|To become familiar with the migration process, review the [Migrate to Azure AD Connect cloud sync for an existing synced AD forest](tutorial-pilot-aadc-aadccp.md) tutorial. This tutorial guides you through the migration process in a sandbox environment.|
+|Create or identify an OU for the migration|Create a new OU or identify an existing OU that contains the users you'll test migration on.|
+|Move users into new OU (optional)|If you're using a new OU, move the users that are in scope for this pilot into that OU now. Before continuing, let Azure AD Connect pick up the changes so that it's synchronizing them in the new OU.|
+|Run PowerShell on OU|You can run the following PowerShell cmdlet to get the counts of the users that are in the pilot OU. </br>`Get-ADUser -Filter * -SearchBase "<DN path of OU>"`</br> Example: `Get-ADUser -Filter * -SearchBase "OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM"`|
+|Stop the scheduler|Before creating new sync rules, you need to stop the Azure AD Connect scheduler. For more information, see [how to stop the scheduler](../hybrid/how-to-connect-sync-feature-scheduler.md#stop-the-scheduler).
+|Create the custom sync rules|In the Azure AD Connect Synchronization Rules editor, you need to create an inbound sync rule that filters out users in the OU you created or identified previously. The inbound sync rule is a join rule with a target attribute of cloudNoFlow. You'll also need an outbound sync rule with a link type of JoinNoFlow and the scoping filter that has the cloudNoFlow attribute set to True. For more information, see [Migrate to Azure AD Connect cloud sync for an existing synced AD forest](tutorial-pilot-aadc-aadccp.md#create-custom-user-inbound-rule) tutorial for how to create these rules.|
+|Install the provisioning agent|If you haven't done so, install the provisioning agent. For more information, see [how to install the agent](how-to-install.md).|
+|Configure cloud sync|Once the agent is installed, you need to configure cloud sync. In the configuration, you need to create a scope to the OU that was created or identified previously. For more information, see [Configuring cloud sync](how-to-configure.md).|
+|Verify pilot users are synchronizing and being provisioned|Verify that the users are now being synchronized in the portal. You can use the PowerShell script below to get a count of the number of users that have the on-premises pilot OU in their distinguished name. This number should match the count of users in the previous step. If you create a new user in this OU, verify that it's being provisioned.|
+|Start the scheduler|Now that you've verified users are provisioning and synchronizing, you can go ahead and start the Azure AD Connect scheduler. For more information, see [how to start the scheduler](../hybrid/how-to-connect-sync-feature-scheduler.md#start-the-scheduler).
+|Schedule your remaining users|Now you should come up with a plan for migrating more users. Use a phased approach so that you can verify that the migrations are successful.|
+|Verify all users are provisioned|As you migrate users, verify that they're provisioning and synchronizing correctly.|
+|Stop Azure AD Connect|Once you've verified that all of your users are migrated, you can turn off the Azure AD Connect synchronization service. Microsoft recommends that you leave the server in a disabled state for a period of time, so you can verify the migration was successful.|
+|Verify everything is good|After a period of time, verify that everything is good.|
+|Decommission the Azure AD Connect server|Once you've verified everything is good you can use the steps below to take the Azure AD Connect server offline.|
++++++
+## Verify Users script
+```PowerShell
+# Filename: VerifyAzureUsers.ps1
+# Description: Counts the number of users in Azure that have a specific on-premises distinguished name.
+#
+# DISCLAIMER:
+# Copyright (c) Microsoft Corporation. All rights reserved. This
+# script is made available to you without any express, implied or
+# statutory warranty, not even the implied warranty of
+# merchantability or fitness for a particular purpose, or the
+# warranty of title or non-infringement. The entire risk of the
+# use or the results from the use of this script remains with you.
+#
+#
+#
+#
++
+Connect-AzureAD -Confirm
+
+#Declare variables
+
+$Users = Get-AzureADUser -All:$true -Filter "DirSyncEnabled eq true"
+$OU = "OU=Sales,DC=contoso,DC=com"
+$counter = 0
+
+#Search users
+
+foreach ($user in $Users) {
+ $test = $User.ExtensionProperty
+ $DN = $test["onPremisesDistinguishedName"]
+ if ($DN -match $OU)
+ {
+ $counter++
+ }
+}
+
+Write-Host "Total Users found:" + $counter
+
+```
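
One way to run the script above, assuming it's saved as `VerifyAzureUsers.ps1` and the AzureAD module is available; edit `$OU` in the script to match your pilot OU:

```PowerShell
# Install the AzureAD module if needed, then run the verification script
Install-Module AzureAD -Scope CurrentUser
.\VerifyAzureUsers.ps1
```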
+## More information
+
+- [What is provisioning?](what-is-provisioning.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md).
+- [Migrate to Azure AD Connect cloud sync for an existing synced AD forest](tutorial-pilot-aadc-aadccp.md)
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced AD forest
+ Title: Tutorial - Migrate to Azure AD Connect cloud sync for an existing synced AD forest
description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
Previously updated : 01/18/2023 Last updated : 01/23/2023
-# Pilot cloud sync for an existing synced AD forest
+# Migrate to Azure AD Connect cloud sync for an existing synced AD forest
-This tutorial walks you through piloting cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
+This tutorial walks you through how you would migrate to cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
+
+> [!NOTE]
+> This article provides information for a basic migration and you should review the [Migrating to cloud sync](migrate-azure-ad-connect-to-cloud-sync.md) documentation before attempting to migrate your production environment.
![Diagram that shows the Azure AD Connect cloud sync flow.](media/tutorial-migrate-aadc-aadccp/diagram-2.png)
This tutorial walks you through piloting cloud sync for a test Active Directory
Before you try this tutorial, consider the following items:
-1. Ensure that you're familiar with basics of cloud sync.
-
-1. Ensure that you're running Azure AD Connect sync version 1.4.32.0 or later and have configured the sync rules as documented.
-
-1. When piloting, you'll be removing a test OU or group from Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
+ 1. Ensure that you're familiar with basics of cloud sync.
+ 2. Ensure that you're running Azure AD Connect sync version 1.4.32.0 or later and have configured the sync rules as documented.
+ 3. When piloting, you'll be removing a test OU or group from Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
- User objects, the objects in Azure AD are soft-deleted and can be restored. - Group objects, the objects in Azure AD are hard-deleted and can't be restored.
-
- A new link type has been introduced in Azure AD Connect sync, which will prevent the deletion in a piloting scenario.
+
+ A new link type has been introduced in Azure AD Connect sync, which will prevent the deletion in a piloting scenario.
-1. Ensure that the objects in the pilot scope have ms-ds-consistencyGUID populated so cloud sync hard matches the objects.
+ 4. Ensure that the objects in the pilot scope have ms-ds-consistencyGUID populated so cloud sync hard matches the objects. A quick check is sketched after this list.
> [!NOTE] > Azure AD Connect sync does not populate *ms-ds-consistencyGUID* by default for group objects.
-1. This configuration is for advanced scenarios. Ensure that you follow the steps documented in this tutorial precisely.
+ 5. This configuration is for advanced scenarios. Ensure that you follow the steps documented in this tutorial precisely.
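
As referenced in item 4 above, here is a quick way to spot pilot objects without *ms-ds-consistencyGUID* populated. This is a sketch that assumes the ActiveDirectory module and reuses the example OU path from the migration article:

```PowerShell
# List pilot users whose mS-DS-ConsistencyGuid attribute is not yet populated
$pilotOU = "OU=Finance,OU=UserAccounts,DC=FABRIKAM,DC=COM"   # example OU path; replace with your pilot OU
Get-ADUser -Filter * -SearchBase $pilotOU -Properties mS-DS-ConsistencyGuid |
    Where-Object { -not $_.'mS-DS-ConsistencyGuid' } |
    Select-Object Name, DistinguishedName
```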
## Prerequisites
The following are prerequisites required for completing this tutorial
As a minimum, you should have [Azure AD connect](https://www.microsoft.com/download/details.aspx?id=47594) 1.4.32.0. To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md).
+## Back up your Azure AD Connect configuration
+Before making any changes, you should back up your Azure AD Connect configuration. This way, you can roll back. See [Import and export Azure AD Connect configuration settings](../hybrid/how-to-connect-import-export-config.md) for more information.
+ ## Stop the scheduler Azure AD Connect sync synchronizes changes occurring in your on-premises directory using a scheduler. In order to modify and add custom rules, you want to disable the scheduler so that synchronizations won't run while you're working making the changes. To stop the scheduler, use the following steps:
active-directory Tutorial Blazor Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-server.md
Previously updated : 12/13/2022 Last updated : 02/09/2023 #Customer intent: As a developer, I want to add authentication to a Blazor app.
Finally, because the app calls a protected API (in this case Microsoft Graph), i
## Create the app using the .NET CLI
-Run the following command to download the templates for `Microsoft.Identity.Web`, which we'll make use of in this tutorial.
-
-```dotnetcli
-dotnet new --install Microsoft.Identity.Web.ProjectTemplates
-```
-
-Then, run the following command to create the application. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name.
+To create the application, run the following command. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name.
```dotnetcli dotnet new blazorserver --auth SingleOrg --calls-graph -o {APP NAME} --client-id "{CLIENT ID}" --tenant-id "{TENANT ID}" --domain "{DOMAIN}" -f net7.0
After granting consent, navigate to the "Fetch data" page to read some email.
Learn about calling building web apps that sign in users in our multi-part scenario series:
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Scenario: Web app that signs in users](scenario-web-app-sign-user-overview.md)
active-directory Tutorial Blazor Webassembly https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/tutorial-blazor-webassembly.md
Previously updated : 12/14/2022 Last updated : 02/09/2023 #Customer intent: As a developer, I want to add authentication and authorization to a Blazor WebAssembly app and call Microsoft Graph.
Every app that uses Azure AD for authentication must be registered with Azure AD
## Create the app using the .NET Core CLI
-To create the app, you need the latest Blazor templates. You can install them for the .NET Core CLI with the following command:
-
-```dotnetcli
-dotnet new install Microsoft.Identity.Web.ProjectTemplates
-```
-
-Then run the following command to create the application. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name.
+To create the application, run the following command. Replace the placeholders in the command with the proper information from your app's overview page and execute the command in a command shell. The output location specified with the `-o|--output` option creates a project folder if it doesn't exist and becomes part of the app's name.
```dotnetcli dotnet new blazorwasm --auth SingleOrg --calls-graph -o {APP NAME} --client-id "{CLIENT ID}" --tenant-id "{TENANT ID}" -f net7.0
After granting consent, navigate to the "Fetch data" page to read some email.
## Next steps
-> [!div class="nextstepaction"]
+> [!div class="nextstepaction"]
> [Microsoft identity platform best practices and recommendations](./identity-platform-integration-checklist.md)
active-directory 1 Secure Access Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/1-secure-access-posture.md
The primary goals of delegating access are:
Levels of control can be accomplished through various methods, depending on your version of Azure AD and Microsoft 365. * [Azure AD plans and pricing](https://www.microsoft.com/security/business/identity-access-management/azure-ad-pricing)
-* [Microsoft 365](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans).
+* [Compare Microsoft 365 Enterprise pricing](https://www.microsoft.com/microsoft-365/compare-microsoft-365-enterprise-plans)
#### Reduce attack surface
active-directory 4 Secure Access Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/4-secure-access-groups.md
Title: Secure external access with groups in Azure Active Directory and Microsoft 365 description: Azure Active Directory and Microsoft 365 Groups can be used to increase security when external users access your resources. -+ Previously updated : 02/01/2023 Last updated : 02/09/2023
Hybrid organizations have infrastructure for on-premises and an Azure AD. Hybrid
## Microsoft 365 Groups
-Microsoft 365 Groups is the membership service for access across Microsoft 365. They can be created from the Azure portal, or the Microsoft 365 portal. When you create a Microsoft 365 Group, you grant access to a group of resources for collaboration.
+Microsoft 365 Groups is the membership service for access across Microsoft 365. They can be created from the Azure portal, or the Microsoft 365 admin center. When you create a Microsoft 365 Group, you grant access to a group of resources for collaboration.
Learn more: * [Overview of Microsoft 365 Groups for administrators](/microsoft-365/admin/create-groups/office-365-groups?view=o365-worldwide&preserve-view=true) * [Create a group in the Microsoft 365 admin center](/microsoft-365/admin/create-groups/create-groups?view=o365-worldwide&preserve-view=true) * [Azure portal](https://portal.azure.com/)
-* [Microsoft 365 portal](https://admin.microsoft.com/)
+* [Microsoft 365 admin center](https://admin.microsoft.com/)
### Microsoft 365 Groups roles
active-directory 9 Secure Access Teams Sharepoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/9-secure-access-teams-sharepoint.md
Teams differentiates between external users (outside your organization) and gues
Learn more: [Use guest access and external access to collaborate with people outside your organization](/microsoftteams/communicate-with-users-from-other-organizations).
-> [!NOTE]
-> The External Identities collaboration feaure in Azure AD controls permissions. You can increase restrictions in Teams, but restrictions can't be lower than Azure AD settings.
+The External Identities collaboration feature in Azure AD controls permissions. You can increase restrictions in Teams, but restrictions can't be lower than Azure AD settings.
Learn more:
Learn more:
* [SharePoint and OneDrive integration with Azure AD B2B](/sharepoint/sharepoint-azureb2b-integration) * [B2B collaboration overview](../external-identities/what-is-b2b.md)
-> [!NOTE]
-> If you enable Azure AD B2B integration, then SharePoint and OneDrive sharing is subject to the Azure AD organizational relationships settings, such as **Members can invite** and **Guests can invite**.
+If you enable Azure AD B2B integration, then SharePoint and OneDrive sharing is subject to the Azure AD organizational relationships settings, such as **Members can invite** and **Guests can invite**.
### Sharing policies in SharePoint and OneDrive
active-directory Service Accounts Governing Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-governing-azure.md
Previously updated : 02/06/2023 Last updated : 02/09/2023
Learn more:
> [!NOTE] > We do not recommend user accounts as service accounts because they are less secure. This includes on-premises service accounts synced to Azure AD, because they aren't converted to service principals. Instead, we recommend managed identities, or service principals, and the use of Conditional Access.
-[What is Conditional Access?](../conditional-access/overview.md)
+Learn more: [What is Conditional Access?](../conditional-access/overview.md)
## Plan your service account
We recommend the following practices for service account privileges.
### Permissions * Don't assign built-in roles to service accounts
- * Instead, use the [OAuth2 permission grant model for Microsoft Graph](/graph/api/resources/oauth2permissiongrant)
+ * See, [oAuth2PermissionGrant resource type](/graph/api/resources/oauth2permissiongrant)
* The service principal is assigned a privileged role * [Create and assign a custom role in Azure Active Directory](../roles/custom-create.md) * Don't include service accounts as members of any groups with elevated permissions
-* [Use PowerShell to enumerate members of privileged roles](/powershell/module/azuread/get-azureaddirectoryrolemember):
+ * See, [Get-AzureADDirectoryRoleMember](/powershell/module/azuread/get-azureaddirectoryrolemember):
>`Get-AzureADDirectoryRoleMember`, and filter for objectType "Service Principal", or use</br> >`Get-AzureADServicePrincipal | % { Get-AzureADServiceAppRoleAssignment -ObjectId $_ }`
-* [Use OAuth 2.0 scopes](../develop/v2-permissions-and-consent.md) to limit the functionality a service account can access on a resource
+* See, [Introduction to permissions and consent](../develop/v2-permissions-and-consent.md) to limit the functionality a service account can access on a resource
* Service principals and managed identities can use OAuth 2.0 scopes in a delegated context impersonating a signed-on user, or as service account in the application context. In the application context, no one is signed in. * Confirm the scopes service accounts request for resources * If an account requests Files.ReadWrite.All, evaluate if it needs File.Read.All
- * [Overview of Microsoft Graph permissions](/graph/permissions-reference)
+ * [Microsoft Graph permissions reference](/graph/permissions-reference)
* Ensure you trust the application developer, or API, with the requested access ### Duration
Use one of the following monitoring methods:
* Azure AD Sign-In Logs in the Azure AD portal * Export the Azure AD Sign-In Logs to
- * [Azure Storage](../../storage/index.yml)
- * [Azure Event Hubs](../../event-hubs/index.yml), or
+ * [Azure Storage documentation](../../storage/index.yml)
+ * [Azure Event Hubs documentation](../../event-hubs/index.yml), or
* [Azure Monitor Logs overview](../../azure-monitor/logs/data-platform-logs.md) Use the following screenshot to see service principal sign-ins.
We recommend you export Azure AD sign-in logs, and then import them into a secur
Regularly review service account permissions and accessed scopes to see if they can be reduced or eliminated.
-* Use [PowerShell](/powershell/module/azuread/get-azureadserviceprincipaloauth2permissiongrant) to [build automation to check and document](https://gist.github.com/psignoret/41793f8c6211d2df5051d77ca3728c09) scopes for service account
-* Use PowerShell to [review service principal credentials](https://github.com/AzureAD/AzureADAssessment) and confirm validity
+* See, [Get-AzureADServicePrincipalOAuth2PermissionGrant](/powershell/module/azuread/get-azureadserviceprincipaloauth2permissiongrant)
+ * [Script to list all delegated permissions and application permissions in Azure AD](https://gist.github.com/psignoret/41793f8c6211d2df5051d77ca3728c09) to check and document scopes for service accounts
+* See, [Azure AD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment) to review service principal credentials and confirm validity
* Don't set service principal credentials to **Never expire** * Use certificates or credentials stored in Azure Key Vault, when possible
- * [what is Azure Key Vault?](../../key-vault/general/basic-concepts.md)
+ * [What is Azure Key Vault?](../../key-vault/general/basic-concepts.md)
-The free PowerShell sample collects service principal OAuth2 grants and credential information, records them in a comma-separated values (CSV) file, and a Power BI sample dashboard. For more information, see [Microsoft Azure AD Assessment, Assessor Guide](https://github.com/AzureAD/AzureADAssessment).
+The free PowerShell sample collects service principal OAuth2 grants and credential information, records them in a comma-separated values (CSV) file, and a Power BI sample dashboard. For more information, see [Azure AD/AzureADAssessment](https://github.com/AzureAD/AzureADAssessment).
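
For a quick spot check of credential expiry outside that tooling, a minimal sketch with the AzureAD module (the output fields are illustrative):

```PowerShell
# List service principal key and password credentials with their expiry dates
Get-AzureADServicePrincipal -All:$true | ForEach-Object {
    $sp = $_
    @($sp.KeyCredentials) + @($sp.PasswordCredentials) | Where-Object { $_ } | ForEach-Object {
        [pscustomobject]@{
            ServicePrincipal = $sp.DisplayName
            CredentialType   = $_.GetType().Name   # KeyCredential or PasswordCredential
            EndDate          = $_.EndDate
        }
    }
} | Sort-Object EndDate
```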
### Recertify service account use
Deprovisioning includes the following tasks:
After the associated application or script is deprovisioned:
-* [Monitor sign-ins](../reports-monitoring/concept-sign-ins.md) and resource access by the service account
+* Monitor [Sign-in logs in Azure AD](../reports-monitoring/concept-sign-ins.md) and resource access by the service account
* If the account is active, determine how it's being used before continuing * For a managed service identity, disable service account sign-in, but don't remove it from the directory * Revoke service account role assignments and OAuth2 consent grants
active-directory Service Accounts Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-principal.md
Title: Securing service principals in Azure Active Directory description: Find, assess, and secure service principals. -+ Previously updated : 11/28/2022 Last updated : 02/08/2023
-# Securing service principals
+# Securing service principals in Azure Active Directory
-An Azure Active Directory (Azure AD) [service principal](../develop/app-objects-and-service-principals.md) is the local representation of an application object in a single tenant or directory. It functions as the identity of the application instance. Service principals define who can access the application, and what resources the application can access. A service principal is created in each tenant where the application is used, and references the globally unique application object. The tenant secures the service principal sign-in and access to resources.
+An Azure Active Directory (Azure AD) service principal is the local representation of an application object in a tenant or directory. It's the identity of the application instance. Service principals define application access, and the resources the application accesses. A service principal is created in each tenant where the application is used, and references the globally unique application object. The tenant secures the service principal sign-in and access to resources.
+
+Learn more: [Application and service principal objects in Azure AD](../develop/app-objects-and-service-principals.md)
### Tenant-service principal relationships A single-tenant application has one service principal in its home tenant. A multi-tenant web application or API requires a service principal in each tenant. A service principal is created when a user from that tenant consents to use of the application or API. This consent creates a one-to-many relationship between the multi-tenant application and its associated service principals.
-A multi-tenant application is homed in a single tenant and has instances in other tenants. Most software-as-a-service (SaaS) applications accommodate multi-tenancy. Use service principals to ensure the needed security posture for the application and its users in single-tenant and multi-tenant scenarios.
+A multi-tenant application is homed in a tenant and has instances in other tenants. Most software-as-a-service (SaaS) applications accommodate multi-tenancy. Use service principals to ensure the needed security posture for the application, and its users, in single- and multi-tenant scenarios.
## ApplicationID and ObjectID
-An application instance has two properties: the ApplicationID (also known as ClientID) and the ObjectID.
+An application instance has two properties: the ApplicationID (or ClientID) and the ObjectID.
> [!NOTE]
-> It's possible the terms application and service principal are used interchangeably when referring to an application in the context of authentication-related tasks. However, they are two representations of applications in Azure AD.
+> The terms **application** and **service principal** are used interchangeably, when referring to an application in authentication tasks. However, they are two representations of applications in Azure AD.
-The ApplicationID represents the global application and is the same for application instances across tenants. The ObjectID is a unique value for an application object. As with users, groups, and other resources, the ObjectID helps to identify an application instance in Azure AD.
+The ApplicationID represents the global application and is the same for application instances, across tenants. The ObjectID is a unique value for an application object. As with users, groups, and other resources, the ObjectID helps to identify an application instance in Azure AD.
+
+To learn more, see [Application and service principal relationship](../develop/app-objects-and-service-principals.md)
+
+### Create an application and its service principal object
-To learn more, see [Application and service principal relationship](../develop/app-objects-and-service-principals.md).
+You can create an application and its service principal object (ObjectID) in a tenant using:
-You can create an application and its service principal object (ObjectID) in a tenant using Azure PowerShell, Azure CLI, Microsoft Graph, the Azure portal, and other tools.
+* Azure PowerShell
+* Azure command-line interface (CLI)
+* Microsoft Graph
+* The Azure portal
+* Other tools
-![Screen shot showing a new application registration, with the Application ID and Object ID fields highlighted.](./media/securing-service-accounts/secure-principal-image-1.png)
+![Screenshot of Application or Client ID and Object ID on the New App page.](./media/securing-service-accounts/secure-principal-image-1.png)
## Service principal authentication
-When using service principalsΓÇöclient certificates and client secrets, there are two mechanisms for authentication.
+There are two mechanisms for authentication when using service principals: client certificates and client secrets.
-![ Screen shot of New App page showing the Certificates and client secrets areas highlighted.](./media/securing-service-accounts/secure-principal-certificates.png)
+![Screenshot of Certificates and Client secrets under New App, Certificates and secrets.](./media/securing-service-accounts/secure-principal-certificates.png)
-Certificates are more secure, therefore use them, if possible. Unlike client secrets, client certificates can't be embedded in code, accidentally. When possible, use Azure Key Vault for certificate and secrets management to encrypt the following assets with keys protected by hardware security modules:
+Because certificates are more secure, it's recommended you use them when possible. Unlike client secrets, client certificates can't accidentally be embedded in code. When possible, use Azure Key Vault for certificate and secrets management to encrypt assets with keys protected by hardware security modules:
* Authentication keys
* Storage account keys
* Data encryption keys
* .pfx files
* Passwords
-For more information on Azure Key Vault and how to use it for certificate and secret management, see
-[About Azure Key Vault](../../key-vault/general/overview.md) and [Assign a Key Vault access policy using the Azure portal](../../key-vault/general/assign-access-policy-portal.md).
+For more information on Azure Key Vault and how to use it for certificate and secret management, see:
+
+* [About Azure Key Vault](../../key-vault/general/overview.md)
+* [Assign a Key Vault access policy](../../key-vault/general/assign-access-policy.md)
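
As an illustration only, here is a minimal sketch of keeping a client secret in Key Vault rather than in code; the vault and secret names are hypothetical, and `Get-AzKeyVaultSecret -AsPlainText` assumes a recent Az.KeyVault version:

```PowerShell
# Store the client secret once (vault and secret names are hypothetical)
$secretValue = Read-Host -AsSecureString -Prompt "Client secret"
Set-AzKeyVaultSecret -VaultName "contoso-kv" -Name "automation-sp-secret" -SecretValue $secretValue

# Retrieve it at run time instead of embedding it in a script
$clientSecret = Get-AzKeyVaultSecret -VaultName "contoso-kv" -Name "automation-sp-secret" -AsPlainText
```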
### Challenges and mitigations
-Use the following table to match challenges and mitigations, when using service principals.
+When using service principals, use the following table to match challenges and mitigations.
-| ChallengesΓÇï| MitigationsΓÇï |
+| Challenge| Mitigation|
| - | - |
-| Access reviews for service principals assigned to privileged roles| This functionality is in preview, and not widely available |
-| Reviews service principal access| Manual check of resource access control list using the Azure portal |
-| Over-permissioned service principals| When you create automation service accounts or or service principals, provide permissions required for the task. Evaluate service principals to reduce privileges |
-|Identify modifications to service principal credentials or authentication methods |Use the Sensitive Operations Report workbook to mitigate. See also the Tech Community blog post [Azure AD workbook to help you assess Solorigate risk](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718).|
+| Access reviews for service principals assigned to privileged roles| This functionality is in preview |
+| Service principal access reviews| Manual check of resource access control list using the Azure portal |
+| Over-permissioned service principals| When you create automation service accounts, or service principals, grant permissions for the task. Evaluate service principals to reduce privileges. |
+|Identify modifications to service principal credentials or authentication methods | - See, [Sensitive operations report workbook](../reports-monitoring/workbook-sensitive-operations-report.md) </br> - See the Tech Community blog post, [Azure AD workbook to help you assess Solorigate risk](https://techcommunity.microsoft.com/t5/azure-active-directory-identity/azure-ad-workbook-to-help-you-assess-solorigate-risk/ba-p/2010718)|
## Find accounts using service principals
-Run the following commands to find accounts using service principals with Azure CLI or PowerShell.
-
-Azure CLI:
+To find accounts that use service principals, run the following commands with Azure CLI or PowerShell.
-`az ad sp list`
+* Azure CLI - `az ad sp list`
+* PowerShell - `Get-AzureADServicePrincipal -All:$true`
-PowerShell:
-
-`Get-AzureADServicePrincipal -All:$true`
-
-For more information see [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal).
+For more information, see [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal)
## Assess service principal security
-To assess the security of your service principals, ensure you evaluate privileges and credential storage.
-
-Mitigate potential challenges using the following information.
+To assess the security, evaluate privileges and credential storage. Use the following table to help mitigate challenges:
-|Challenges | Mitigations|
+|Challenge | Mitigation|
| - | - |
-| Detect the user who consented to a multi-tenant app, and detect illicit consent grants to a multi-tenant app | Run the following PowerShell to find multi-tenant apps.<br>`Get-AzureADServicePrincipal -All:$true ? {$_.Tags -eq WindowsAzureActiveDirectoryIntegratedApp"}`<br>Disable user consent. <br>Allow user consent from verified publishers, for selected permissions (recommended) <br> Configure them in the user context. Use their tokens to trigger the service principal.|
+| Detect the user who consented to a multi-tenant app, and detect illicit consent grants to a multi-tenant app | - Run the following PowerShell to find multi-tenant apps <br>`Get-AzureADServicePrincipal -All:$true \| ? {$_.Tags -eq "WindowsAzureActiveDirectoryIntegratedApp"}`</br> - Disable user consent </br> - Allow user consent from verified publishers, for selected permissions (recommended) </br> - Configure them in the user context </br> - Use their tokens to trigger the service principal|
|Use of a hard-coded shared secret in a script using a service principal|Use a certificate|
-|Tracking who uses the certificate or the secretΓÇï| Monitor the service principal sign-ins using the Azure AD sign-in logs|
-Can't manage service principal sign-in with Conditional Access| Monitor the sign-ins using the Azure AD sign-in logs
-| Contributor is the default Azure role-based access control (RBAC) role|Evaluate needs and apply the role with the least possible permissions|
+|Tracking who uses the certificate or the secret| Monitor the service principal sign-ins using the Azure AD sign-in logs|
+|Can't manage service principal sign-in with Conditional Access| Monitor the sign-ins using the Azure AD sign-in logs
+| Contributor is the default Azure role-based access control (Azure RBAC) role|Evaluate needs and apply the least possible permissions|
-## Move from a user account to a service principalΓÇï
+Learn more: [What is Conditional Access?](../conditional-access/overview.md)
-If you're using an Azure user account as a service principal, evaluate if you can move to a [Managed Identity](../../app-service/overview-managed-identity.md?tabs=dotnet) or a service principal. If you can't use a managed identity, provision a service principal with enough permissions and scope to run the required tasks. You can create a service principal by [registering an application](../develop/howto-create-service-principal-portal.md), or with [PowerShell](../develop/howto-authenticate-service-principal-powershell.md).
+## Move from a user account to a service principal
-When using Microsoft Graph, check the API documentation. See, [Create an Azure service principal](/powershell/azure/create-azure-service-principal-azureps). Ensure the permission type for application is supported.
+If you're using an Azure user account as a service principal, evaluate if you can move to a managed identity or a service principal. If you can't use a managed identity, grant a service principal enough permissions and scope to run the required tasks. You can create a service principal by registering an application, or with PowerShell.
-## Next steps
+When using Microsoft Graph, check the API documentation. Ensure the permission type for application is supported. </br>See, [Create servicePrincipal](/graph/api/serviceprincipal-post-serviceprincipals?view=graph-rest-1.0&tabs=http&preserve-view=true)
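
For example, a hedged sketch of creating a service principal with the Az PowerShell module; the display name is hypothetical, and recent Az.Resources versions also create the application object for you:

```PowerShell
# Create an application and its service principal (display name is hypothetical)
$sp = New-AzADServicePrincipal -DisplayName "contoso-automation-sp"

# Inspect the result; the application (client) ID is AppId (ApplicationId in older Az.Resources versions)
$sp | Format-List DisplayName, AppId, Id
```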
-Learn more about service principals:
+Learn more:
-[Create a service principal](../develop/howto-create-service-principal-portal.md)
+* [How to use managed identities for App Service and Azure Functions](../../app-service/overview-managed-identity.md?tabs=dotnet)
+* [Create an Azure AD application and service principal that can access resources](../develop/howto-create-service-principal-portal.md)
+* [Use Azure PowerShell to create a service principal with a certificate](../develop/howto-authenticate-service-principal-powershell.md)
-[Monitor service principal sign-ins](../reports-monitoring/concept-sign-ins.md)
-
-Learn more about securing service accounts:
+## Next steps
-[Introduction to Azure service accounts](service-accounts-introduction-azure.md)
+Learn more about service principals:
-[Securing managed identities](service-accounts-managed-identities.md)
+* [Create an Azure AD application and service principal that can access resources](../develop/howto-create-service-principal-portal.md)
+* [Sign-in logs in Azure AD](../reports-monitoring/concept-sign-ins.md)
-[Governing Azure service accounts](service-accounts-governing-azure.md)
+Secure service accounts:
-[Introduction to on-premises service accounts](service-accounts-on-premises.md)
+* [Securing cloud-based service accounts](service-accounts-introduction-azure.md)
+* [Securing managed identities in Azure AD](service-accounts-managed-identities.md)
+* [Governing Azure AD service accounts](service-accounts-governing-azure.md)
+* [Securing on-premises service accounts](service-accounts-on-premises.md)
Conditional Access:
-Use Conditional Access to block service principals from untrusted locations. See, [Create a location-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy).
+Use Conditional Access to block service principals from untrusted locations.
+
+See, [Conditional Access for workload identities](../conditional-access/workload-identity.md#create-a-location-based-conditional-access-policy)
active-directory Service Accounts Standalone Managed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-standalone-managed.md
Title: Secure standalone managed service accounts | Azure Active Directory
-description: A guide to securing standalone managed service accounts.
+ Title: Secure standalone managed service accounts
+description: Learn when to use, how to assess, and how to secure standalone managed service accounts (sMSAs)
-+ Previously updated : 08/20/2022 Last updated : 02/08/2023
# Secure standalone managed service accounts
-Standalone managed service accounts (sMSAs) are managed domain accounts that you use to help secure one or more services that run on a server. They can't be reused across multiple servers. sMSAs provide automatic password management, simplified service principal name (SPN) management, and the ability to delegate management to other administrators.
+Standalone managed service accounts (sMSAs) are managed domain accounts that help secure services running on a server. They can't be reused across multiple servers. sMSAs have automatic password management, simplified service principal name (SPN) management, and delegated management to administrators.
-In Active Directory, sMSAs are tied to a specific server that runs a service. You can find these accounts listed in the Active Directory Users and Computers snap-in of the Microsoft Management Console.
+In Active Directory (AD), sMSAs are tied to a server that runs a service. You can find accounts in the Active Directory Users and Computers snap-in in Microsoft Management Console.
-![Screenshot of the Active Directory users and computers snap-in showing the managed service accounts OU.](./media/securing-service-accounts/secure-standalone-msa-image-1.png)
+ ![Screenshot of a service name and type under Active Directory Users and Computers.](./media/securing-service-accounts/secure-standalone-msa-image-1.png)
-Managed service accounts were introduced with Windows Server 2008 R2 Active Directory Schema, and they require at least Windows Server 2008 R2ΓÇï.
+> [!NOTE]
+> Managed service accounts were introduced in Windows Server 2008 R2 Active Directory Schema, and they require Windows Server 2008 R2, or a later version.
-## Benefits of using sMSAs
+## sMSA benefits
-sMSAs offer greater security than user accounts that are used as service accounts. At the same time, to help reduce administrative overhead, they:
+sMSAs have greater security than user accounts used as service accounts. They help reduce administrative overhead:
-* **Set strong passwords**: sMSAs use 240-byte, randomly generated complex passwords. The complexity and length of sMSA passwords minimizes the likelihood of a service getting compromised by brute force or dictionary attacks.
+* Set strong passwords - sMSAs use 240 byte, randomly generated complex passwords
+ * The complexity minimizes the likelihood of compromise by brute force or dictionary attacks
+* Cycle passwords regularly - Windows changes the sMSA password every 30 days.
+  * Service and domain administrators don't need to schedule password changes or manage the associated downtime
+* Simplify SPN management - SPNs are updated if the domain functional level is Windows Server 2008 R2. The SPN is updated when you:
+ * Rename the host computer account
+ * Change the host computer domain name server (DNS) name
+ * Use PowerShell to add or remove other sam-accountname or dns-hostname parameters
+ * See, [Set-ADServiceAccount](/powershell/module/activedirectory/set-adserviceaccount)
-* **Cycle passwords regularly**: Windows automatically changes the sMSA password every 30 days. Service and domain administrators donΓÇÖt need to schedule password changes or manage the associated downtime.
+## Using sMSAs
-* **Simplify SPN management**: Service principal names are automatically updated if the domain functional level is Windows Server 2008 R2. For instance, the service principal name is automatically updated when you:
- * Rename the host computer account.
- * Change the domain name server (DNS) name of the host computer.
- * Add or remove other sam-accountname or dns-hostname parameters by using [PowerShell](/powershell/module/activedirectory/set-adserviceaccount).
-
-## When to use sMSAs
-
-sMSAs can simplify management and security tasks. Use sMSAs when you have one or more services deployed to a single server and you can't use a group managed service account (gMSA).
+Use sMSAs to simplify management and security tasks. sMSAs are useful when services are deployed to a server and you can't use a group managed service account (gMSA).
> [!NOTE]
-> Although you can use sMSAs for more than one service, we recommend that each service have its own identity for auditing purposes.
+> You can use sMSAs for more than one service, but it's recommended that each service has an identity for auditing.
-If the creator of the software canΓÇÖt tell you whether it can use an MSA, you must test your application. To do so, create a test environment and ensure that it can access all required resources. For more information, see [Create and install an sMSA](/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting).
+If the software creator can't tell you whether the application can use an MSA, test the application. Create a test environment and ensure it accesses required resources.
-### Assess the security posture of sMSAs
+Learn more: [Managed Service Accounts: Understanding, Implementing, Best Practices, and Troubleshooting](/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting)
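
For orientation, a minimal sketch of creating and installing an sMSA with the ActiveDirectory RSAT module; the account and computer names are hypothetical, and `Install-ADServiceAccount` runs on the host server itself:

```PowerShell
# Create a standalone MSA (the -RestrictToSingleComputer switch creates an sMSA rather than a gMSA)
New-ADServiceAccount -Name "svc-app01" -RestrictToSingleComputer

# Link the account to its host computer, then install it on that server
Add-ADComputerServiceAccount -Identity "APP01" -ServiceAccount "svc-app01"
Install-ADServiceAccount -Identity "svc-app01"
```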
-sMSAs are inherently more secure than standard user accounts, which require ongoing password management. However, it's important to consider sMSAsΓÇÖ scope of access as part of their overall security posture.
+### Assess sMSA security posture
-To see how to mitigate potential security issues posed by sMSAs, refer to the following table:
+Consider the sMSA scope of access as part of the security posture. To mitigate potential security issues, see the following table:
| Security issue| Mitigation |
| - | - |
-| sMSA is a member of privileged groups. | <li>Remove the sMSA from elevated privileged groups, such as Domain Admins.<li>Use the *least privileged* model, and grant the sMSA only the rights and permissions it requires to run its services.<li>If you're unsure of the required permissions, consult the service creator. |
-| sMSA has read/write access to sensitive resources. | <li>Audit access to sensitive resources.<li>Archive audit logs to a Security Information and Event Management (SIEM) program, such as Azure Log Analytics or Microsoft Sentinel, for analysis.<li>Remediate resource permissions if an undesirable level of access is detected. |
-| By default, the sMSA password rollover frequency is 30 days. | You can use group policy to tune the duration, depending on enterprise security requirements. To set the password expiration duration, use the following path:<br>*Computer Configuration\Policies\Windows Settings\Security Settings\Security Options*. For domain member, use **Maximum machine account password age**. |
-| | |
-
+| sMSA is a member of privileged groups | - Remove the sMSA from elevated privileged groups, such as Domain Admins</br> - Use the least-privileged model </br> - Grant the sMSA only the rights and permissions required to run its services</br> - If you're unsure about permissions, consult the service creator|
+| sMSA has read/write access to sensitive resources | - Audit access to sensitive resources</br> - Archive audit logs to a security information and event management (SIEM) program, such as Azure Log Analytics or Microsoft Sentinel </br> - Remediate resource permissions if an undesirable level of access is detected |
+| By default, the sMSA password rollover frequency is 30 days | Use group policy to tune the duration, depending on enterprise security requirements. To set the password expiration duration, go to:<br>Computer Configuration>Policies>Windows Settings>Security Settings>Security Options. For domain member, use **Maximum machine account password age**. |
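To confirm that rollover is happening on the schedule you expect, you can check when an sMSA password was last set. A minimal sketch, with an illustrative account name:

```PowerShell
# PasswordLastSet reflects the most recent automatic password rotation
Get-ADServiceAccount -Identity 'svc-HRDataConnector' -Properties PasswordLastSet |
    Select-Object Name, PasswordLastSet
```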
-
-### Challenges with sMSAs
-
-The challenges associated with sMSAs are as follows:
+### sMSA challenges
+
+The following table lists sMSA challenges and their mitigations.
| Challenge| Mitigation |
| - | - |
-| sMSAs can be used on a single server only. | Use a gMSA if you need to use the account across servers. |
-| sMSAs can't be used across domains. | Use a gMSA if you need to use the account across domains. |
-| Not all applications support sMSAs. | Use a gMSA if possible. Otherwise, use a standard user account or a computer account, as recommended by the application creator. |
-| | |
-
+| sMSAs can be used on a single server only | Use a gMSA if you need the account across servers |
+| sMSAs can't be used across domains | Use a gMSA if you need the account across domains |
+| Not all applications support sMSAs| Use a gMSA, if possible. Otherwise, use a standard user account or a computer account, as recommended by the application creator|
## Find sMSAs
-On any domain controller, run DSA.msc, and then expand the managed service accounts container to view all sMSAs.
+On a domain controller, run DSA.msc, and then expand the managed service accounts container to view all sMSAs.
To return all sMSAs and gMSAs in the Active Directory domain, run the following PowerShell command: `Get-ADServiceAccount -Filter *`
-To return only sMSAs in the Active Directory domain, run the following command:
+To return only sMSAs in the Active Directory domain, run the following command:
`Get-ADServiceAccount -Filter * | where { $_.objectClass -eq "msDS-ManagedServiceAccount" }`

## Manage sMSAs
-To manage your sMSAs, you can use the following Active Directory PowerShell cmdlets:
+To manage your sMSAs, you can use the following AD PowerShell cmdlets:
`Get-ADServiceAccount` `Install-ADServiceAccount`
To manage your sMSAs, you can use the following Active Directory PowerShell cmdl
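For example, the following minimal sketch creates a standalone MSA, links it to a host, and installs it; the account and computer names are illustrative assumptions.

```PowerShell
# Create a standalone MSA (not a gMSA) and associate it with a host computer
New-ADServiceAccount -Name 'svc-HRDataConnector' -RestrictToSingleComputer
Add-ADComputerServiceAccount -Identity 'APP01' -ServiceAccount 'svc-HRDataConnector'

# Run these on APP01 itself to install and verify the account
Install-ADServiceAccount -Identity 'svc-HRDataConnector'
Test-ADServiceAccount -Identity 'svc-HRDataConnector'
```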
## Move to sMSAs
-If an application service supports sMSAs but not gMSAs, and you're currently using a user account or computer account for the security context, [Create and install an sMSA](/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting) on the server.
+If an application service supports sMSAs, but not gMSAs, and you're using a user account or computer account for the security context, see</br>
+[Managed Service Accounts: Understanding, Implementing, Best Practices, and Troubleshooting](/archive/blogs/askds/managed-service-accounts-understanding-implementing-best-practices-and-troubleshooting).
-Ideally, you would move resources to Azure and use Azure Managed Identities or service principals.
+If possible, move resources to Azure and use Azure managed identities, or service principals.
## Next steps
-To learn more about securing service accounts, see the following articles:
+To learn more about securing service accounts, see:
-* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+* [Securing on-premises service accounts](service-accounts-on-premises.md)
* [Secure group managed service accounts](service-accounts-group-managed.md)
-* [Secure computer accounts](service-accounts-computer.md)
-* [Secure user accounts](service-accounts-user-on-premises.md)
+* [Secure on-premises computer accounts with AD](service-accounts-computer.md)
+* [Secure user-based service accounts in AD](service-accounts-user-on-premises.md)
* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)
active-directory Service Accounts User On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/service-accounts-user-on-premises.md
Title: Secure user-based service accounts | Azure Active Directory
-description: A guide to securing user-based service accounts.
+ Title: Secure user-based service accounts in Active Directory
+description: Learn how to locate, assess, and mitigate security issues for user-based service accounts
-+ Previously updated : 08/20/2022 Last updated : 02/09/2023
# Secure user-based service accounts in Active Directory
-Using on-premises user accounts is the traditional approach to helping secure services that run on Windows. Use these accounts as a last resort when group managed service accounts (gMSAs) and standalone managed service accounts (sMSAs) aren't supported by your service. For information about selecting the best type of account to use, see [Introduction to on-premises service accounts](service-accounts-on-premises.md).
+On-premises user accounts were the traditional approach to help secure services running on Windows. Today, use these accounts only when group managed service accounts (gMSAs) and standalone managed service accounts (sMSAs) aren't supported by your service. For information about the account type to use, see [Securing on-premises service accounts](service-accounts-on-premises.md).
-You might also want to investigate whether you can move your service to use an Azure service account such as a managed identity or a service principal.
+You can investigate moving your service to an Azure service account, such as a managed identity or a service principal.
-You can create on-premises user accounts to provide a security context for the services and permissions that the accounts require to access local and network resources. On-premises user accounts require manual password management, much like any other Active Directory user account. Service and domain administrators are required to observe strong password management processes to help keep these accounts secure.
+Learn more:
-When you create a user account as a service account, use it for a single service only. Name it in a way that makes it clear that it's a service account and which service it's for.
+* [What are managed identities for Azure resources?](../managed-identities-azure-resources/overview.md)
+* [Securing service principals in Azure Active Directory](service-accounts-principal.md)
+
+You can create on-premises user accounts to provide a security context for services and the permissions the accounts require to access local and network resources. On-premises user accounts require manual password management, like other Active Directory (AD) user accounts. Service and domain administrators are required to maintain strong password management processes to help keep accounts secure.
+
+When you create a user account as a service account, use it for one service. Use a naming convention that clarifies it's a service account, and the service it's related to.
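A minimal sketch of creating such an account with PowerShell follows; the account name, OU, and description are illustrative assumptions.

```PowerShell
# Create a dedicated service user with an identifying prefix
$password = Read-Host -Prompt 'Service account password' -AsSecureString
New-ADUser -Name 'svc-HRDataConnector' `
    -SamAccountName 'svc-HRDataConnector' `
    -Path 'OU=ServiceAccounts,DC=contoso,DC=com' `
    -AccountPassword $password `
    -Enabled $true `
    -Description 'Service account for the HR data connector'
```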
## Benefits and challenges
-On-premises user accounts can provide significant benefits. They're the most versatile account type for use with services. User accounts used as service accounts can be controlled by all the policies that govern normal user accounts. But you should use them only if you can't use an MSA. Also evaluate whether a computer account is a better option.
+On-premises user accounts are a versatile account type. User accounts used as service accounts are controlled by policies governing user accounts. Use them if you can't use an MSA. Evaluate whether a computer account is a better option.
-The challenges associated with the use of on-premises user accounts are summarized in the following table:
+The challenges of on-premises user accounts are summarized in the following table:
| Challenge | Mitigation |
| - | - |
-| Password management is a manual process that can lead to weaker security and service downtime.| <li>Make sure that password complexity and password changes are governed by a robust process that ensures regular updates with strong passwords.<li>Coordinate password changes with a password update on the service, which will help reduce service downtime. |
-| Identifying on-premises user accounts that are acting as service accounts can be difficult. | <li>Document and maintain records of service accounts that are deployed in your environment.<li>Track the account name and the resources to which they're assigned access.<li>Consider adding a prefix of "svc-" to all user accounts that are used as service accounts. |
-| | |
-
+| Password management is manual and can lead to weaker security and service downtime| - Ensure password complexity and password changes are governed by a robust process that includes regular updates and strong passwords</br> - Coordinate password changes with a password update on the service, which helps reduce service downtime|
+| Identifying on-premises user accounts that are service accounts can be difficult | - Document service accounts deployed in your environment</br> - Track the account name and the resources they can access</br> - Consider adding the prefix svc to user accounts used as service accounts |
## Find on-premises user accounts used as service accounts
-On-premises user accounts are just like any other Active Directory user account. It can be difficult to find such accounts, because no single attribute of a user account identifies it as a service account.
-
-We recommend that you create an easily identifiable naming convention for any user account that you use as a service account. For example, you might add "svc-" as a prefix and name the service ΓÇ£svc-HRDataConnector.ΓÇ¥
+On-premises user accounts are like other AD user accounts. It can be difficult to find the accounts, because no user account attribute identifies them as service accounts. We recommend you create a naming convention for user accounts used as service accounts. For example, add the prefix svc to the service name: svc-HRDataConnector.
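If you adopt a prefix like this, a filtered query lists the matching accounts. A minimal sketch, assuming the svc prefix above:

```PowerShell
# List user accounts that follow the svc- naming convention
Get-ADUser -Filter "Name -like 'svc-*'" -Properties Description |
    Select-Object Name, Enabled, Description
```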
-You can use some of the following criteria to find these service accounts. However, this approach might not find all accounts, such as:
+Use the following criteria to find service accounts. This approach might not find all of them. Search for accounts that are:
-* Accounts that are trusted for delegation.
-* Accounts with service principal names.
-* Accounts with passwords that are set to never expire.
+* Trusted for delegation
+* Configured with service principal names
+* Configured with passwords that never expire
-To find the on-premises user accounts you've created for services, you can run the following PowerShell commands.
+To find the on-premises user accounts used for services, run the following PowerShell commands:
-To find accounts that are trusted for delegation:
+To find accounts trusted for delegation:
```PowerShell
Get-ADObject -Filter {(msDS-AllowedToDelegateTo -like '*') -or (UserAccountContr
```
-To find accounts that have service principal names:
+To find accounts with service principal names:
```PowerShell
Get-ADUser -Filter * -Properties servicePrincipalName | where {$_.servicePrincip
```
-To find accounts with passwords that are set to never expire:
+To find accounts with passwords that never expire:
```PowerShell
Get-ADUser -Filter * -Properties PasswordNeverExpires | where {$_.PasswordNeverE
```
-You can also audit access to sensitive resources, and archive audit logs to a security information and event management (SIEM) system. By using systems such as Azure Log Analytics or Microsoft Sentinel, you can search for and analyze and service accounts.
+You can audit access to sensitive resources, and archive audit logs to a security information and event management (SIEM) system. By using Azure Log Analytics or Microsoft Sentinel, you can search for and analyze service accounts.
-## Assess the security of on-premises user accounts
+## Assess on-premises user account security
-You can assess the security of on-premises user accounts that are being used as service accounts by using the following criteria:
+Use the following criteria to assess the security of on-premises user accounts used as service accounts:
-* What is the password management policy?
-* Is the account a member of any privileged groups?
-* Does the account have read/write permissions to important resources?
+* Password management policy
+* Accounts with membership in privileged groups
+* Read/write permissions for important resources
### Mitigate potential security issues
-Potential security issues and their mitigations for on-premises user accounts are summarized in the following table:
+See the following table for potential on-premises user account security issues and their mitigations:
| Security issue | Mitigation |
| - | - |
-| Password management.| <li>Ensure that password complexity and password change are governed by a robust process that includes regular updates and strong password requirements.<li>Coordinate password changes with a password update to minimize service downtime. |
-| The account is a member of privileged groups.| <li>Review group memberships.<li>Remove the account from privileged groups.<li>Grant the account only the rights and permissions it requires to run its service (consult with service vendor). For example, you might be able to deny sign-in locally or deny interactive sign-in. |
-| The account has read/write permissions to sensitive resources.| <li>Audit access to sensitive resources.<li>Archive audit logs to a SIEM (Azure Log Analytics or Microsoft Sentinel) for analysis.<li>Remediate resource permissions if an undesirable level of access is detected. |
-| | |
+| Password management| - Ensure password complexity and password change are governed by regular updates and strong password requirements</br> - Coordinate password changes with a password update to minimize service downtime |
+| The account is a member of privileged groups| - Review group membership</br> - Remove the account from privileged groups</br> - Grant the account only the rights and permissions required to run its service (consult with the service vendor)</br> - For example, deny local sign-in or interactive sign-in|
+| The account has read/write permissions to sensitive resources| - Audit access to sensitive resources</br> - Archive audit logs to a SIEM: Azure Log Analytics or Microsoft Sentinel</br> - Remediate resource permissions if you detect undesirable access levels |
+## Secure account types
-## Move to more secure account types
-
-Microsoft doesn't recommend that you use on-premises user accounts as service accounts. For any service that uses this type of account, assess whether it can instead be configured to use a gMSA or an sMSA.
-
-Additionally, evaluate whether the service itself could be moved to Azure so that more secure service account types can be used.
+Microsoft doesn't recommend use of on-premises user accounts as service accounts. For services that use this account type, assess if it can be configured to use a gMSA or an sMSA. In addition, evaluate if you can move the service to Azure to enable use of safer account types.
## Next steps
-To learn more about securing service accounts, see the following articles:
+To learn more about securing service accounts:
-* [Introduction to on-premises service accounts](service-accounts-on-premises.md)
+* [Securing on-premises service accounts](service-accounts-on-premises.md)
* [Secure group managed service accounts](service-accounts-group-managed.md) * [Secure standalone managed service accounts](service-accounts-standalone-managed.md)
-* [Secure computer accounts](service-accounts-computer.md)
+* [Secure on-premises computer accounts with AD](service-accounts-computer.md)
* [Govern on-premises service accounts](service-accounts-govern-on-premises.md)-
-
active-directory How To Lifecycle Workflow Sync Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/how-to-lifecycle-workflow-sync-attributes.md
The EmployeeHireDate and EmployeeLeaveDateTime contain dates and times that must
|Scenario|Expression/Format|Target|More Information| |--|--|--|--| |Workday to Active Directory User Provisioning|FormatDateTime([StatusHireDate], , "yyyy-MM-ddzzz", "yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for Workday](../saas-apps/workday-inbound-tutorial.md#below-are-some-example-attribute-mappings-between-workday-and-active-directory-with-some-common-expressions)|
-|SuccessFactors to Active Directory User Provisioning|FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt"," yyyyMMddHHmmss.fZ ")|On-premises AD string attribute|[Attribute mappings for SAP Success Factors](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)|
+|SuccessFactors to Active Directory User Provisioning|FormatDateTime([endDate], ,"M/d/yyyy hh:mm:ss tt","yyyyMMddHHmmss.fZ")|On-premises AD string attribute|[Attribute mappings for SAP Success Factors](../saas-apps/sap-successfactors-inbound-provisioning-tutorial.md)|
|Custom import to Active Directory|Must be in the format "yyyyMMddHHmmss.fZ"|On-premises AD string attribute||
|Microsoft Graph User API|Must be in the format "YYYY-MM-DDThh:mm:ssZ"|EmployeeHireDate and EmployeeLeaveDateTime|See the sketch after this table|
|Workday to Azure AD User Provisioning|Can use a direct mapping. No expression is needed but may be used to adjust the time portion of EmployeeHireDate and EmployeeLeaveDateTime|EmployeeHireDate and EmployeeLeaveDateTime||
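For the Microsoft Graph User API row, the following minimal sketch writes EmployeeHireDate in the required format. It assumes the Microsoft Graph PowerShell SDK is installed and the signed-in account has User.ReadWrite.All; the user and date values are placeholders.

```PowerShell
# Set employeeHireDate in the YYYY-MM-DDThh:mm:ssZ format required by Microsoft Graph
Connect-MgGraph -Scopes 'User.ReadWrite.All'
Invoke-MgGraphRequest -Method PATCH `
    -Uri 'https://graph.microsoft.com/v1.0/users/adele@contoso.com' `
    -Body @{ employeeHireDate = '2023-03-01T08:00:00Z' }
```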
active-directory How To Connect Group Writeback V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-group-writeback-v2.md
These limitations and known issues are specific to group writeback:
- To be backwards compatible with the current version of group writeback, when you enable group writeback, all existing Microsoft 365 groups are written back and created as distribution groups, by default.
- When you disable writeback for a group, the group won't automatically be removed from your on-premises Active Directory, until hard deleted in Azure AD. This behavior can be modified by following the steps detailed in [Modifying group writeback](how-to-connect-modify-group-writeback.md)
- Group Writeback does not support writeback of nested group members that have a scope of ‘Domain local’ in AD, since Azure AD security groups are written back with scope ‘Universal’. If you have a nested group like this, you'll see an export error in Azure AD Connect with the message “A universal group cannot have a local group as a member.” The resolution is to remove the member with scope ‘Domain local’ from the Azure AD group or update the nested group member scope in AD to ‘Global’ or ‘Universal’ group.
-- Group Writeback only supports writing back groups to a single Organization Unit (OU). Once the feature is enabled, you cannot change the OU you selected. A workaround is to disable group writeback entirely in Azure AD Connect and then select a different OU when you re-enable the feature.
- Nested cloud groups that are members of writeback enabled groups must also be enabled for writeback to remain nested in AD.
- Group Writeback setting to manage new security group writeback at scale is not yet available. You will need to configure writeback for each group.
active-directory How To Connect Install Express https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-express.md
Title: 'Azure AD Connect: Getting Started using express settings | Microsoft Docs'
-description: Learn how to download, install and run the setup wizard for Azure AD Connect.
+ Title: 'Azure AD Connect: Get started by using express settings'
+description: Learn how to download, install, and run the setup wizard for Azure AD Connect.
na
-# Getting started with Azure AD Connect using express settings
-Azure AD Connect **Express Settings** is used when you have a single-forest topology and [password hash synchronization](how-to-connect-password-hash-synchronization.md) for authentication. **Express Settings** is the default option and is used for the most commonly deployed scenario. You are only a few short clicks away to extend your on-premises directory to the cloud.
+# Get started with Azure AD Connect by using express settings
-Before you start installing Azure AD Connect, make sure to [download Azure AD Connect](https://go.microsoft.com/fwlink/?LinkId=615771) and complete the pre-requisite steps in [Azure AD Connect: Hardware and prerequisites](how-to-connect-install-prerequisites.md).
+If you have a single-forest topology and use [password hash sync](how-to-connect-password-hash-synchronization.md) for authentication, express settings is a good option to use when you install Azure AD Connect. Express settings is the default option to install Azure AD Connect, and it's used for the most commonly deployed scenario. It's only a few short steps to extend your on-premises directory to the cloud.
-If express settings does not match your topology, see [related documentation](#related-documentation) for other scenarios.
+Before you start installing Azure AD Connect, [download Azure AD Connect](https://go.microsoft.com/fwlink/?LinkId=615771), and be sure to complete the prerequisite steps in [Azure AD Connect: Hardware and prerequisites](how-to-connect-install-prerequisites.md).
+
+If the express settings installation doesn't match your topology, see [Related articles](#related-articles) for information about other scenarios.
## Express installation of Azure AD Connect
-1. Sign in as a local administrator to the server you wish to install Azure AD Connect on. You should do this on the server you wish to be the sync server.
-2. Navigate to and double-click **AzureADConnect.msi**.
-3. On the Welcome screen, select the box agreeing to the licensing terms and click **Continue**.
-4. On the Express settings screen, click **Use express settings**.
- ![Welcome to Azure AD Connect](./media/how-to-connect-install-express/express.png)
-5. On the Connect to Azure AD screen, enter the username and password of a Hybrid Identity Administrator for your Azure AD. Click **Next**.
- ![Connect to Azure AD](./media/how-to-connect-install-express/connectaad.png)
- If you receive an error and have problems with connectivity, then see [Troubleshoot connectivity problems](tshoot-connect-connectivity.md).
-6. On the Connect to AD DS screen, enter the username and password for an enterprise admin account. You can enter the domain part in either NetBios or FQDN format, that is, FABRIKAM\administrator or fabrikam.com\administrator. Click **Next**.
- ![Connect to AD DS](./media/how-to-connect-install-express/connectad.png)
-7. The [**Azure AD sign-in configuration**](plan-connect-user-signin.md#azure-ad-sign-in-configuration) page only shows if you did not complete [verify your domains](../fundamentals/add-custom-domain.md) in the [prerequisites](how-to-connect-install-prerequisites.md).
- ![Unverified domains](./media/how-to-connect-install-express/unverifieddomain.png)
- If you see this page, then review every domain marked **Not Added** and **Not Verified**. Make sure those domains you use have been verified in Azure AD. Click the Refresh symbol when you have verified your domains.
-8. On the Ready to configure screen, click **Install**.
- * Optionally on the Ready to configure page, you can unselect the **Start the synchronization process as soon as configuration completes** checkbox. You should unselect this checkbox if you want to do additional configuration, such as [filtering](how-to-connect-sync-configure-filtering.md). If you unselect this option, the wizard configures sync but leaves the scheduler disabled. It does not run until you enable it manually by [rerunning the installation wizard](how-to-connect-installation-wizard.md).
- * Leaving the **Start the synchronization process as soon as configuration completes** checkbox enabled will immediately trigger a full synchronization to Azure AD of all users, groups, and contacts.
- * If you have Exchange in your on-premises Active Directory, then you also have an option to enable [**Exchange Hybrid deployment**](/exchange/exchange-hybrid). Enable this option if you plan to have Exchange mailboxes both in the cloud and on-premises at the same time.
- ![Ready to configure Azure AD Connect](./media/how-to-connect-install-express/readytoconfigure.png)
-9. When the installation completes, click **Exit**.
-10. After the installation has completed, sign off and sign in again before you use Synchronization Service Manager or Synchronization Rule Editor.
+1. Sign in as Local Administrator on the server you want to install Azure AD Connect on.
+ The server you sign in on will be the sync server.
+1. Go to *AzureADConnect.msi* and double-click to open the installation file.
+1. In **Welcome**, select the checkbox to agree to the licensing terms, and then select **Continue**.
+1. In **Express settings**, select **Use express settings**.
-## Next steps
-Now that you have Azure AD Connect installed you can [verify the installation and assign licenses](how-to-connect-post-installation.md).
+ :::image type="content" source="media/how-to-connect-install-express/express.png" alt-text="Screenshot that shows the welcome page in the Azure AD Connect installation wizard.":::
+
+1. In **Connect to Azure AD**, enter the username and password of the Hybrid Identity Administrator account, and then select **Next**.
+
+ :::image type="content" source="media/how-to-connect-install-express/connectaad.png" alt-text="Screenshot that shows the Connect to Azure AD page in the installation wizard.":::
+
+ If an error message appears or if you have problems with connectivity, see [Troubleshoot connectivity problems](tshoot-connect-connectivity.md).
+
+1. In **Connect to AD DS**, enter the username and password for an Enterprise Admin account. You can enter the domain part in either NetBIOS or FQDN format, like `FABRIKAM\administrator` or `fabrikam.com\administrator`. Select **Next**.
+
+ :::image type="content" source="media/how-to-connect-install-express/connectad.png" alt-text="Screenshot that shows the Connect to AD DS page in the installation wizard.":::
-Learn more about these features, which were enabled with the installation: [Automatic upgrade](how-to-connect-install-automatic-upgrade.md), [Prevent accidental deletes](how-to-connect-sync-feature-prevent-accidental-deletes.md), and [Azure AD Connect Health](how-to-connect-health-sync.md).
+1. The [Azure AD sign-in configuration](plan-connect-user-signin.md#azure-ad-sign-in-configuration) page appears only if you didn't complete the step to [verify your domains](../fundamentals/add-custom-domain.md) in the [prerequisites](how-to-connect-install-prerequisites.md).
-Learn more about these common topics: [scheduler and how to trigger sync](how-to-connect-sync-feature-scheduler.md).
+ :::image type="content" source="media/how-to-connect-install-express/unverifieddomain.png" alt-text="Screenshot that shows examples of unverified domains in the installation wizard.":::
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+ If you see this page, review each domain that's marked **Not Added** or **Not Verified**. Make sure that those domains have been verified in Azure AD. When you've verified your domains, select the **Refresh** icon.
+1. In **Ready to configure**, select **Install**.
-## Related documentation
+ - Optionally in **Ready to configure**, you can clear the **Start the synchronization process as soon as configuration completes** checkbox. You should clear this checkbox if you want to do more configuration, such as to add [filtering](how-to-connect-sync-configure-filtering.md). If you clear this option, the wizard configures sync but leaves the scheduler disabled. The scheduler doesn't run until you enable it manually by [rerunning the installation wizard](how-to-connect-installation-wizard.md) or by using the scheduler sketch after these steps.
+ - If you leave the **Start the synchronization process as soon as configuration completes** checkbox selected, a full sync of all users, groups, and contacts to Azure AD begins immediately.
+ - If you have Exchange in your instance of Windows Server Active Directory, you also have the option to enable [Exchange Hybrid deployment](/exchange/exchange-hybrid). Enable this option if you plan to have Exchange mailboxes both in the cloud and on-premises at the same time.
+
+ :::image type="content" source="media/how-to-connect-install-express/readytoconfigure.png" alt-text="Screenshot that shows the Ready to configure Azure AD Connect page in the wizard.":::
+
+1. When the installation is finished, select **Exit**.
+1. Before you use Synchronization Service Manager or Synchronization Rule Editor, sign out, and then sign in again.
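If you cleared the synchronization checkbox, the following minimal sketch shows how to check and enable the scheduler later from PowerShell on the Azure AD Connect server. It assumes the ADSync module that the installation adds is available.

```PowerShell
# Check the scheduler state, enable it, and trigger an initial sync
Get-ADSyncScheduler
Set-ADSyncScheduler -SyncCycleEnabled $true
Start-ADSyncSyncCycle -PolicyType Initial
```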
+
+## Related articles
+
+For more information about Azure AD Connect, see these articles:
| Topic | Link | | | |
-| Azure AD Connect overview | [Integrate your on-premises directories with Azure Active Directory](whatis-hybrid-identity.md)
-| Install using customized settings | [Custom installation of Azure AD Connect](how-to-connect-install-custom.md) |
+| Azure AD Connect overview | [Integrate your on-premises directories with Azure Active Directory](whatis-hybrid-identity.md) |
+| Install by using customized settings | [Custom installation of Azure AD Connect](how-to-connect-install-custom.md) |
| Upgrade from DirSync | [Upgrade from Azure AD sync tool (DirSync)](how-to-dirsync-upgrade-get-started.md)| | Accounts used for installation | [More about Azure AD Connect credentials and permissions](reference-connect-accounts-permissions.md) |+
+## Next steps
+
+- Now that you have Azure AD Connect installed, you can [verify the installation and assign licenses](how-to-connect-post-installation.md).
+- Learn more about these features, which were enabled with the installation: [Automatic upgrade](how-to-connect-install-automatic-upgrade.md), [prevent accidental deletes](how-to-connect-sync-feature-prevent-accidental-deletes.md), and [Azure AD Connect Health](how-to-connect-health-sync.md).
+- Learn more about the [scheduler and how to trigger sync](how-to-connect-sync-feature-scheduler.md).
+- Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory How To Connect Install Move Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-install-move-db.md
Title: 'Move Azure AD Connect database from SQL Server Express to SQL Server. | Microsoft Docs'
-description: This document describes how to move the Azure AD Connect database from the local SQL Server Express server to a remote SQL Server.
+ Title: 'Move the Azure AD Connect database from SQL Server Express to remote SQL Server'
+description: Learn how to move the Azure AD Connect database from the default local SQL Server Express server to a computer running remote SQL Server.
-# Move Azure AD Connect database from SQL Server Express to SQL Server
+# Move Azure AD Connect database from SQL Server Express to remote SQL Server
-This document describes how to move the Azure AD Connect database from the local SQL Server Express server to a remote SQL Server. You can use the following procedures below to accomplish this task.
+This article describes how to move the Azure AD Connect database from the local SQL Server Express server to a computer running remote SQL Server. You can use the steps described in this article to accomplish this task.
-## About this scenario
-The following is some brief information about this scenario. In this scenario, Azure AD Connect version (1.1.819.0) is installed on a single Windows Server 2016 domain controller. It is using the built-in SQL Server 2012 Express Edition for its database. The database will be moved to a SQL Server 2017 server.
+## About the scenario
-![scenario architecture](media/how-to-connect-install-move-db/move1.png)
+In this scenario, Azure AD Connect version 1.1.819.0 is installed on a single Windows Server 2016 domain controller. Azure AD Connect is using the built-in SQL Server 2012 Express Edition for its database. The database will be moved to a SQL Server 2017 server.
+ ## Move the Azure AD Connect database
-Use the following steps to move the Azure AD Connect database to a remote SQL Server.
-
-1. On the Azure AD Connect server, go to **Services** and stop the **Microsoft Azure AD Sync** service.
-2. Locate the **%ProgramFiles%\Microsoft Azure AD Sync\Data** folder and copy the **ADSync.mdf** and **ADSync_log.ldf** files to the remote SQL Server.
-3. Restart the **Microsoft Azure AD Sync** service on the Azure AD Connect server.
-4. Un-install Azure AD Connect by going to Control Panel
-5. On the remote SQL server, open SQL Server Management Studio.
-6. On Databases, right-click and select Attach.
-7. On the **Attach Databases** screen, click **Add** and navigate to the ADSync.mdf file. Click **OK**.
- ![attach database](media/how-to-connect-install-move-db/move2.png)
-
-8. Once the database is attached, go back to the Azure AD Connect server and install Azure AD Connect.
-9. Once the MSI installation completes, the Azure AD Connect wizard starts with the Express mode setup. Close the screen by clicking the Exit icon.
- ![Screenshot that shows the "Welcome to Azure A D Connect" page with "Express Settings" in the left-side menu highlighted.](./media/how-to-connect-install-move-db/db1.png)
-10. Start a new command prompt or PowerShell session. Navigate to folder \<drive>\program files\Microsoft Azure AD Connect. Run command .\AzureADConnect.exe /useexistingdatabase to start the Azure AD Connect wizard in ΓÇ£Use existing databaseΓÇ¥ setup mode.
- ![PowerShell](./media/how-to-connect-install-move-db/db2.png)
-11. You are greeted with the Welcome to Azure AD Connect screen. Once you agree to the license terms and privacy notice, click **Continue**.
- ![Screenshot that shows the "Welcome to Azure A D Connect" page](./media/how-to-connect-install-move-db/db3.png)
-12. On the **Install required components** screen, the **Use an existing SQL Server** option is enabled. Specify the name of the SQL server that is hosting the ADSync database. If the SQL engine instance used to host the ADSync database is not the default instance on the SQL server, you must specify the SQL engine instance name. Further, if SQL browsing is not enabled, you must also specify the SQL engine instance port number. For example:
- ![Screenshot that shows the "Install required components" page.](./media/how-to-connect-install-move-db/db4.png)
-
-13. On the **Connect to Azure AD** screen, you must provide the credentials of a Hybrid Identity Administrator of your Azure AD directory. The recommendation is to use an account in the default onmicrosoft.com domain. This account is only used to create a service account in Azure AD and is not used after the wizard has completed.
- ![Connect](./media/how-to-connect-install-move-db/db5.png)
-
-14. On the **Connect your directories** screen, the existing AD forest configured for directory synchronization is listed with a red cross icon beside it. To synchronize changes from an on-premises AD forest, an AD DS account is required. The Azure AD Connect wizard is unable to retrieve the credentials of the AD DS account stored in the ADSync database because the credentials are encrypted and can only be decrypted by the previous Azure AD Connect server. Click **Change Credentials** to specify the AD DS account for the AD forest.
- ![Directories](./media/how-to-connect-install-move-db/db6.png)
-
-
-15. In the pop-up dialog, you can either (i) provide an Enterprise Admin credential and let Azure AD Connect create the AD DS account for you, or (ii) create the AD DS account yourself and provide its credential to Azure AD Connect. Once you have selected an option and provide the necessary credentials, click **OK** to close the pop-up dialog.
- ![Screenshot of the "A D forest account" pop-up dialog with the "Create new A D account" selected.](./media/how-to-connect-install-move-db/db7.png)
-
-
-16. Once the credentials are provided, the red cross icon is replaced with a green tick icon. Click **Next**.
- ![Screenshot that shows the "Connect your directories" page after entering account credentials.](./media/how-to-connect-install-move-db/db8.png)
-
-
-17. On the **Ready to configure** screen, click **Install**.
- ![Welcome](./media/how-to-connect-install-move-db/db9.png)
-
-
-18. Once installation completes, the Azure AD Connect server is automatically enabled for Staging Mode. It is recommended that you review the server configuration and pending exports for unexpected changes before disabling Staging Mode.
-## Next steps
+Use the following steps to move the Azure AD Connect database to a computer running remote SQL Server:
+
+1. On the Azure AD Connect server, go to **Services** and stop the Microsoft Azure AD Sync service.
+1. Go to the *%ProgramFiles%\Microsoft Azure AD Sync\Data* folder and copy the *ADSync.mdf* and *ADSync_log.ldf* files to the computer running remote SQL Server.
+1. Restart the Microsoft Azure AD Sync service on the Azure AD Connect server.
+1. Uninstall Azure AD Connect by going to **Control Panel** > **Programs** > **Programs and Features**. Select **Microsoft Azure AD Connect**, and then select **Uninstall**.
+1. On the computer running remote SQL Server, open SQL Server Management Studio.
+1. Right-click **Databases** and select **Attach**.
+1. In **Attach Databases**, select **Add** and go to the *ADSync.mdf* file. Select **OK**.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/move2.png" alt-text="Screenshot that shows the options in the Attach Databases pane.":::
+
+1. When the database is attached, go back to the Azure AD Connect server and install Azure AD Connect.
+1. When the MSI installation is finished, the Azure AD Connect wizard starts in express settings mode. Select the **Exit** icon to close the page.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db1.png" alt-text="Screenshot that shows the Welcome to Azure AD Connect page with Express Settings in the left menu highlighted.":::
+
+1. Open a new Command Prompt window or PowerShell session. Go to the folder *\<drive>\program files\Microsoft Azure AD Connect*. Run the command `.\AzureADConnect.exe /useexistingdatabase` to start the Azure AD Connect wizard in **Use existing database** setup mode.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db2.png" alt-text="Screenshot that shows the command described in the step in PowerShell.":::
+
+1. In **Welcome to Azure AD Connect**, review and agree to the license terms and privacy notice, and then select **Continue**.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db3.png" alt-text="Screenshot that shows the Welcome to Azure AD Connect page.":::
+
+1. In **Install required components**, the **Use an existing SQL Server** option is enabled. Specify the name of the SQL Server instance that's hosting the ADSync database. If the SQL engine instance that's used to host the ADSync database isn't the default instance in SQL Server, you must specify the name of the SQL engine instance.
+
+ Also, if SQL browsing isn't enabled, you must specify the SQL engine instance port number. For example:
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db4.png" alt-text="Screenshot that shows the options on the Install required components page.":::
+
+1. In **Connect to Azure AD**, you must provide the credentials of a Hybrid Identity Administrator for your directory in Azure Active Directory (Azure AD).
+
+ We recommend that you use an account in the default `onmicrosoft.com` domain. This account is used only to create a service account in Azure AD. The account isn't used after the wizard is finished.
-- Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).-- [Install Azure AD Connect using an existing ADSync database](how-to-connect-install-existing-database.md)-- [Install Azure AD Connect using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md)
+ :::image type="content" source="media/how-to-connect-install-move-db/db5.png" alt-text="Screenshot that shows the options on the Connect to Azure AD page.":::
+
+1. In **Connect your directories**, the existing Windows Server Active Directory (Windows Server AD) forest that's configured for directory sync is listed with a red X icon beside it. To sync changes from Windows Server AD, an Active Directory Domain Services (AD DS) account is required. Select **Change Credentials** to specify the AD DS account for the Windows Server AD forest.
+
+ The Azure AD Connect wizard can't retrieve the credentials of the AD DS account that are stored in the ADSync database because the credentials are encrypted. The credentials can be decrypted only by the earlier instance of the Azure AD Connect server.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db6.png" alt-text="Screenshot that shows the options on the Connect your directories page.":::
+
+1. In the dialog, choose one of the following options:
+
+ 1. Enter the credentials for an Enterprise Admin and let Azure AD Connect create the AD DS account for you.
+ 1. Create the AD DS account yourself and enter its credentials in Azure AD Connect.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db7.png" alt-text="Screenshot that shows the Windows Server AD forest account dialog with Create new AD account selected.":::
+
+ After you select an option and enter the credentials, select **OK**.
+
+1. After the credentials are entered, the red X icon is replaced with a green checkmark icon. Select **Next**.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db8.png" alt-text="Screenshot that shows the Connect your directories page after you enter account credentials.":::
+
+1. In **Ready to configure**, select **Install**.
+
+ :::image type="content" source="media/how-to-connect-install-move-db/db9.png" alt-text="Screenshot that shows the Azure AD Connect Welcome page.":::
+
+1. When installation is finished, the Azure AD Connect server is automatically enabled for staging mode. We recommend that you review the server configuration and pending exports for unexpected changes before you disable staging mode.
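One quick way to confirm the server is still in staging mode before you review pending exports is a scheduler query; a minimal sketch using the ADSync module on the Azure AD Connect server:

```PowerShell
# StagingModeEnabled should be True until the configuration review is complete
Get-ADSyncScheduler | Select-Object StagingModeEnabled, SyncCycleEnabled
```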
+
+## Next steps
+- Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+- Get more information about [installing Azure AD Connect by using an existing ADSync database](how-to-connect-install-existing-database.md).
+- Learn how to [install Azure AD Connect by using SQL delegated administrator permissions](how-to-connect-install-sql-delegation.md).
active-directory How To Connect Pta Security Deep Dive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta-security-deep-dive.md
The following sections discuss these phases in detail.
### Authentication Agent installation
-Only Hybrid Identity Administrators or Hybrid Identity administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
+Only Hybrid Identity Administrators can install an Authentication Agent (by using Azure AD Connect or standalone) on an on-premises server. Installation adds two new entries to the **Control Panel** > **Programs** > **Programs and Features** list:
- The Authentication Agent application itself. This application runs with [NetworkService](/windows/win32/services/networkservice-account) privileges. - The Updater application that's used to auto-update the Authentication Agent. This application runs with [LocalSystem](/windows/win32/services/localsystem-account) privileges.
The Authentication Agents use the following steps to register themselves with Az
![Agent registration](./media/how-to-connect-pta-security-deep-dive/pta1.png)
-1. Azure AD first requests that a Hybrid Identity Administratoristrator or hybrid identity administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the
+1. Azure AD first requests that a Hybrid Identity Administrator sign in to Azure AD with their credentials. During sign-in, the Authentication Agent acquires an access token that it can use on behalf of the
2. The Authentication Agent then generates a key pair: a public key and a private key. - The key pair is generated through standard RSA 2048-bit encryption. - The private key stays on the on-premises server where the Authentication Agent resides.
The Authentication Agents use the following steps to register themselves with Az
- The access token acquired in step 1. - The public key generated in step 2. - A Certificate Signing Request (CSR or Certificate Request). This request applies for a digital identity certificate, with Azure AD as its certificate authority (CA).
-4. Azure AD validates the access token in the registration request and verifies that the request came from a Hybrid Identity Administrator or hybrid identity administrator.
+4. Azure AD validates the access token in the registration request and verifies that the request came from a Hybrid Identity Administrator.
5. Azure AD then signs and sends a digital identity certificate back to the Authentication Agent. - The root CA in Azure AD is used to sign the certificate.
To auto-update an Authentication Agent:
- [Frequently asked questions](how-to-connect-pta-faq.yml): Find answers to frequently asked questions. - [Troubleshoot](tshoot-connect-pass-through-authentication.md): Learn how to resolve common problems with the Pass-through Authentication feature. - [Azure AD Seamless SSO](how-to-connect-sso.md): Learn more about this complementary feature.
+- [Hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.
active-directory How To Connect Pta https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-pta.md
This feature is an alternative to [Azure AD Password Hash Synchronization](how-t
![Azure AD Pass-through Authentication](./media/how-to-connect-pta/pta1.png)
-You can combine Pass-through Authentication with the [Seamless Single Sign-On](how-to-connect-sso.md) feature. This way, when your users are accessing applications on their corporate machines inside your corporate network, they don't need to type in their passwords to sign in.
+You can combine Pass-through Authentication with the [Seamless Single Sign-On](how-to-connect-sso.md) feature. If you have Windows 10 or later machines, use [Hybrid Azure AD Join (AADJ)](../devices/howto-hybrid-azure-ad-join.md). This way, when your users are accessing applications on their corporate machines inside your corporate network, they don't need to type in their passwords to sign in.
## Key benefits of using Azure AD Pass-through Authentication
You can combine Pass-through Authentication with the [Seamless Single Sign-On](h
- [Quickstart](how-to-connect-pta-quick-start.md) - Get up and running Azure AD Pass-through Authentication. - [Migrate from AD FS to Pass-through Authentication](https://github.com/Identity-Deployment-Guides/Identity-Deployment-Guides/blob/master/Authentication/Migrating%20from%20Federated%20Authentication%20to%20Pass-through%20Authentication.docx?raw=true) - A detailed guide to migrate from AD FS (or other federation technologies) to Pass-through Authentication. - [Smart Lockout](../authentication/howto-password-smart-lockout.md) - Configure Smart Lockout capability on your tenant to protect user accounts.
+- [Hybrid Azure AD join](../devices/howto-hybrid-azure-ad-join.md): Configure Hybrid Azure AD join capability on your tenant for SSO across your cloud and on-premises resources.
- [Current limitations](how-to-connect-pta-current-limitations.md) - Learn which scenarios are supported and which ones are not. - [Technical Deep Dive](how-to-connect-pta-how-it-works.md) - Understand how this feature works. - [Frequently Asked Questions](how-to-connect-pta-faq.yml) - Answers to frequently asked questions.
active-directory Reference Connect Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-ports.md
This table describes the ports and protocols that are required for communication
| HTTP |80 (TCP) |Used to download CRLs (Certificate Revocation Lists) to verify TLS/SSL certificates. | | HTTPS |443 (TCP) |Used to synchronize with Azure AD. |
-For a list of URLs and IP addresses you need to open in your firewall, see [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) and [Troubleshooting Azure AD Connect connectivity](tshoot-connect-connectivity.md#troubleshoot-connectivity-issues-in-the-installation-wizard).
+For a list of URLs and IP addresses you need to open in your firewall, see [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) and [Troubleshooting Azure AD Connect connectivity](tshoot-connect-connectivity.md#connectivity-issues-in-the-installation-wizard).
## Table 3 - Azure AD Connect and AD FS Federation Servers/WAP This table describes the ports and protocols that are required for communication between the Azure AD Connect server and AD FS Federation/WAP servers.
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
Title: 'Azure AD Connect: Troubleshoot Azure AD connectivity issues | Microsoft Docs'
-description: Explains how to troubleshoot connectivity issues with Azure AD Connect.
+ Title: 'Azure AD Connect: Troubleshoot Azure AD connectivity issues'
+description: Learn how to troubleshoot connectivity issues with Azure AD Connect.
editor: '' na
-# Troubleshoot Azure AD connectivity
-This article explains how connectivity between Azure AD Connect and Azure AD works and how to troubleshoot connectivity issues. These issues are most likely to be seen in an environment with a proxy server.
+# Troubleshoot Azure AD Connect connectivity issues
-## Troubleshoot connectivity issues in the installation wizard
-Azure AD Connect uses the MSAL library for authentication. The installation wizard and the sync engine proper require machine.config to be properly configured since these two are .NET applications.
+This article explains how connectivity between Azure AD Connect and Azure Active Directory (Azure AD) works and how to troubleshoot connectivity issues. These issues are most likely to be seen in an environment that uses a proxy server.
->[!NOTE]
->Azure AD Connect v1.6.xx.x uses the ADAL library. The ADAL library is being deprecated and support will end in June 2022. Microsoft recommends that you upgrade to the latest version of [Azure AD Connect v2](whatis-azure-ad-connect-v2.md).
+## Connectivity issues in the installation wizard
-In this article, we show how Fabrikam connects to Azure AD through its proxy. The proxy server is named fabrikamproxy and is using port 8080.
+Azure AD Connect uses the Microsoft Authentication Library (MSAL) for authentication. The installation wizard and the sync engine require machine.config to be properly configured because these two are .NET applications.
-First we need to make sure [**machine.config**](how-to-connect-install-prerequisites.md#connectivity) is correctly configured and **Microsoft Azure AD Sync service** has been restarted once after the machine.config file update.
-![Screenshot shows part of the machine dot config file.](./media/tshoot-connect-connectivity/machineconfig.png)
+> [!NOTE]
+> Azure AD Connect v1.6.xx.x uses the Active Directory Authentication Library (ADAL). The ADAL is being deprecated and support will end in June 2022. We recommend that you upgrade to the latest version of [Azure AD Connect v2](whatis-azure-ad-connect-v2.md).
+
+In this article, we show how Fabrikam connects to Azure AD through its proxy. The proxy server is named `fabrikamproxy` and uses port 8080.
+
+First, make sure that [machine.config](how-to-connect-install-prerequisites.md#connectivity) is correctly configured and that the Microsoft Azure AD Sync service has been restarted once after the *machine.config* file update.
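To quickly verify that outbound requests from the server go through the proxy, you can send a test request from PowerShell. A minimal sketch, reusing the fabrikamproxy example above:

```PowerShell
# A successful response (or a redirect to a sign-in page) indicates the proxy passes HTTPS traffic to Azure AD
Invoke-WebRequest -Uri 'https://login.microsoftonline.com' -UseBasicParsing -Proxy 'http://fabrikamproxy:8080'
```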
+ > [!NOTE]
-> In some non-Microsoft blogs, it is documented that changes should be made to miiserver.exe.config instead. However, this file is overwritten on every upgrade so even if it works during initial install, the system stops working on first upgrade. For that reason, the recommendation is to update machine.config instead.
->
->
+> Some non-Microsoft blogs indicate you should make changes to *miiserver.exe.config* instead of the *machine.config* file. However, the *miiserver.exe.config* file is overwritten on every upgrade. Even if the file works during the initial installation, the system stops working during the first upgrade. For that reason, we recommend that you update *machine.config* as described in this article.
The proxy server must also have the required URLs opened. The official list is documented in [Office 365 URLs and IP address ranges](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2).
-Of these URLs, the following table is the absolute bare minimum to be able to connect to Azure AD at all. This list does not include any optional features, such as password writeback, or Azure AD Connect Health. It is documented here to help in troubleshooting for the initial configuration.
+Of these URLs, those listed in the following table are the absolute bare minimum required to connect to Azure AD at all. This list doesn't include any optional features, such as password writeback or Azure AD Connect Health. The information is provided here to help with troubleshooting for the initial configuration.
| URL | Port | Description | | | | |
-| mscrl.microsoft.com |HTTP/80 |Used to download CRL lists. |
-| \*.verisign.com |HTTP/80 |Used to download CRL lists. |
-| \*.entrust.net |HTTP/80 |Used to download CRL lists for MFA. |
-| \*.management.core.windows.net (Azure Storage)</br>\*.graph.windows.net (Azure AD Graph)|HTTPS/443|Used for the various Azure services|
-| secure.aadcdn.microsoftonline-p.com |HTTPS/443 |Used for MFA. |
-| \*.microsoftonline.com |HTTPS/443 |Used to configure your Azure AD directory and import/export data. |
-| \*.crl3.digicert.com |HTTP/80 |Used to verify certificates. |
-| \*.crl4.digicert.com |HTTP/80 |Used to verify certificates. |
-| \*.ocsp.digicert.com |HTTP/80 |Used to verify certificates. |
-| \*.www.d-trust.net |HTTP/80 |Used to verify certificates. |
-| \*.root-c3-ca2-2009.ocsp.d-trust.net |HTTP/80 |Used to verify certificates. |
-| \*.crl.microsoft.com |HTTP/80 |Used to verify certificates. |
-| \*.oneocsp.microsoft.com |HTTP/80 |Used to verify certificates. |
-| \*.ocsp.msocsp.com |HTTP/80 |Used to verify certificates. |
+| `mscrl.microsoft.com` |HTTP/80 |Used to download certificate revocation list (CRL) lists. |
+| `*.verisign.com` |HTTP/80 |Used to download CRL lists. |
+| `*.entrust.net` |HTTP/80 |Used to download CRL lists for multifactor authentication (MFA). |
+| `*.management.core.windows.net` (Azure Storage)</br>`*.graph.windows.net` (Azure AD Graph)|HTTPS/443|Used for the various Azure services.|
+| `secure.aadcdn.microsoftonline-p.com` |HTTPS/443 |Used for MFA. |
+| `*.microsoftonline.com` |HTTPS/443 |Used to configure your Azure AD directory and import/export data. |
+| `*.crl3.digicert.com` |HTTP/80 |Used to verify certificates. |
+| `*.crl4.digicert.com` |HTTP/80 |Used to verify certificates. |
+| `*.ocsp.digicert.com` |HTTP/80 |Used to verify certificates. |
+| `*.www.d-trust.net` |HTTP/80 |Used to verify certificates. |
+| `*.root-c3-ca2-2009.ocsp.d-trust.net` |HTTP/80 |Used to verify certificates. |
+| `*.crl.microsoft.com` |HTTP/80 |Used to verify certificates. |
+| `*.oneocsp.microsoft.com` |HTTP/80 |Used to verify certificates. |
+| `*.ocsp.msocsp.com` |HTTP/80 |Used to verify certificates. |
## Errors in the wizard
-The installation wizard is using two different security contexts. On the page **Connect to Azure AD**, it is using the currently signed in user. On the page **Configure**, it is changing to the [account running the service for the sync engine](reference-connect-accounts-permissions.md#adsync-service-account). If there is an issue, it appears most likely already at the **Connect to Azure AD** page in the wizard since the proxy configuration is global.
-The following issues are the most common errors you encounter in the installation wizard.
+The installation wizard uses two different security contexts. On the **Connect to Azure AD** page, it uses the user who is currently signed in. On the **Configure** page, it changes to the [account running the service for the sync engine](reference-connect-accounts-permissions.md#adsync-service-account). If an issue occurs, the error most likely will appear on the **Connect to Azure AD** page in the wizard because the proxy configuration is global.
+
+The following issues are the most common errors you might encounter in the installation wizard.
+
+### The installation wizard hasn't been correctly configured
-### The installation wizard has not been correctly configured
-This error appears when the wizard itself cannot reach the proxy.
-![Screenshot shows an error: Unable to validate credentials.](./media/tshoot-connect-connectivity/nomachineconfig.png)
+This error appears when the wizard itself can't reach the proxy.
-* If you see this error, verify the [machine.config](how-to-connect-install-prerequisites.md#connectivity) has been correctly configured.
-* If that looks correct, follow the steps in [Verify proxy connectivity](#verify-proxy-connectivity) to see if the issue is present outside the wizard as well.
+
+If you see this error, verify that the [machine.config file](how-to-connect-install-prerequisites.md#connectivity) is correctly configured. If *machine.config* looks correct, complete the steps in [Verify proxy connectivity](#verify-proxy-connectivity) to see if the issue is also present outside the wizard.
### A Microsoft account is used
-If you use a **Microsoft account** rather than a **school or organization** account, you see a generic error.
-![A Microsoft Account is used](./media/tshoot-connect-connectivity/unknownerror.png)
-### The MFA endpoint cannot be reached
-This error appears if the endpoint `https://secure.aadcdn.microsoftonline-p.com` cannot be reached and your Hybrid Identity Administrator has MFA enabled.
-![nomachineconfig](./media/tshoot-connect-connectivity/nomicrosoftonlinep.png)
+If you use a *Microsoft account* instead of a *school or organization account*, you see a generic error:
++
+### The MFA endpoint can't be reached
+
+This error appears if the endpoint `https://secure.aadcdn.microsoftonline-p.com` can't be reached and your Hybrid Identity Administrator has MFA enabled.
+
-* If you see this error, verify that the endpoint **secure.aadcdn.microsoftonline-p.com** has been added to the proxy.
+If you see this error, verify that the endpoint `secure.aadcdn.microsoftonline-p.com` has been added to the proxy.
-### The password cannot be verified
-If the installation wizard is successful in connecting to Azure AD, but the password itself cannot be verified you see this error:
-![Bad password.](./media/tshoot-connect-connectivity/badpassword.png)
+### The password can't be verified
-* Is the password a temporary password and must be changed? Is it actually the correct password? Try to sign in to `https://login.microsoftonline.com` (on another computer than the Azure AD Connect server) and verify the account is usable.
+If the installation wizard is successful in connecting to Azure AD but the password itself can't be verified, you see this error:
++
+Is the password a temporary password that must be changed? Is it actually the correct password? Try to sign in to `https://login.microsoftonline.com` on a different computer than the Azure AD Connect server and verify that the account is usable.
### Verify proxy connectivity
-To verify if the Azure AD Connect server has actual connectivity with the Proxy and Internet, use some PowerShell to see if the proxy is allowing web requests or not. In a PowerShell prompt, run `Invoke-WebRequest -Uri https://adminwebservice.microsoftonline.com/ProvisioningService.svc`. (Technically the first call is to `https://login.microsoftonline.com` and this URI works as well, but the other URI is faster to respond.)
-PowerShell uses the configuration in machine.config to contact the proxy. The settings in winhttp/netsh should not impact these cmdlets.
+To check whether the Azure AD Connect server is connecting to the proxy and the internet, use some PowerShell cmdlets to see if the proxy is allowing web requests. In PowerShell, run `Invoke-WebRequest -Uri https://adminwebservice.microsoftonline.com/ProvisioningService.svc`. (Technically, the first call is to `https://login.microsoftonline.com`, and this URI also works, but the other URI is quicker to respond.)
+
+PowerShell uses the configuration in *machine.config* to contact the proxy. The settings in *winhttp/netsh* shouldn't affect these cmdlets.
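If you want to capture the result of that call in a script, a small wrapper such as the following sketch works. The messages are illustrative only and aren't part of the official guidance.

```powershell
# Sketch: check whether the proxy allows the request (run on the Azure AD Connect server).
try {
    $response = Invoke-WebRequest -Uri 'https://adminwebservice.microsoftonline.com/ProvisioningService.svc' -UseBasicParsing
    Write-Output "Connectivity through the proxy looks good (HTTP $($response.StatusCode))."
}
catch {
    Write-Output "Request failed: $($_.Exception.Message)"
}
```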
+
+If the proxy is correctly configured, a success status appears:
++
+If you see the message **Unable to connect to the remote server**, either PowerShell is trying to make a direct call without using the proxy, or DNS isn't correctly configured. Make sure that the *machine.config* file is correctly configured.
-If the proxy is correctly configured, you should get a success status:
-![Screenshot that shows the success status when the proxy is configured correctly.](./media/tshoot-connect-connectivity/invokewebrequest200.png)
-If you receive **Unable to connect to the remote server**, then PowerShell is trying to make a direct call without using the proxy or DNS is not correctly configured. Make sure the **machine.config** file is correctly configured.
-![unabletoconnect](./media/tshoot-connect-connectivity/invokewebrequestunable.png)
+If the proxy isn't correctly configured, a 403 or 407 error message appears:
-If the proxy is not correctly configured, you get an error:
-![proxy200](./media/tshoot-connect-connectivity/invokewebrequest403.png)
-![proxy407](./media/tshoot-connect-connectivity/invokewebrequest407.png)
++
+The following table describes 403 and 407 proxy errors (a sketch for checking the accounts mentioned in the 407 row follows the table):
| Error | Error Text | Comment |
| --- | --- | --- |
-| 403 |Forbidden |The proxy has not been opened for the requested URL. Revisit the proxy configuration and make sure the [URLs](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) have been opened. |
-| 407 |Proxy Authentication Required |The proxy server required a sign-in and none was provided. If your proxy server requires authentication, make sure to have this setting configured in the machine.config. Also make sure you are using domain accounts for the user running the wizard and for the service account. |
+| 403 |Forbidden |The proxy hasn't been opened for the requested URL. Revisit the proxy configuration and make sure that the [URLs](https://support.office.com/article/Office-365-URLs-and-IP-address-ranges-8548a211-3fe7-47cb-abb1-355ea5aa88a2) have been opened. |
+| 407 |Proxy Authentication Required |The proxy server required a sign-in and none was provided. If your proxy server requires authentication, make sure that you configured this setting in *machine.config*. Also make sure that you're using domain accounts for the user running the wizard and for the service account. |
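For the 407 row, the following sketch shows one way to confirm which accounts are in play. The service name `ADSync` for the sync engine service is an assumption based on a default installation.

```powershell
# The user running the installation wizard:
whoami

# The account running the sync engine service (assumption: the service name is ADSync).
(Get-CimInstance -ClassName Win32_Service -Filter "Name='ADSync'").StartName
```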
### Proxy idle timeout setting
-When Azure AD Connect sends an export request to Azure AD, Azure AD can take up to 5 minutes to process the request before generating a response. This can happen especially if there are a number of group objects with large group memberships included in the same export request. Ensure the Proxy idle timeout is configured to be greater than 5 minutes. Otherwise, intermittent connectivity issue with Azure AD may be observed on the Azure AD Connect server.
-## The communication pattern between Azure AD Connect and Azure AD
-If you have followed all these preceding steps and still cannot connect, you might at this point start looking at network logs. This section is documenting a normal and successful connectivity pattern. It is also listing common red herrings that can be ignored when you are reading the network logs.
+When Azure AD Connect sends an export request to Azure AD, Azure AD can take up to 5 minutes to process the request before generating a response. The response is especially likely to be delayed if many group objects that have large group memberships are included in the same export request. Ensure that the proxy idle timeout is configured to be greater than 5 minutes. Otherwise, you might have intermittent connectivity issues with Azure AD on the Azure AD Connect server.
+
+## Communication pattern between Azure AD Connect and Azure AD
+
+If you've followed all the steps described in this article and you still can't connect, at this point you might look at network logs. This section describes a normal and successful connectivity pattern.
+
+But first, here are some common concerns about data in the network logs that you can ignore:
-* There are calls to `https://dc.services.visualstudio.com`. It is not required to have this URL open in the proxy for the installation to succeed and these calls can be ignored.
-* You see that dns resolution lists the actual hosts to be in the DNS name space nsatc.net and other namespaces not under microsoftonline.com. However, there are not any web service requests on the actual server names and you do not have to add these URLs to the proxy.
-* The endpoints adminwebservice and provisioningapi are discovery endpoints and used to find the actual endpoint to use. These endpoints are different depending on your region.
+- There are calls to `https://dc.services.visualstudio.com`. It's not required to have this URL open in the proxy for the installation to succeed, and these calls can be ignored.
+- You see that DNS resolution lists the actual hosts as being in the DNS namespace `nsatc.net` and other namespaces that aren't under `microsoftonline.com`. However, there aren't any web service requests on the actual server names. You don't have to add these URLs to the proxy.
+- The endpoints `adminwebservice` and `provisioningapi` are discovery endpoints, and they're used to find the actual endpoint to use. These endpoints are different depending on your region.
### Reference proxy logs
-Here is a dump from an actual proxy log and the installation wizard page from where it was taken (duplicate entries to the same endpoint have been removed). This section can be used as a reference for your own proxy and network logs. The actual endpoints might be different in your environment (in particular those URLs in *italic*).
+
+The following example is a dump from an actual proxy log and the installation wizard page from where it was taken (duplicate entries to the same endpoint have been removed). This section can be used as a reference for your own proxy and network logs. The actual endpoints might be different in your environment (in particular, the URLs in *italic*).
**Connect to Azure AD**

| Time | URL |
| --- | --- |
-| 1/11/2016 8:31 |connect://login.microsoftonline.com:443 |
+| 1/11/2016 8:31 |connect://login.microsoftonline.com:443 |
| 1/11/2016 8:31 |connect://adminwebservice.microsoftonline.com:443 |
| 1/11/2016 8:32 |connect://*bba800-anchor*.microsoftonline.com:443 |
| 1/11/2016 8:32 |connect://login.microsoftonline.com:443 |
Here is a dump from an actual proxy log and the installation wizard page from wh
| 1/11/2016 8:46 |connect://provisioningapi.microsoftonline.com:443 |
| 1/11/2016 8:46 |connect://*bwsc02-relay*.microsoftonline.com:443 |
-**Initial Sync**
+**Initial sync**
| Time | URL |
| --- | --- |
Here is a dump from an actual proxy log and the installation wizard page from wh
| 1/11/2016 8:49 |connect://*bba800-anchor*.microsoftonline.com:443 |

## Authentication errors
-This section covers errors that can be returned from ADAL (the authentication library used by Azure AD Connect) and PowerShell. The error explained should help you in understand your next steps.
-### Invalid Grant
-Invalid username or password. For more information, see [The password cannot be verified](#the-password-cannot-be-verified).
+This section covers errors that might be returned from ADAL (the authentication library that Azure AD Connect uses) and PowerShell. The error explanation should help you identify your next steps.
+
+### Invalid grant
+
+You entered an invalid username or password. For more information, see [The password can't be verified](#the-password-cant-be-verified).
+
+### Unknown user type
+
+Your Azure AD directory can't be found or resolved. You might have tried to sign in with a username in an unverified domain.
+
+### User realm discovery failed
-### Unknown User Type
-Your Azure AD directory cannot be found or resolved. Maybe you try to login with a username in an unverified domain?
+Network or proxy configuration issues. The network can't be reached. See [Connectivity issues in the installation wizard](#connectivity-issues-in-the-installation-wizard).
-### User Realm Discovery Failed
-Network or proxy configuration issues. The network cannot be reached. See [Troubleshoot connectivity issues in the installation wizard](#troubleshoot-connectivity-issues-in-the-installation-wizard).
+### User password expired
-### User Password Expired
Your credentials have expired. Change your password.
-### Authorization Failure
-Failed to authorize user to perform action in Azure AD.
+### Authorization failure
-### Authentication Canceled
-The multi-factor authentication (MFA) challenge was canceled.
+Azure AD Connect failed to authorize the user to perform an action in Azure AD.
+
+### Authentication canceled
+
+The MFA challenge was canceled.
<div id="connect-msolservice-failed"> <!--
The multi-factor authentication (MFA) challenge was canceled.
--> </div>
-### Connect To MSOnline Failed
+### Connect to MSOnline failed
+ Authentication was successful, but Azure AD PowerShell has an authentication problem. <div id="get-msoluserrole-failed">
Authentication was successful, but Azure AD PowerShell has an authentication pro
--> </div>
-### Azure AD Global Administrator Role Needed
-User was authenticated successfully. However user is not assigned Global Administrator role. This is [how you can assign Global Administrator role](../roles/permissions-reference.md) to the user.
+### Azure AD Global Administrator role needed
+
+The user was authenticated successfully, but the user isn't assigned the Global Administrator role. You can [assign the Global Administrator role](../roles/permissions-reference.md) to the user.
<div id="privileged-identity-management"> <!--
User was authenticated successfully. However user is not assigned Global Adminis
--> </div>
-### Privileged Identity Management Enabled
-Authentication was successful. Privileged identity management has been enabled and you are currently not a Hybrid Identity Administrator. For more information, see [Privileged Identity Management](../privileged-identity-management/pim-getting-started.md).
+### Privileged Identity Management enabled
+
+Authentication was successful, but Privileged Identity Management has been enabled and the user currently isn't a Hybrid Identity Administrator. For more information, see [Privileged Identity Management](../privileged-identity-management/pim-getting-started.md).
<div id="get-msolcompanyinformation-failed"> <!--
Authentication was successful. Privileged identity management has been enabled a
--> </div>
-### Company Information Unavailable
-Authentication was successful. Could not retrieve company information from Azure AD.
+### Company information unavailable
+
+Authentication was successful, but company information couldn't be retrieved from Azure AD.
<div id="get-msoldomain-failed"> <!--
Authentication was successful. Could not retrieve company information from Azure
--> </div>
-### Domain Information Unavailable
-Authentication was successful. Could not retrieve domain information from Azure AD.
+### Domain information unavailable
+
+Authentication was successful, but domain information couldn't be retrieved from Azure AD.
+
+### Unspecified authentication failure
+
+This error is shown as *Unexpected error* in the installation wizard. It might occur if you try to use a *Microsoft account* instead of a *school or organization account*.
+
+## Troubleshooting steps for earlier releases
-### Unspecified Authentication Failure
-Shown as Unexpected error in the installation wizard. Can happen if you try to use a **Microsoft Account** rather than a **school or organization account**.
+In releases starting with build number 1.1.105.0 (released February 2016), the sign-in assistant was retired. Configuring the sign-in assistant should no longer be required, but the information in the next sections is included for reference.
-## Troubleshooting steps for previous releases.
-With releases starting with build number 1.1.105.0 (released February 2016), the sign-in assistant was retired. This section and the configuration should no longer be required, but is kept as reference.
+For the single sign-in assistant to work, Microsoft Windows HTTP Services (WinHTTP) must be configured. You can configure WinHTTP by using [netsh](how-to-connect-install-prerequisites.md#connectivity).
-For the single-sign in assistant to work, winhttp must be configured. This configuration can be done with [**netsh**](how-to-connect-install-prerequisites.md#connectivity).
-![Screenshot shows a command prompt window running the netsh tool to set a proxy.](./media/tshoot-connect-connectivity/netsh.png)
-### The Sign-in assistant has not been correctly configured
-This error appears when the Sign-in assistant cannot reach the proxy or the proxy is not allowing the request.
-![Screenshot shows an error: Unable to validate credentials, Verify network connectivity and firewall or proxy settings.](./media/tshoot-connect-connectivity/nonetsh.png)
+### The sign-in assistant isn't configured correctly
-* If you see this error, look at the proxy configuration in [netsh](how-to-connect-install-prerequisites.md#connectivity) and verify it is correct.
- ![Screenshot shows a command prompt window running the netsh tool to show the proxy configuration.](./media/tshoot-connect-connectivity/netshshow.png)
-* If that looks correct, follow the steps in [Verify proxy connectivity](#verify-proxy-connectivity) to see if the issue is present outside the wizard as well.
+This error appears when the sign-in assistant can't reach the proxy or the proxy isn't allowing the request.
++
+If you see this error, look at the proxy configuration in [netsh](how-to-connect-install-prerequisites.md#connectivity) and verify that it's correct.
++
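You can also query and set the WinHTTP proxy by calling `netsh` from a PowerShell or command prompt. The server name and port in the second command are the Fabrikam example values used in this article.

```powershell
# Show the current WinHTTP proxy configuration.
netsh winhttp show proxy

# Point WinHTTP at the proxy (example values: fabrikamproxy, port 8080).
netsh winhttp set proxy proxy-server="fabrikamproxy:8080" bypass-list="<local>"
```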
+If the proxy configuration looks correct, complete the steps in [Verify proxy connectivity](#verify-proxy-connectivity) to see if the issue occurs outside the wizard.
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+
+Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Tshoot Connect Objectsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-objectsync.md
Title: 'Azure AD Connect: Troubleshoot object synchronization | Microsoft Docs'
-description: This topic provides steps for how to troubleshoot issues with object synchronization using the troubleshooting task.
+ Title: 'Azure AD Connect: Troubleshoot object synchronization'
+description: Learn how to troubleshoot issues with object synchronization by using the troubleshooting task.
# Troubleshoot object synchronization with Azure AD Connect sync
-This article provides steps for troubleshooting issues with object synchronization by using the troubleshooting task. To see how troubleshooting works in Azure Active Directory (Azure AD) Connect, watch [this short video](https://aka.ms/AADCTSVideo).
+
+This article provides steps for troubleshooting issues with object synchronization by using the troubleshooting task. To see how troubleshooting works in Azure AD Connect, watch a [short video](https://aka.ms/AADCTSVideo).
## Troubleshooting task
-For Azure AD Connect deployment with version 1.1.749.0 or higher, use the troubleshooting task in the wizard to troubleshoot object synchronization issues. For earlier versions, please troubleshoot manually as described [here](tshoot-connect-object-not-syncing.md).
+
+For Azure AD Connect deployments of version 1.1.749.0 or later, use the troubleshooting task in the wizard to troubleshoot object sync issues. For earlier versions, you can [troubleshoot manually](tshoot-connect-object-not-syncing.md).
### Run the troubleshooting task in the wizard
-To run the troubleshooting task in the wizard, perform the following steps:
-
-1. Open a new Windows PowerShell session on your Azure AD Connect server with the Run as Administrator option.
-2. Run `Set-ExecutionPolicy RemoteSigned` or `Set-ExecutionPolicy Unrestricted`.
-3. Start the Azure AD Connect wizard.
-4. Navigate to the Additional Tasks page, select Troubleshoot, and click Next.
-5. On the Troubleshooting page, click Launch to start the troubleshooting menu in PowerShell.
-6. In the main menu, select Troubleshoot Object Synchronization.
-![Troubleshoot object synchronization](media/tshoot-connect-objectsync/objsynch11.png)
-
-### Troubleshooting Input Parameters
-The following input parameters are needed by the troubleshooting task:
-1. **Object Distinguished Name** ΓÇô This is the distinguished name of the object that needs troubleshooting
-2. **AD Connector Name** ΓÇô This is the name of the AD forest where the above object resides.
-3. Azure AD tenant Hybrid Identity Administrator credentials
-![Hybrid Identity Administratoristrator credentials](media/tshoot-connect-objectsync/objsynch1.png)
+
+To run the troubleshooting task (a PowerShell sketch of the command-line steps follows the list):
+
+1. Open a new Windows PowerShell session on your Azure AD Connect server by using the Run as Administrator option.
+1. Run `Set-ExecutionPolicy RemoteSigned` or `Set-ExecutionPolicy Unrestricted`.
+1. Start the Azure AD Connect wizard.
+1. Go to **Additional Tasks** > **Troubleshoot**, and then select **Next**.
+1. On the **Troubleshooting** page, select **Launch** to start the troubleshooting menu in PowerShell.
+1. In the main menu, select **Troubleshoot Object Synchronization**.
++
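The command-line steps can be run as follows. The wizard path is an assumption based on a default Azure AD Connect installation.

```powershell
# Run from an elevated PowerShell session on the Azure AD Connect server.
Set-ExecutionPolicy RemoteSigned

# Start the Azure AD Connect wizard (assumption: default installation path).
Start-Process "C:\Program Files\Microsoft Azure Active Directory Connect\AzureADConnect.exe"
```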
+### Troubleshoot input parameters
+
+The troubleshooting task requires the following input parameters:
+
+- **Object Distinguished Name**: The distinguished name of the object that needs troubleshooting.
+- **AD Connector Name**: The name of the Windows Server Active Directory (Windows Server AD) forest where the object resides.
+- Azure Active Directory (Azure AD) tenant Hybrid Identity Administrator credentials.
+ ### Understand the results of the troubleshooting task+ The troubleshooting task performs the following checks:
-1. Detect UPN mismatch if the object is synced to Azure Active Directory
-2. Check if object is filtered due to domain filtering
-3. Check if object is filtered due to OU filtering
-4. Check if object synchronization is blocked due to a linked mailbox
-5. Check if object is dynamic distribution group which is not supposed to be synchronized
+- Detect user principal name (UPN) mismatch if the object is synced to Azure AD.
+- Check whether the object is filtered due to domain filtering.
+- Check whether the object is filtered due to organizational unit (OU) filtering.
+- Check whether sync of the object is blocked due to a linked mailbox.
+- Check whether the object is in a dynamic distribution group that isn't intended to be synced.
+
+The rest of the article describes specific results that are returned by the troubleshooting task. In each case, the task provides an analysis followed by recommended actions to resolve the issue.
+
+## Detect UPN mismatch if the object is synced to Azure AD
+
+Check for the UPN mismatch issues that are described in the next sections.
-The rest of this section describes specific results that are returned by the task. In each case, the task provides an analysis followed by recommended actions to resolve the issue.
+### UPN suffix is not verified with the Azure AD tenant
-## Detect UPN mismatch if object is synced to Azure Active Directory
-### UPN Suffix is NOT verified with Azure AD Tenant
-When UserPrincipalName (UPN)/Alternate Login ID suffix is not verified with the Azure AD Tenant, then Azure Active Directory replaces the UPN suffixes with the default domain name "onmicrosoft.com".
+When the UPN or alternate login ID suffix isn't verified with the Azure AD tenant, Azure AD replaces the UPN suffixes with the default domain name `onmicrosoft.com`.
-![Azure AD replaces UPN](media/tshoot-connect-objectsync/objsynch2.png)
-### Azure AD Tenant DirSync Feature ΓÇÿSynchronizeUpnForManagedUsersΓÇÖ is disabled
-When the Azure AD Tenant DirSync Feature ΓÇÿSynchronizeUpnForManagedUsersΓÇÖ is disabled, Azure Active Directory does not allow synchronization updates to UserPrincipalName/Alternate Login ID for licensed user accounts with managed authentication.
+### Azure AD tenant DirSync feature SynchronizeUpnForManagedUsers is disabled
-![SynchronizeUpnForManagedUsers](media/tshoot-connect-objectsync/objsynch4.png)
+When the Azure AD tenant DirSync feature SynchronizeUpnForManagedUsers is disabled, Azure AD doesn't allow sync updates to the UPN or alternate login ID for licensed user accounts that use managed authentication.
+ ## Object is filtered due to domain filtering+
+Check for the domain filtering issues that are described in the next sections.
+ ### Domain is not configured to sync
-Object is out of scope due to domain not being configured. In the example below, the object is out of sync scope as the domain that it belongs to is filtered from synchronization.
-![Domain is not configured to sync](media/tshoot-connect-objectsync/objsynch5.png)
+The object is out of scope because the domain hasn't been configured. In the example in the following figure, the object is out of sync scope because the domain that it belongs to is filtered from sync.
++
+### Domain is configured to sync but is missing run profiles or run steps
+
+The object is out of scope because the domain is missing run profiles or run steps. In the example in the following figure, the object is out of sync scope because the domain that it belongs to is missing run steps for the Full Import run profile.
-### Domain is configured to sync but is missing run profiles/run steps
-Object is out of scope as the domain is missing run profiles/run steps. In the example below, the object is out of sync scope as the domain that it belongs to is missing run steps for the Full Import run profile.
-![missing run profiles](media/tshoot-connect-objectsync/objsynch6.png)
## Object is filtered due to OU filtering
-The object is out of sync scope due to OU filtering configuration. In the example below, the object belongs to OU=NoSync,DC=bvtadwbackdc,DC=com. This OU is not included in sync scope.</br>
-![OU](./media/tshoot-connect-objectsync/objsynch7.png)
+The object is out of sync scope because of the OU filtering configuration. In the example in the following figure, the object belongs to `OU=NoSync,DC=bvtadwbackdc,DC=com`. This OU is not included in the sync scope.
-## Linked Mailbox issue
-A linked mailbox is supposed to be associated with an external master account located in another trusted account forest. If there is no such external master account, then Azure AD Connect will not synchronize the user account corresponds to the linked mailbox in the Exchange forest to the Azure AD tenant.</br>
-![Linked Mailbox](./media/tshoot-connect-objectsync/objsynch12.png)
-## Dynamic Distribution Group issue
-Due to various differences between on-premises Active Directory and Azure Active Directory, Azure AD Connect does not synchronize dynamic distribution groups to the Azure AD tenant.
+## Linked mailbox issue
-![Dynamic Distribution Group](./media/tshoot-connect-objectsync/objsynch13.png)
+A linked mailbox is supposed to be associated with an external primary account that's located in a different trusted account forest. If the primary account doesn't exist, Azure AD Connect doesn't sync the user account that corresponds to the linked mailbox in the Exchange forest to the Azure AD tenant.
-## HTML Report
-In addition to analyzing the object, the troubleshooting task also generates an HTML report that has everything known about the object. This HTML report can be shared with support team to do further troubleshooting, if needed.
-![HTML Report](media/tshoot-connect-objectsync/objsynch8.png)
+## Dynamic distribution group issue
+
+Due to various differences between on-premises Windows Server AD and Azure AD, Azure AD Connect doesn't sync dynamic distribution groups to the Azure AD tenant.
++
+## HTML report
+
+In addition to analyzing the object, the troubleshooting task generates an HTML report that includes everything that's known about the object. The HTML report can be shared with the support team for further troubleshooting if needed.
+ ## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
+
+Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Whatis Aadc Admin Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-aadc-admin-agent.md
Title: 'What is the Azure AD Connect Admin Agent - Azure AD Connect | Microsoft Docs'
-description: Describes the tools used to synchronize and monitor your on-premises environment with Azure AD.
+ Title: 'What is the Azure AD Connect Administration Agent - Azure AD Connect'
+description: Describes the tools that are used to synchronize and monitor your on-premises environment with Azure AD.
-# What is the Azure AD Connect Admin Agent?
+# What is the Azure AD Connect Administration Agent?
->[!NOTE]
->The Azure AD Connect Admin Agent is no longer part of the Azure AD Connect installation and cannot be used with Azure AD Connect versions 2.1.12.0 and newer.
+The Azure AD Connect Administration Agent is a component of Azure AD Connect that can be installed on an Azure AD Connect server. The agent is used to collect specific data from your hybrid Active Directory environment. The collected data helps a Microsoft support engineer troubleshoot issues when you open a support case.
-The Azure AD Connect Administration Agent is a new component of Azure Active Directory Connect that can be installed on an Azure Active Directory Connect server. It is used to collect specific data from your Active Directory environment that helps a Microsoft support engineer to troubleshoot issues when you open a support case.
+> [!NOTE]
+> The Azure AD Connect Administration Agent is no longer part of the Azure AD Connect installation, and it can't be used with Azure AD Connect version 2.1.12.0 or later.
->[!NOTE]
->The admin agent is not installed and enabled by default. You must install the agent in order to collect data to assist with support cases.
+The Azure AD Connect Administration Agent waits for specific requests for data from Azure Active Directory (Azure AD). The agent then takes the requested data from the sync environment and sends it to Azure AD, where it's presented to the Microsoft support engineer.
-The Azure AD Connect Administration Agent waits for specific requests for data from Azure Active Directory. The agent then takes the requested data from the sync environment and sends it to Azure AD, where it is presented to the Microsoft support engineer.
+The information that the Azure AD Connect Administration Agent retrieves from your environment isn't stored. The information is shown only to the Microsoft support engineer to help them investigate and troubleshoot an Azure AD Connect-related support case.
-The information that the Azure AD Connect Administration Agent retrieves from your environment is not stored. The information is only displayed to the Microsoft support engineer to assist them in investigating and troubleshooting the Azure Active Directory Connect related support case.
-The Azure AD Connect Administration Agent is not installed on the Azure AD Connect Server by default.
+By default, the Azure AD Connect Administration Agent isn't installed on the Azure AD Connect server. To assist with support cases, you must install the agent to collect data.
-## Install the Azure AD Connect Administration Agent on the Azure AD Connect server
+## Install the Azure AD Connect Administration Agent
+
+To install the Azure AD Connect Administration Agent on the Azure AD Connect server, first make sure that you meet the prerequisites, and then install the agent.
Prerequisites:
-1. Azure AD Connect is installed on the server
-2. Azure AD Connect Health is installed on the server
-![admin agent](media/whatis-aadc-admin-agent/adminagent0.png)
+- Azure AD Connect is installed on the server.
+- Azure AD Connect Health is installed on the server.
++
+The Azure AD Connect Administration Agent binaries are placed on the Azure AD Connect server.
-The Azure AD Connect Administration Agent binaries are placed in the Azure AD Connect server. To install the agent, use the following steps:
+To install the agent (a combined command sketch follows these steps):
-1. Open PowerShell in admin mode
-2. Navigate to the directory where the application is located cd "C:\Program Files\Microsoft Azure Active Directory Connect\Tools"
-3. Run ConfigureAdminAgent.ps1
+1. Open PowerShell as administrator.
+1. Go to the directory where the application is located: `cd "C:\Program Files\Microsoft Azure Active Directory Connect\Tools"`.
+1. Run `ConfigureAdminAgent.ps1`.
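Combined, the commands look like the following sketch. The path is the default installation location mentioned in the steps.

```powershell
# Run from an elevated PowerShell session on the Azure AD Connect server.
cd "C:\Program Files\Microsoft Azure Active Directory Connect\Tools"
.\ConfigureAdminAgent.ps1
```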
-When prompted, please enter your Azure AD Hybrid Identity Administrator credentials. These credentials should be the same credentials entered during Azure AD Connect installation.
+When prompted, enter your Azure AD Hybrid Identity Administrator credentials. These credentials should be the same credentials you entered during Azure AD Connect installation.
-After the agent is installed, you'll see the following two new programs in the "Add/Remove Programs" list in the Control Panel of your server:
+After the agent is installed, you'll see the following two new programs in **Add/Remove Programs** in Control Panel on your server:
-![Screenshot that shows the Add/Remove Programs list that includes the new programs you added.](media/whatis-aadc-admin-agent/adminagent1.png)
-## What data in my Sync service is shown to the Microsoft service engineer?
-When you open a support case, the Microsoft Support Engineer can see, for a given user:
+## What data in my sync service is visible to the Microsoft support engineer?
- - the relevant data in Active Directory
- - the Active Directory connector space in the Azure Active Directory Connect server
- - the Azure Active Directory connector space in the Azure Active Directory Connect server
- - the Metaverse in the Azure Active Directory Connect server.
+When you open a support case, the Microsoft support engineer can see this information for a specific user:
-The Microsoft Support Engineer cannot change any data in your system and cannot see any passwords.
+- The relevant data in Windows Server Active Directory (Windows Server AD).
+- The Windows Server AD connector space on the Azure AD Connect server.
+- The Azure AD connector space on the Azure AD Connect server.
+- The metaverse in the Azure AD Connect server.
-## What if I don't want the Microsoft support engineer to access my data?
-Once the agent is installed, if you do not want the Microsoft service engineer to access your data for a support call, you can disable the functionality by modifying the service config file as described below:
+The Microsoft support engineer can't change any data in your system, and they can't see any passwords.
-1. Open **C:\Program Files\Microsoft Azure AD Connect Administration Agent\AzureADConnectAdministrationAgentService.exe.config** in notepad.
-2. Disable **UserDataEnabled** setting as shown below. If **UserDataEnabled** setting exists and is set to true, then set it to false. If the setting does not exist, then add the setting as shown below.
+## What if I don't want the Microsoft support engineer to access my data?
+
+After the agent is installed, if you don't want the Microsoft support engineer to access your data for a support call, you can disable the functionality by modifying the service config file:
+
+1. In Notepad, open *C:\Program Files\Microsoft Azure AD Connect Administration Agent\AzureADConnectAdministrationAgentService.exe.config*.
+1. Disable the **UserDataEnabled** setting as shown in the following example. If the **UserDataEnabled** setting exists and is set to **true**, set it to **false**. If the setting doesn't exist, add the setting.
```xml <appSettings>
Once the agent is installed, if you do not want the Microsoft service engineer t
</appSettings> ```
-3. Save the config file.
-4. Restart Azure AD Connect Administration Agent service as shown below
+1. Save the config file.
+1. Restart the Azure AD Connect Administration Agent service as shown in the following figure (or use the command sketch after the figure):
-![Screenshot that shows where to restart the Azure AD Administrator Agent service.](media/whatis-aadc-admin-agent/adminagent2.png)
+ :::image type="content" source="media/whatis-aadc-admin-agent/adminagent2.png" alt-text="Screenshot that shows how to restart the Azure AD Connect Administrator Agent service.":::
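If you prefer PowerShell over the Services console, the following sketch restarts the agent service. The display name filter is an assumption; adjust it to match the name shown in the Services console.

```powershell
# Restart the Administration Agent service (assumption: display name starts with this prefix).
Get-Service -DisplayName "Azure AD Connect Administration Agent*" | Restart-Service
```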
## Next steps
-Learn more about [Integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md)
+
+Learn more about [integrating your on-premises identities with Azure Active Directory](whatis-hybrid-identity.md).
active-directory Manage Self Service Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/manage-self-service-access.md
Using this feature, you can:
To enable self-service application access, you need: - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- One of the following roles: Global Administrator, Cloud Application Administrator, Application Administrator, or owner of the service principal.
+- One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.
- An Azure Active Directory Premium (P1 or P2) license is required for users to request to join a self-service app and for owners to approve or deny requests. Without an Azure Active Directory Premium license, users can't add self-service apps. ## Enable self-service application access to allow users to find their own applications
active-directory How To Assign App Role Managed Identity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-powershell.md
Connect-AzureAD -TenantId $tenantID
# Look up the details about the server app's service principal and app role. $serverServicePrincipal = (Get-AzureADServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
-$serverServicePrincipalObjectId = $serverServicePrincipal.Id
+$serverServicePrincipalObjectId = $serverServicePrincipal.ObjectId
$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id # Assign the managed identity access to the app role.
Connect-MgGraph -TenantId $tenantId -Scopes 'Application.Read.All','Application.
# Look up the details about the server app's service principal and app role. $serverServicePrincipal = (Get-MgServicePrincipal -Filter "DisplayName eq '$serverApplicationName'")
-$serverServicePrincipalObjectId = $serverServicePrincipal.Id
+$serverServicePrincipalObjectId = $serverServicePrincipal.ObjectId
$appRoleId = ($serverServicePrincipal.AppRoles | Where-Object {$_.Value -eq $appRoleName }).Id # Assign the managed identity access to the app role.
active-directory Concept Provisioning Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-provisioning-logs.md
Use the following table to better understand how to resolve errors that you find
> | AzureActiveDirectoryCannotUpdateObjectsOriginatedInExternalService | The synchronization engine could not update one or more user properties in the target tenant.<br/><br/>The operation failed in Microsoft Graph API because of Source of Authority (SOA) enforcement. Currently, the following properties show up in the list:<br/>`Mail`<br/>`showInAddressList` | In some cases (for example when `showInAddressList` property is part of the user update), the synchronization engine might automatically retry the (user) update without the offending property. Otherwise, you will need to update the property directly in the target tenant. | > | AzureDirectoryB2BManagementPolicyCheckFailure | The cross-tenant synchronization policy allowing automatic redemption failed.<br/><br/>The synchronization engine checks to ensure that the administrator of the target tenant has created an inbound cross-tenant synchronization policy allowing automatic redemption. The synchronization engine also checks if the administrator of the source tenant has enabled an outbound policy for automatic redemption. | Ensure that the automatic redemption setting has been enabled for both the source and target tenants. For more information, see [Automatic redemption setting](../multi-tenant-organizations/cross-tenant-synchronization-overview.md#automatic-redemption-setting). | > | AzureActiveDirectoryQuotaLimitExceeded | The number of objects in the tenant exceeds the directory limit.<br/><br/>Azure AD has limits for the number of objects that can be created in a tenant. | Check whether the quota can be increased. For information about the directory limits and steps to increase the quota, see [Azure AD service limits and restrictions](../enterprise-users/directory-service-limits-restrictions.md). |
+> | InvitationCreationFailure | The Azure AD provisioning service attempted to invite the user in the target tenant, but the invitation failed. | Navigate to the user settings page in Azure AD > external users > collaboration restrictions, and ensure that collaboration with that tenant is enabled. |
## Next steps
active-directory Permissions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/permissions-reference.md
User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_chec
User<br/>(no admin role, but member or owner of a role-assignable group) | &nbsp; | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: User Admin | &nbsp; | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Usage Summary Reports Reader | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
\* A Global Administrator cannot remove their own Global Administrator assignment. This is to prevent a situation where an organization has 0 Global Administrators.
User<br/>(no admin role) | :heavy_check_mark: | :heavy_check_mark: | :heavy_chec
User<br/>(no admin role, but member or owner of a role-assignable group) | &nbsp; | &nbsp; | :heavy_check_mark: | :heavy_check_mark: User Admin | &nbsp; | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: Usage Summary Reports Reader | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
+All custom roles | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark:
## Next steps
active-directory Benchling Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/benchling-tutorial.md
Previously updated : 11/21/2022 Last updated : 02/09/2023
Follow these steps to enable Azure AD SSO in the Azure portal.
1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode: In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<SUBDOMAIN>.benchling.com`
+ `https://<SUBDOMAIN>.benchling.com/ext/saml/signin:begin`
> [!NOTE] > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Benchling Client support team](mailto:support@benchling.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
aks Azure Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-netapp-files.md
Title: Integrate Azure NetApp Files with Azure Kubernetes Service | Microsoft Docs
-description: Learn how to provision Azure NetApp Files with Azure Kubernetes Service.
-
+ Title: Provision Azure NetApp Files volumes on Azure Kubernetes Service
+description: Learn how to provision Azure NetApp Files volumes on an Azure Kubernetes Service cluster.
Previously updated : 10/18/2021-
-#Customer intent: As a cluster operator or developer, I want to learn how to use Azure NetApp Files to provision volumes for Kubernetes environments.
Last updated : 02/08/2023
-# Integrate Azure NetApp Files with Azure Kubernetes Service
-
-A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to create [Azure NetApp Files][anf] volumes to be used by pods in an Azure Kubernetes Service (AKS) cluster.
+# Provision Azure NetApp Files volumes on Azure Kubernetes Service
-[Azure NetApp Files][anf] is an enterprise-class, high-performance, metered file storage service running on Azure. Kubernetes users have two options when it comes to using Azure NetApp Files volumes for Kubernetes workloads:
+A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to create [Azure NetApp Files][anf] volumes to be used by pods on an Azure Kubernetes Service (AKS) cluster.
-* Create Azure NetApp Files volumes **statically**: In this scenario, the creation of volumes is achieved external to AKS; volumes are created using `az`/Azure UI and are then exposed to Kubernetes by the creation of a `PersistentVolume`. Statically created Azure NetApp Files volumes have lots of limitations (for example, inability to be expanded, needing to be over-provisioned, and so on) and are not recommended for most use cases.
-* Create Azure NetApp Files volumes **on-demand**, orchestrating through Kubernetes: This method is the **preferred** mode of operation for creating multiple volumes directly through Kubernetes and is achieved using [Astra Trident](https://docs.netapp.com/us-en/trident/https://docsupdatetracker.net/index.html). Astra Trident is a CSI-compliant dynamic storage orchestrator that helps provision volumes natively through Kubernetes.
+[Azure NetApp Files][anf] is an enterprise-class, high-performance, metered file storage service running on Azure. Kubernetes users have two options for using Azure NetApp Files volumes for Kubernetes workloads:
-Using a CSI driver to directly consume Azure NetApp Files volumes from AKS workloads is **highly recommended** for most use cases. This requirement is fulfilled using Astra Trident, an open-source dynamic storage orchestrator for Kubernetes. Astra Trident is an enterprise-grade storage orchestrator purpose-built for Kubernetes, fully supported by NetApp. It simplifies access to storage from Kubernetes clusters by automating storage provisioning. You can take advantage of Astra Trident's Container Storage Interface (CSI) driver for Azure NetApp Files to abstract underlying details and create, expand, and snapshot volumes on-demand. Also, using Astra Trident enables you to use [Astra Control Service](https://cloud.netapp.com/astra-control) built on top of Astra Trident to backup, recover, move, and manage the application-data lifecycle of your AKS workloads across clusters within and across Azure regions to meet your business and service continuity needs.
-
-## Before you begin
+* Create Azure NetApp Files volumes **statically**. In this scenario, the creation of volumes is external to AKS. Volumes are created using the Azure CLI or from the Azure portal, and are then exposed to Kubernetes by the creation of a `PersistentVolume`. Statically created Azure NetApp Files volumes have many limitations (for example, inability to be expanded, needing to be over-provisioned, and so on). Statically created volumes are not recommended for most use cases.
+* Create Azure NetApp Files volumes **on-demand**, orchestrating through Kubernetes. This method is the **preferred** way to create multiple volumes directly through Kubernetes, and is achieved using [Astra Trident][astra-trident]. Astra Trident is a CSI-compliant dynamic storage orchestrator that helps provision volumes natively through Kubernetes.
-This article assumes that you have an existing AKS cluster. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
+Using a CSI driver to directly consume Azure NetApp Files volumes from AKS workloads is the recommended configuration for most use cases. This requirement is accomplished using Astra Trident, an open-source dynamic storage orchestrator for Kubernetes. Astra Trident is an enterprise-grade storage orchestrator purpose-built for Kubernetes, and fully supported by NetApp. It simplifies access to storage from Kubernetes clusters by automating storage provisioning.
-> [!IMPORTANT]
-> Your AKS cluster must also be [in a region that supports Azure NetApp Files][anf-regions].
+You can take advantage of Astra Trident's Container Storage Interface (CSI) driver for Azure NetApp Files to abstract underlying details and create, expand, and snapshot volumes on-demand. Also, using Astra Trident enables you to use [Astra Control Service][astra-control-service] built on top of Astra Trident. Using the Astra Control Service, you can backup, recover, move, and manage the application-data lifecycle of your AKS workloads across clusters within and across Azure regions to meet your business and service continuity needs.
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-### Prerequisites
+## Before you begin
The following considerations apply when you use Azure NetApp Files:
-* Azure NetApp Files is only available [in selected Azure regions][anf-regions].
+* Your AKS cluster must be [in a region that supports Azure NetApp Files][anf-regions].
+* You need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
* After the initial deployment of an AKS cluster, you can choose to provision Azure NetApp Files volumes statically or dynamically.
-* To use dynamic provisioning with Azure NetApp Files, install and configure [Astra Trident](https://docs.netapp.com/us-en/trident/https://docsupdatetracker.net/index.html) version 19.07 or later.
+* To use dynamic provisioning with Azure NetApp Files, install and configure [Astra Trident][astra-trident] version 19.07 or higher.
## Configure Azure NetApp Files
-Register the *Microsoft.NetApp* resource provider:
-
-```azurecli
-az provider register --namespace Microsoft.NetApp --wait
-```
-
-> [!NOTE]
-> This can take some time to complete.
-
-When you create an Azure NetApp account for use with AKS, you can create the account in an existing resource group or create a new one in the same region as the AKS cluster.
-The following example creates an account named *myaccount1* in the *myResourceGroup* resource group and *eastus* region:
-
-```azurecli
-az netappfiles account create \
- --resource-group myResourceGroup \
- --location eastus \
- --account-name myaccount1
-```
-
-Create a new capacity pool by using [az netappfiles pool create][az-netappfiles-pool-create]. The following example creates a new capacity pool named *mypool1* with 4 TB in size and *Premium* service level:
-
-```azurecli
-az netappfiles pool create \
- --resource-group myResourceGroup \
- --location eastus \
- --account-name myaccount1 \
- --pool-name mypool1 \
- --size 4 \
- --service-level Premium
-```
-
-Create a subnet to [delegate to Azure NetApp Files][anf-delegate-subnet] using [az network vnet subnet create][az-network-vnet-subnet-create]. *This subnet must be in the same virtual network as your AKS cluster.*
-
-```azurecli
-RESOURCE_GROUP=myResourceGroup
-VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
-VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-SUBNET_NAME=MyNetAppSubnet
-az network vnet subnet create \
- --resource-group $RESOURCE_GROUP \
- --vnet-name $VNET_NAME \
- --name $SUBNET_NAME \
- --delegations "Microsoft.NetApp/volumes" \
- --address-prefixes 10.0.0.0/28
-```
-
-Volumes can either be provisioned statically or dynamically. Both options are covered in detail below.
+1. Register the *Microsoft.NetApp* resource provider by running the following command:
+
+ ```azurecli
+ az provider register --namespace Microsoft.NetApp --wait
+ ```
+
+ > [!NOTE]
+ > This operation can take several minutes to complete.
+
+2. When you create an Azure NetApp account for use with AKS, you can create the account in an existing resource group or create a new one in the same region as the AKS cluster.
+The following command creates an account named *myaccount1* in the *myResourceGroup* resource group and *eastus* region:
+
+ ```azurecli
+ az netappfiles account create \
+ --resource-group myResourceGroup \
+ --location eastus \
+ --account-name myaccount1
+ ```
+
+3. Create a new capacity pool by using [az netappfiles pool create][az-netappfiles-pool-create]. The following example creates a new capacity pool named *mypool1* with 4 TB in size and *Premium* service level:
+
+ ```azurecli
+ az netappfiles pool create \
+ --resource-group myResourceGroup \
+ --location eastus \
+ --account-name myaccount1 \
+ --pool-name mypool1 \
+ --size 4 \
+ --service-level Premium
+ ```
+
+4. Create a subnet to [delegate to Azure NetApp Files][anf-delegate-subnet] using [az network vnet subnet create][az-network-vnet-subnet-create]. In the following commands, update `RESOURCE_GROUP` to the resource group that hosts the existing virtual network for your AKS cluster.
+
+ > [!NOTE]
+ > This subnet must be in the same virtual network as your AKS cluster.
+
+ ```azurecli
+ RESOURCE_GROUP=myResourceGroup
+ VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
+ VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+ SUBNET_NAME=MyNetAppSubnet
+ az network vnet subnet create \
+ --resource-group $RESOURCE_GROUP \
+ --vnet-name $VNET_NAME \
+ --name $SUBNET_NAME \
+ --delegations "Microsoft.NetApp/volumes" \
+ --address-prefixes 10.0.0.0/28
+ ```
+
+    Volumes can be provisioned either statically or dynamically. Both options are covered in the following sections.
## Provision Azure NetApp Files volumes statically
-Create a volume by using [az netappfiles volume create][az-netappfiles-volume-create].
-
-```azurecli
-RESOURCE_GROUP=myResourceGroup
-LOCATION=eastus
-ANF_ACCOUNT_NAME=myaccount1
-POOL_NAME=mypool1
-SERVICE_LEVEL=Premium
-VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
-VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
-SUBNET_NAME=MyNetAppSubnet
-SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv)
-VOLUME_SIZE_GiB=100 # 100 GiB
-UNIQUE_FILE_PATH="myfilepath2" # Note that file path needs to be unique within all ANF Accounts
-
-az netappfiles volume create \
- --resource-group $RESOURCE_GROUP \
- --location $LOCATION \
- --account-name $ANF_ACCOUNT_NAME \
- --pool-name $POOL_NAME \
- --name "myvol1" \
- --service-level $SERVICE_LEVEL \
- --vnet $VNET_ID \
- --subnet $SUBNET_ID \
- --usage-threshold $VOLUME_SIZE_GiB \
- --file-path $UNIQUE_FILE_PATH \
- --protocol-types "NFSv3"
-```
-
-### Create the PersistentVolume
-
-List the details of your volume using [az netappfiles volume show][az-netappfiles-volume-show]
-
-```azurecli
-az netappfiles volume show \
- --resource-group $RESOURCE_GROUP \
- --account-name $ANF_ACCOUNT_NAME \
- --pool-name $POOL_NAME \
- --volume-name "myvol1" -o JSON
-```
-
-```output
-{
- ...
- "creationToken": "myfilepath2",
- ...
- "mountTargets": [
+1. Create a volume using the [az netappfiles volume create][az-netappfiles-volume-create] command. Update `RESOURCE_GROUP`, `LOCATION`, `ANF_ACCOUNT_NAME` (Azure NetApp account name), `POOL_NAME`, and `SERVICE_LEVEL` with the correct values.
+
+ ```azurecli
+ RESOURCE_GROUP=myResourceGroup
+ LOCATION=eastus
+ ANF_ACCOUNT_NAME=myaccount1
+ POOL_NAME=mypool1
+ SERVICE_LEVEL=Premium
+ VNET_NAME=$(az network vnet list --resource-group $RESOURCE_GROUP --query [].name -o tsv)
+ VNET_ID=$(az network vnet show --resource-group $RESOURCE_GROUP --name $VNET_NAME --query "id" -o tsv)
+ SUBNET_NAME=MyNetAppSubnet
+ SUBNET_ID=$(az network vnet subnet show --resource-group $RESOURCE_GROUP --vnet-name $VNET_NAME --name $SUBNET_NAME --query "id" -o tsv)
+ VOLUME_SIZE_GiB=100 # 100 GiB
+ UNIQUE_FILE_PATH="myfilepath2" # Note that file path needs to be unique within all ANF Accounts
+
+ az netappfiles volume create \
+ --resource-group $RESOURCE_GROUP \
+ --location $LOCATION \
+ --account-name $ANF_ACCOUNT_NAME \
+ --pool-name $POOL_NAME \
+ --name "myvol1" \
+ --service-level $SERVICE_LEVEL \
+ --vnet $VNET_ID \
+ --subnet $SUBNET_ID \
+ --usage-threshold $VOLUME_SIZE_GiB \
+ --file-path $UNIQUE_FILE_PATH \
+ --protocol-types "NFSv3"
+ ```
+
+### Create the persistent volume
+
+1. List the details of your volume using the [az netappfiles volume show][az-netappfiles-volume-show] command:
+
+ ```azurecli
+ az netappfiles volume show \
+ --resource-group $RESOURCE_GROUP \
+ --account-name $ANF_ACCOUNT_NAME \
+ --pool-name $POOL_NAME \
+ --volume-name "myvol1" -o JSON
+ ```
+
+    The output of the command resembles the following example:
+
+ ```console
{ ...
- "ipAddress": "10.0.0.4",
+ "creationToken": "myfilepath2",
+ ...
+ "mountTargets": [
+ {
+ ...
+ "ipAddress": "10.0.0.4",
+ ...
+ }
+ ],
... }
- ],
- ...
-}
-```
-
-Create a `pv-nfs.yaml` defining a PersistentVolume. Replace `path` with the *creationToken* and `server` with *ipAddress* from the previous command. For example:
-
-```yaml
-
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: pv-nfs
-spec:
- capacity:
- storage: 100Gi
- accessModes:
- - ReadWriteMany
- mountOptions:
- - vers=3
- nfs:
- server: 10.0.0.4
- path: /myfilepath2
-```
-
-Update the *server* and *path* to the values of your NFS (Network File System) volume you created in the previous step. Create the PersistentVolume with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f pv-nfs.yaml
-```
-
-Verify the *Status* of the PersistentVolume is *Available* using the [kubectl describe][kubectl-describe] command:
-
-```console
-kubectl describe pv pv-nfs
-```
-
-### Create the PersistentVolumeClaim
-
-Create a `pvc-nfs.yaml` defining a PersistentVolume. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: pvc-nfs
-spec:
- accessModes:
- - ReadWriteMany
- storageClassName: ""
- resources:
- requests:
- storage: 1Gi
-```
-
-Create the PersistentVolumeClaim with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f pvc-nfs.yaml
-```
-
-Verify the *Status* of the PersistentVolumeClaim is *Bound* using the [kubectl describe][kubectl-describe] command:
-
-```console
-kubectl describe pvc pvc-nfs
-```
+ ```
+
+2. Create a `pv-nfs.yaml` defining a persistent volume by copying the following manifest. Replace `path` with the *creationToken* and `server` with *ipAddress* from the previous step.
+
+ ```yaml
+
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-nfs
+ spec:
+ capacity:
+ storage: 100Gi
+ accessModes:
+ - ReadWriteMany
+ mountOptions:
+ - vers=3
+ nfs:
+ server: 10.0.0.4
+ path: /myfilepath2
+ ```
+
+3. Create the persistent volume using the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f pv-nfs.yaml
+ ```
+
+4. Verify the *Status* of the PersistentVolume is *Available* using the [kubectl describe][kubectl-describe] command:
+
+ ```bash
+ kubectl describe pv pv-nfs
+ ```
+
+### Create a persistent volume claim
+
+1. Create a `pvc-nfs.yaml` defining a persistent volume claim by copying the following manifest:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: pvc-nfs
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: ""
+ resources:
+ requests:
+ storage: 1Gi
+ ```
+
+2. Create the persistent volume claim using the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f pvc-nfs.yaml
+ ```
+
+3. Verify the *Status* of the persistent volume claim is *Bound* using the [kubectl describe][kubectl-describe] command:
+
+ ```bash
+ kubectl describe pvc pvc-nfs
+ ```
### Mount with a pod
-Create a `nginx-nfs.yaml` defining a pod that uses the PersistentVolumeClaim. For example:
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: nginx-nfs
-spec:
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: nginx-nfs
- command:
- - "/bin/sh"
- - "-c"
- - while true; do echo $(date) >> /mnt/azure/outfile; sleep 1; done
- volumeMounts:
- - name: disk01
- mountPath: /mnt/azure
- volumes:
- - name: disk01
- persistentVolumeClaim:
- claimName: pvc-nfs
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f nginx-nfs.yaml
-```
-
-Verify the pod is *Running* using the [kubectl describe][kubectl-describe] command:
-
-```console
-kubectl describe pod nginx-nfs
-```
-
-Verify your volume has been mounted in the pod by using [kubectl exec][kubectl-exec] to connect to the pod then `df -h` to check if the volume is mounted.
-
-```console
-$ kubectl exec -it nginx-nfs -- sh
-```
-
-```output
-/ # df -h
-Filesystem Size Used Avail Use% Mounted on
-...
-10.0.0.4:/myfilepath2 100T 384K 100T 1% /mnt/azure
-...
-```
+1. Create a file named `nginx-nfs.yaml` that defines a pod using the persistent volume claim, by copying the following manifest:
+
+ ```yaml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: nginx-nfs
+ spec:
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: nginx-nfs
+ command:
+ - "/bin/sh"
+ - "-c"
+ - while true; do echo $(date) >> /mnt/azure/outfile; sleep 1; done
+ volumeMounts:
+ - name: disk01
+ mountPath: /mnt/azure
+ volumes:
+ - name: disk01
+ persistentVolumeClaim:
+ claimName: pvc-nfs
+ ```
+
+2. Create the pod using the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f nginx-nfs.yaml
+ ```
+
+3. Verify the pod is *Running* using the [kubectl describe][kubectl-describe] command:
+
+ ```bash
+ kubectl describe pod nginx-nfs
+ ```
+
+4. Verify your volume has been mounted on the pod by using [kubectl exec][kubectl-exec] to connect to the pod, and then use `df -h` to check if the volume is mounted.
+
+ ```bash
+ kubectl exec -it nginx-nfs -- sh
+ ```
+
+ ```console
+ / # df -h
+ Filesystem Size Used Avail Use% Mounted on
+ ...
+ 10.0.0.4:/myfilepath2 100T 384K 100T 1% /mnt/azure
+ ...
+ ```
## Provision Azure NetApp Files volumes dynamically ### Install and configure Astra Trident
-To dynamically provision volumes, you need to install Astra Trident. Astra Trident is NetApp's dynamic storage provisioner that is purpose-built for Kubernetes. Simplify the consumption of storage for Kubernetes applications using Astra Trident's industry-standard [Container Storage Interface (CSI)](https://kubernetes-csi.github.io/docs/) drivers. Astra Trident deploys in Kubernetes clusters as pods and provides dynamic storage orchestration services for your Kubernetes workloads.
+To dynamically provision volumes, you need to install Astra Trident. Astra Trident is NetApp's dynamic storage provisioner that is purpose-built for Kubernetes. It simplifies the consumption of storage for Kubernetes applications using the industry-standard [Container Storage Interface (CSI)][kubernetes-csi-driver] driver. Astra Trident deploys on Kubernetes clusters as pods and provides dynamic storage orchestration services for your Kubernetes workloads.
-You can learn more from the [documentation]https://docs.netapp.com/us-en/trident/https://docsupdatetracker.net/index.html).
+Before proceeding to the next section, you need to:
-Before proceeding to the next step, you will need to:
-
-1. **Install Astra Trident**. Trident can be installed using the operator/Helm chart/`tridentctl`. The instructions provided below explain how Astra Trident can be installed using the operator. To learn how the other install methods work, see the [Install Guide](https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy.html).
+1. **Install Astra Trident**. Trident can be installed using the Trident operator (manually or using [Helm][trident-helm-chart]) or [`tridentctl`][tridentctl]. The instructions provided later in this article explain how Astra Trident can be installed using the operator. To learn more about these installation methods and how they work, see the [Install Guide][trident-install-guide].
2. **Create a backend**. To instruct Astra Trident about the Azure NetApp Files subscription and where it needs to create volumes, a backend is created. This step requires details about the account that was created in the previous step. #### Install Astra Trident using the operator
-This section walks you through the installation of Astra Trident using the operator. You can also choose to install using one of its other methods:
-
-* [Helm chart](https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy-operator.html).
-* [`tridentctl`](https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy-tridentctl.html).
-
-See to [Deploying Trident](https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy.html) to understand how each option works and identify the one that works best for you.
-
-Download Astra Trident from its [GitHub repository](https://github.com/NetApp/trident/releases). Choose from the desired version and download the installer bundle.
-
-```console
-#Download Astra Trident
-
-$ wget https://github.com/NetApp/trident/releases/download/v21.07.1/trident-installer-21.07.1.tar.gz
-$ tar xzvf trident-installer-21.07.1.tar.gz
-```
-Deploy the operator using `deploy/bundle.yaml`.
-
-```console
-$ kubectl create ns trident
-
-namespace/trident created
-
-$ kubectl apply -f trident-installer/deploy/bundle.yaml -n trident
-
-serviceaccount/trident-operator created
-clusterrole.rbac.authorization.k8s.io/trident-operator created
-clusterrolebinding.rbac.authorization.k8s.io/trident-operator created
-deployment.apps/trident-operator created
-podsecuritypolicy.policy/tridentoperatorpods created
-```
-
-Create a `TridentOrchestrator` to install Astra Trident.
-
-```console
-$ kubectl apply -f trident-installer/deploy/crds/tridentorchestrator_cr.yaml
-
-tridentorchestrator.trident.netapp.io/trident created
-```
-
-The operator installs by using the parameters provided in the `TridentOrchestrator` spec. You can learn about the configuration parameters and example backends from the extensive [installation](https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy.html) and [backend guides](https://docs.netapp.com/us-en/trident/trident-use/backends.html).
-
-Confirm Astra Trident was installed.
-
-```console
-$ kubectl describe torc trident
-Name: trident
-Namespace:
-Labels: <none>
-Annotations: <none>
-API Version: trident.netapp.io/v1
-Kind: TridentOrchestrator
-...
-Spec:
- Debug: true
- Namespace: trident
-Status:
- Current Installation Params:
- IPv6: false
- Autosupport Hostname:
- Autosupport Image: netapp/trident-autosupport:21.01
- Autosupport Proxy:
- Autosupport Serial Number:
- Debug: true
- Enable Node Prep: false
- Image Pull Secrets:
- Image Registry:
- k8sTimeout: 30
- Kubelet Dir: /var/lib/kubelet
- Log Format: text
- Silence Autosupport: false
- Trident Image: netapp/trident:21.07.1
- Message: Trident installed
- Namespace: trident
- Status: Installed
- Version: v21.07.1
-Events:
- Type Reason Age From Message
- - - - -
- Normal Installing 74s trident-operator.netapp.io Installing Trident
- Normal Installed 67s trident-operator.netapp.io Trident installed
-```
+This section walks you through the installation of Astra Trident using the operator.
+
+1. Download Astra Trident from its [GitHub repository](https://github.com/NetApp/trident/releases). Choose the desired version and download the installer bundle.
+
+ ```bash
+ wget https://github.com/NetApp/trident/releases/download/v21.07.1/trident-installer-21.07.1.tar.gz
+ tar xzvf trident-installer-21.07.1.tar.gz
+ ```
+
+2. Run the [kubectl create][kubectl-create] command to create the *trident* namespace:
+
+ ```bash
+ kubectl create ns trident
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ namespace/trident created
+ ```
+
+3. Run the [kubectl apply][kubectl-apply] command to deploy the Trident operator using the bundle file:
+
+ ```bash
+ kubectl apply -f trident-installer/deploy/bundle.yaml -n trident
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ serviceaccount/trident-operator created
+ clusterrole.rbac.authorization.k8s.io/trident-operator created
+ clusterrolebinding.rbac.authorization.k8s.io/trident-operator created
+ deployment.apps/trident-operator created
+ podsecuritypolicy.policy/tridentoperatorpods created
+ ```
+
+4. Run the following command to create a `TridentOrchestrator` and install Astra Trident:
+
+ ```bash
+ kubectl apply -f trident-installer/deploy/crds/tridentorchestrator_cr.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ tridentorchestrator.trident.netapp.io/trident created
+ ```
+
+ The operator installs by using the parameters provided in the `TridentOrchestrator` spec. You can learn about the configuration parameters and example backends from the [Trident install guide][trident-install-guide] and [backend guide][trident-backend-install-guide].
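+
+    As an illustration only, the bundled `tridentorchestrator_cr.yaml` is typically a minimal custom resource similar to the following sketch; the exact contents ship with the installer bundle, so defer to the downloaded file.
+
+    ```yaml
+    # Minimal TridentOrchestrator spec (illustrative; mirrors the Spec values shown in the describe output below).
+    apiVersion: trident.netapp.io/v1
+    kind: TridentOrchestrator
+    metadata:
+      name: trident
+    spec:
+      debug: true
+      namespace: trident
+    ```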
+
+5. To confirm Astra Trident was installed successfully, run the following [kubectl describe][kubectl-describe] command:
+
+ ```bash
+ kubectl describe torc trident
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ Name: trident
+ Namespace:
+ Labels: <none>
+ Annotations: <none>
+ API Version: trident.netapp.io/v1
+ Kind: TridentOrchestrator
+ ...
+ Spec:
+ Debug: true
+ Namespace: trident
+ Status:
+ Current Installation Params:
+ IPv6: false
+ Autosupport Hostname:
+ Autosupport Image: netapp/trident-autosupport:21.01
+ Autosupport Proxy:
+ Autosupport Serial Number:
+ Debug: true
+ Enable Node Prep: false
+ Image Pull Secrets:
+ Image Registry:
+ k8sTimeout: 30
+ Kubelet Dir: /var/lib/kubelet
+ Log Format: text
+ Silence Autosupport: false
+ Trident Image: netapp/trident:21.07.1
+ Message: Trident installed
+ Namespace: trident
+ Status: Installed
+ Version: v21.07.1
+ Events:
+ Type Reason Age From Message
+      ----    ------      ---   ----                        -------
+ Normal Installing 74s trident-operator.netapp.io Installing Trident
+ Normal Installed 67s trident-operator.netapp.io Trident installed
+ ```
### Create a backend
-After Astra Trident is installed, create a backend that points to your Azure NetApp Files subscription.
+1. Before creating a backend, you need to update `backend-anf.yaml` to include details about the Azure NetApp Files subscription, such as:
+
+ * `subscriptionID` for the Azure subscription where Azure NetApp Files will be enabled.
+    * `tenantID`, `clientID`, and `clientSecret` from an [App Registration][azure-ad-app-registration] in Azure Active Directory (AD) with sufficient permissions for the Azure NetApp Files service. The App Registration must have the `Owner` or `Contributor` role that's predefined by Azure. If you don't have an App Registration yet, you can create one with the Azure CLI, as shown in the sketch after this list.
+ * An Azure location that contains at least one delegated subnet.
-```console
-$ kubectl apply -f trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml -n trident
+ In addition, you can choose to provide a different service level. Azure NetApp Files provides three [service levels](../azure-netapp-files/azure-netapp-files-service-levels.md): Standard, Premium, and Ultra.
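+
+    The following Azure CLI sketch shows one way to create such an App Registration and capture the values needed in `backend-anf.yaml`. The display name, role, and scope are illustrative assumptions; adjust them to your environment and your organization's least-privilege requirements.
+
+    ```azurecli
+    # Create a service principal (App Registration) scoped to the subscription.
+    # The output includes appId (clientID), password (clientSecret), and tenant (tenantID).
+    az ad sp create-for-rbac \
+        --name trident-anf-sp \
+        --role Contributor \
+        --scopes /subscriptions/<subscription-id>
+    ```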
-secret/backend-tbc-anf-secret created
-tridentbackendconfig.trident.netapp.io/backend-tbc-anf created
-```
+2. After Astra Trident is installed, create a backend that points to your Azure NetApp Files subscription by running the following command.
-Before running the command, you need to update `backend-anf.yaml` to include details about the Azure NetApp Files subscription, such as:
+ ```bash
+ kubectl apply -f trident-installer/sample-input/backends-samples/azure-netapp-files/backend-anf.yaml -n trident
+ ```
-* `subscriptionID` for the Azure subscription with Azure NetApp Files enabled. The
-* `tenantID`, `clientID`, and `clientSecret` from an [App Registration](../active-directory/develop/howto-create-service-principal-portal.md) in Azure Active Directory (AD) with sufficient permissions for the Azure NetApp Files service. The App Registration must carry the `Owner` or `Contributor` role thatΓÇÖs predefined by Azure.
-* Azure location that contains at least one delegated subnet.
+ The output of the command resembles the following example:
-In addition, you can choose to provide a different service level. Azure NetApp Files provides three [service levels](../azure-netapp-files/azure-netapp-files-service-levels.md): Standard, Premium, and Ultra.
+ ```console
+ secret/backend-tbc-anf-secret created
+ tridentbackendconfig.trident.netapp.io/backend-tbc-anf created
+ ```
### Create a StorageClass
-A storage class is used to define how a unit of storage is dynamically created with a persistent volume. To consume Azure NetApp Files volumes, a storage class must be created. Create a file named `anf-storageclass.yaml` and copy in the manifest provided below.
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. To consume Azure NetApp Files volumes, a storage class must be created.
+
+1. Create a file named `anf-storageclass.yaml` and copy in the following manifest:
-```yaml
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
- name: azure-netapp-files
-provisioner: csi.trident.netapp.io
-parameters:
- backendType: "azure-netapp-files"
- fsType: "nfs"
-```
+ ```yaml
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: azure-netapp-files
+ provisioner: csi.trident.netapp.io
+ parameters:
+ backendType: "azure-netapp-files"
+ fsType: "nfs"
+ ```
-Create the storage class using [kubectl apply][kubectl-apply] command:
+2. Create the storage class using the [kubectl apply][kubectl-apply] command:
-```console
-$ kubectl apply -f anf-storageclass.yaml
+ ```bash
+ kubectl apply -f anf-storageclass.yaml
+ ```
-storageclass/azure-netapp-files created
+    The output of the command resembles the following example:
-$ kubectl get sc
-NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
-azure-netapp-files csi.trident.netapp.io Delete Immediate false 3s
-```
+ ```console
+ storageclass/azure-netapp-files created
+ ```
-### Create a PersistentVolumeClaim
+3. Run the [kubectl get][kubectl-get] command to view the status of the storage class:
-A PersistentVolumeClaim (PVC) is a request for storage by a user. Upon the creation of a PersistentVolumeClaim, Astra Trident automatically creates an Azure NetApp Files volume and makes it available for Kubernetes workloads to consume.
+    ```bash
+    kubectl get sc
+    ```
+
+    The output of the command resembles the following example:
+
+    ```console
+    NAME                 PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE    ALLOWVOLUMEEXPANSION   AGE
+    azure-netapp-files   csi.trident.netapp.io   Delete          Immediate            false                  3s
+    ```
-Create a file named `anf-pvc.yaml` and provide the following manifest. In this example, a 1-TiB volume is created that is *ReadWriteMany*.
+### Create a persistent volume claim
-```yaml
-kind: PersistentVolumeClaim
-apiVersion: v1
-metadata:
- name: anf-pvc
-spec:
- accessModes:
- - ReadWriteMany
- resources:
- requests:
- storage: 1Ti
- storageClassName: azure-netapp-files
-```
+A persistent volume claim (PVC) is a request for storage by a user. Upon the creation of a persistent volume claim, Astra Trident automatically creates an Azure NetApp Files volume and makes it available for Kubernetes workloads to consume.
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command:
+1. Create a file named `anf-pvc.yaml` and copy the following manifest. In this example, a 1-TiB volume is created with *ReadWriteMany* access.
-```console
-$ kubectl apply -f anf-pvc.yaml
+ ```yaml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ name: anf-pvc
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 1Ti
+ storageClassName: azure-netapp-files
+ ```
-persistentvolumeclaim/anf-pvc created
+2. Create the persistent volume claim with the [kubectl apply][kubectl-apply] command:
-$ kubectl get pvc
-kubectl get pvc -n trident
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-anf-pvc Bound pvc-bffa315d-3f44-4770-86eb-c922f567a075 1Ti RWO azure-netapp-files 62s
-```
+ ```bash
+ kubectl apply -f anf-pvc.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ persistentvolumeclaim/anf-pvc created
+ ```
+
+3. To view information about the persistent volume claim, run the [kubectl get][kubectl-get] command:
+
+ ```bash
+ kubectl get pvc
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ anf-pvc Bound pvc-bffa315d-3f44-4770-86eb-c922f567a075 1Ti RWO azure-netapp-files 62s
+ ```
### Use the persistent volume
-After the PVC is created, a pod can be spun up to access the Azure NetApp Files volume. The manifest below can be used to define an NGINX pod that mounts the Azure NetApp Files volume that was created in the previous step. In this example, the volume is mounted at `/mnt/data`.
-
-Create a file named `anf-nginx-pod.yaml`, which contains the following manifest:
-
-```yml
-kind: Pod
-apiVersion: v1
-metadata:
- name: nginx-pod
-spec:
- containers:
- - name: nginx
- image: mcr.microsoft.com/oss/nginx/nginx:latest1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/data"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: anf-pvc
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command:
-
-```console
-$ kubectl apply -f anf-nginx-pod.yaml
-
-pod/nginx-pod created
-```
-
-Kubernetes has now created a pod with the volume mounted and accessible within the `nginx` container at `/mnt/data`. Confirm by checking the event logs for the pod using `kubectl describe`:
-
-```console
-$ kubectl describe pod nginx-pod
-
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: anf-pvc
- ReadOnly: false
- default-token-k7952:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-k7952
- Optional: false
-[...]
-Events:
- Type Reason Age From Message
- - - - -
- Normal Scheduled 15s default-scheduler Successfully assigned trident/nginx-pod to brameshb-non-root-test
- Normal SuccessfulAttachVolume 15s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-bffa315d-3f44-4770-86eb-c922f567a075"
- Normal Pulled 12s kubelet Container image "mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine" already present on machine
- Normal Created 11s kubelet Created container nginx
- Normal Started 10s kubelet Started container nginx
-```
-
-Astra Trident supports many features with Azure NetApp Files, such as:
-
-* [Expanding volumes](https://docs.netapp.com/us-en/trident/trident-use/vol-expansion.html)
-* [On-demand volume snapshots](https://docs.netapp.com/us-en/trident/trident-use/vol-snapshots.html)
-* [Importing volumes](https://docs.netapp.com/us-en/trident/trident-use/vol-import.html)
+After the PVC is created, a pod can be spun up to access the Azure NetApp Files volume. The following manifest can be used to define an NGINX pod that mounts the Azure NetApp Files volume created in the previous step. In this example, the volume is mounted at `/mnt/data`.
+
+1. Create a file named `anf-nginx-pod.yaml` and copy the following manifest:
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: nginx-pod
+ spec:
+ containers:
+ - name: nginx
+      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/data"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: anf-pvc
+ ```
+
+2. Create the pod using the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f anf-nginx-pod.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ pod/nginx-pod created
+ ```
+
+    Kubernetes has created a pod with the volume mounted and accessible within the `nginx` container at `/mnt/data`. You can confirm by checking the event logs for the pod using the [kubectl describe][kubectl-describe] command:
+
+ ```bash
+ kubectl describe pod nginx-pod
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ [...]
+ Volumes:
+ volume:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: anf-pvc
+ ReadOnly: false
+ default-token-k7952:
+ Type: Secret (a volume populated by a Secret)
+ SecretName: default-token-k7952
+ Optional: false
+ [...]
+ Events:
+ Type Reason Age From Message
+      ----    ------                  ----  ----                     -------
+ Normal Scheduled 15s default-scheduler Successfully assigned trident/nginx-pod to brameshb-non-root-test
+ Normal SuccessfulAttachVolume 15s attachdetach-controller AttachVolume.Attach succeeded for volume "pvc-bffa315d-3f44-4770-86eb-c922f567a075"
+ Normal Pulled 12s kubelet Container image "mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine" already present on machine
+ Normal Created 11s kubelet Created container nginx
+ Normal Started 10s kubelet Started container nginx
+ ```
## Using Azure tags
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
## Next steps
-* For more information on Azure NetApp Files, see [What is Azure NetApp Files][anf].
+Astra Trident supports many features with Azure NetApp Files. For more information, see:
+
+* [Expanding volumes][expand-trident-volumes]
+* [On-demand volume snapshots][on-demand-trident-volume-snapshots]
+* [Importing volumes][importing-trident-volumes]
+<!-- EXTERNAL LINKS -->
+[astra-trident]: https://docs.netapp.com/us-en/trident/index.html
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
+[astra-control-service]: https://cloud.netapp.com/astra-control
+[kubernetes-csi-driver]: https://kubernetes-csi.github.io/docs/
+[trident-install-guide]: https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy.html
+[trident-helm-chart]: https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy-operator.html
+[tridentctl]: https://docs.netapp.com/us-en/trident/trident-get-started/kubernetes-deploy-tridentctl.html
+[trident-backend-install-guide]: https://docs.netapp.com/us-en/trident/trident-use/backends.html
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[expand-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-expansion.html
+[on-demand-trident-volume-snapshots]: https://docs.netapp.com/us-en/trident/trident-use/vol-snapshots.html
+[importing-trident-volumes]: https://docs.netapp.com/us-en/trident/trident-use/vol-import.html
+
+<!-- INTERNAL LINKS -->
[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md [aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md [aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[aks-nfs]: azure-nfs-volume.md
[anf]: ../azure-netapp-files/azure-netapp-files-introduction.md [anf-delegate-subnet]: ../azure-netapp-files/azure-netapp-files-delegate-subnet.md [anf-regions]: https://azure.microsoft.com/global-infrastructure/services/?products=netapp&regions=all
-[anf-waitlist]: https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR8cq17Xv9yVBtRCSlcD_gdVUNUpUWEpLNERIM1NOVzA5MzczQ0dQR1ZTSS4u
[az-aks-show]: /cli/azure/aks#az_aks_show [az-netappfiles-account-create]: /cli/azure/netappfiles/account#az_netappfiles_account_create [az-netapp-files-dynamic]: azure-netapp-files-dynamic.md
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[az-netappfiles-volume-show]: /cli/azure/netappfiles/volume#az_netappfiles_volume_show [az-network-vnet-subnet-create]: /cli/azure/network/vnet/subnet#az_network_vnet_subnet_create [install-azure-cli]: /cli/azure/install-azure-cli
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[kubectl-exec]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#exec
-[use-tags]: use-tags.md
+[use-tags]: use-tags.md
+[azure-ad-app-registration]: ../active-directory/develop/howto-create-service-principal-portal.md
aks Concepts Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-network.md
For more information on core Kubernetes and AKS concepts, see the following arti
<!-- LINKS - External --> [cni-networking]: https://github.com/Azure/azure-container-networking/blob/master/docs/cni.md
-[kubenet]: https://kubernetes.netlify.app/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/#kubenet
+[kubenet]: https://kubernetes.io/docs/concepts/extend-kubernetes/compute-storage-net/network-plugins/
[k8s-service]: https://kubernetes.io/docs/concepts/services-networking/service/ [service-types]: https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
This article shows you how to use Azure CNI networking for dynamic allocation of
> [!NOTE] > When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service isn't supported.
-* Review the [prerequisites](/configure-azure-cni.md#prerequisites) for configuring basic Azure CNI networking in AKS, as the same prerequisites apply to this article.
-* Review the [deployment parameters](/configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS, as the same parameters apply.
+* Review the [prerequisites](./configure-azure-cni.md#prerequisites) for configuring basic Azure CNI networking in AKS, as the same prerequisites apply to this article.
+* Review the [deployment parameters](./configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS, as the same parameters apply.
* AKS Engine and DIY clusters aren't supported. * Azure CLI version `2.37.0` or later.
All other guidance related to configuring the maximum pods per node remains the
## Deployment parameters
-The [deployment parameters](/configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS are all valid, with two exceptions:
+The [deployment parameters](./configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS are all valid, with two exceptions:
* The **subnet** parameter now refers to the subnet related to the cluster's nodes. * An additional parameter **pod subnet** is used to specify the subnet whose IP addresses will be dynamically allocated to pods.
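+
+As a hedged sketch only, these two parameters might be supplied to `az aks create` as follows; the subnet resource IDs, cluster name, and resource group are placeholders, and any other options you normally pass remain unchanged.
+
+```azurecli
+# Create an AKS cluster with separate node and pod subnets (IDs are placeholders).
+az aks create \
+    --name myAKSCluster \
+    --resource-group myResourceGroup \
+    --network-plugin azure \
+    --vnet-subnet-id <node-subnet-resource-id> \
+    --pod-subnet-id <pod-subnet-resource-id>
+```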
aks Gpu Multi Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/gpu-multi-instance.md
If you're using command line, use the `az aks nodepool add` command to create th
az aks nodepool add \ --name mignode \
- --resourcegroup myresourcegroup \
+ --resource-group myresourcegroup \
--cluster-name migcluster \ --node-vm-size Standard_ND96asr_v4 \ --gpu-instance-profile MIG1g
aks Quick Windows Container Deploy Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-powershell.md
creating a node pool to run Windows Server containers, the default value for `-V
[restricted VM sizes][restricted-vm-sizes]. The minimum recommended size is **Standard_D2s_v3**. The previous command also uses the default subnet in the default vnet created when running `New-AzAksCluster`.
+## Add a Windows Server 2019 or Windows Server 2022 node pool
+
+AKS supports Windows Server 2019 and 2022 node pools. For Kubernetes version 1.25.0 and higher, Windows Server 2022 is the default operating system. For earlier Kubernetes versions, Windows Server 2019 is the default OS. To use Windows Server 2019, you need to specify the following parameters:
+- **osType**: set the value to `Windows`.
+- **osSKU**: set the value to `Windows2019`.
+
+> [!NOTE]
+> OsSKU requires PowerShell Az module version "9.2.0" or higher.
+> Windows Server 2022 requires Kubernetes version "1.23.0" or higher.
+
+```azurepowershell-interactive
+New-AzAksNodePool -ResourceGroupName myResourceGroup -ClusterName myAKSCluster -VmSetType VirtualMachineScaleSets -OsType Windows -OsSKU Windows2019 -Name npwin
+```
+ ## Connect to the cluster To manage a Kubernetes cluster, you use [kubectl][kubectl], the Kubernetes command-line client. If
api-management Export Api Power Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/export-api-power-platform.md
This article walks through the steps in the Azure portal to create a custom Powe
:::image type="content" source="media/export-api-power-platform/create-custom-connector.png" alt-text="Create custom connector to API in API Management":::
-Once the connector is created, navigate to your [Power Apps](https://make.powerapps.com) or [Power Automate](https://flow.microsoft.com) environment. You will see the API listed under **Data > Custom Connectors**.
+Once the connector is created, navigate to your [Power Apps](https://make.powerapps.com) or [Power Automate](https://make.powerautomate.com) environment. You will see the API listed under **Data > Custom Connectors**.
:::image type="content" source="media/export-api-power-platform/custom-connector-power-app.png" alt-text="Custom connector in Power Platform":::
You can manage your custom connector in your Power Apps or Power Platform enviro
1. Select the pencil (Edit) icon to edit and test the custom connector. > [!NOTE]
-> To call the API from the Power Apps test console, you need to add the `https://flow.microsoft.com` URL as an origin to the [CORS policy](cors-policy.md) in your API Management instance.
+> To call the API from the Power Apps test console, you need to add the `https://make.powerautomate.com` URL as an origin to the [CORS policy](cors-policy.md) in your API Management instance.
## Update a custom connector
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
Title: Configure custom domain suffix for App Service Environment
description: Configure a custom domain suffix for the Azure App Service Environment. Previously updated : 09/01/2022 Last updated : 02/09/2023 zone_pivot_groups: app-service-environment-portal-arm
The certificate for custom domain suffix must be stored in an Azure Key Vault. A
:::image type="content" source="./media/custom-domain-suffix/key-vault-networking.png" alt-text="Screenshot of a sample networking page for key vault to allow custom domain suffix feature.":::
-Your certificate must be a wildcard certificate for the selected custom domain name. For example, *internal-contoso.com* would need a certificate covering **.internal-contoso.com*. If the certificate used custom domain suffix contains a Subject Alternate Name (SAN) entry for scm, for example **.scm.internal-contoso.com*, the scm site will also available using the custom domain suffix.
+Your certificate must be a wildcard certificate for the selected custom domain name. For example, *internal-contoso.com* would need a certificate covering **.internal-contoso.com*. If the certificate used by the custom domain suffix contains a Subject Alternate Name (SAN) entry for scm, for example **.scm.internal-contoso.com*, the scm site will also be available using the custom domain suffix.
+
+If you rotate your certificate in Azure Key Vault, the App Service Environment will pick up the change within 24 hours.
::: zone pivot="experience-azp"
app-service Manage Create Arc Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-create-arc-environment.md
While a [Log Analytic workspace](../azure-monitor/logs/quick-create-workspace.md
--configuration-settings "customConfigMap=${namespace}/kube-environment-config" \ --configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${aksClusterGroupName}" \ --configuration-settings "logProcessor.appLogs.destination=log-analytics" \
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${logAnalyticsWorkspaceIdEnc}" \
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${logAnalyticsKeyEnc}"
+ --config-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${logAnalyticsWorkspaceIdEnc}" \
+ --config-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${logAnalyticsKeyEnc}"
``` # [PowerShell](#tab/powershell)
While a [Log Analytic workspace](../azure-monitor/logs/quick-create-workspace.md
--configuration-settings "customConfigMap=${namespace}/kube-environment-config" ` --configuration-settings "envoy.annotations.service.beta.kubernetes.io/azure-load-balancer-resource-group=${aksClusterGroupName}" ` --configuration-settings "logProcessor.appLogs.destination=log-analytics" `
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${logAnalyticsWorkspaceIdEnc}" `
- --configuration-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${logAnalyticsKeyEnc}"
+ --config-protected-settings "logProcessor.appLogs.logAnalyticsConfig.customerId=${logAnalyticsWorkspaceIdEnc}" `
+ --config-protected-settings "logProcessor.appLogs.logAnalyticsConfig.sharedKey=${logAnalyticsKeyEnc}"
```
app-service Private Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/networking/private-endpoint.md
description: Connect privately to an App Service apps using Azure private endpoi
ms.assetid: 2dceac28-1ba6-4904-a15d-9e91d5ee162c Previously updated : 01/30/2023 Last updated : 02/09/2023
A private endpoint is a special network interface (NIC) for your App Service app
When you create a private endpoint for your app, it provides secure connectivity between clients on your private network and your app. The private endpoint is assigned an IP Address from the IP address range of your virtual network. The connection between the private endpoint and the app uses a secure [Private Link](../../private-link/private-link-overview.md). Private endpoint is only used for incoming traffic to your app. Outgoing traffic won't use this private endpoint. You can inject outgoing traffic to your network in a different subnet through the [virtual network integration feature](../overview-vnet-integration.md).
-Each slot of an app is configured separately. You can plug up to 100 private endpoints per slot. You can't share a private endpoint between slots.
+Each slot of an app is configured separately. You can plug up to 100 private endpoints per slot. You can't share a private endpoint between slots. The sub-resource name of a slot will be `sites-<slot-name>`.
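+
+As a hedged illustration, the following Azure CLI sketch creates a private endpoint that targets a *staging* slot by using the `sites-staging` sub-resource (group ID). All resource names and the slot name are placeholders for this example.
+
+```azurecli
+# Create a private endpoint for the "staging" slot of an app (names are illustrative).
+az network private-endpoint create \
+    --name my-slot-pe \
+    --resource-group myResourceGroup \
+    --vnet-name myVnet \
+    --subnet mySubnet \
+    --private-connection-resource-id $(az webapp show --name myApp --resource-group myResourceGroup --query id --output tsv) \
+    --group-id sites-staging \
+    --connection-name my-slot-pe-connection
+```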
The subnet where you plug the private endpoint can have other resources in it, you don't need a dedicated empty subnet. You can also deploy the private endpoint in a different region than your app.
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
Title: Install and run Docker containers for Form Recognizer v2.1
+ Title: Install and run Docker containers for Form Recognizer
description: Use the Docker containers for Form Recognizer on-premises to identify and extract key-value pairs, selection marks, tables, and structure from forms and documents.
Previously updated : 01/23/2023 Last updated : 02/08/2023
-monikerRange: 'form-recog-2.1.0'
recommendations: false
-# Install and run Form Recognizer v2.1 containers
+# Install and run Form Recognizer containers
-**This article applies to:** ![Form Recognizer v2.1 checkmark](../media/yes-icon.png) **Form Recognizer v2.1**.
+ Azure Form Recognizer is an Azure Applied AI Service that lets you build automated data processing software using machine-learning technology. Form Recognizer enables you to identify and extract text, key/value pairs, selection marks, table data, and more from your form documents. The results are delivered as structured data that includes the relationships in the original file.
-In this article you'll learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements. Form Recognizer features are supported by six Form Recognizer feature containersΓÇö**Layout**, **Business Card**,**ID Document**, **Receipt**, **Invoice**, and **Custom** (for Receipt, Business Card and ID Document containers you'll also need the **Read** OCR container).
+In this article you learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements.
+
+* **Read** and **Layout** models are supported by Form Recognizer v3.0 containers.
+
+* **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are currently only supported in the [v2.1 containers](form-recognizer-container-install-run.md?view=form-recog-2.1.0&preserve-view=true).
++
+In this article you learn how to download, install, and run Form Recognizer containers. Containers enable you to run the Form Recognizer service in your own environment. Containers are great for specific security and data governance requirements.
+
+* **Layout**, **Business Card**, **ID Document**, **Receipt**, **Invoice**, and **Custom** models are supported by six Form Recognizer feature containers.
+
+* For Receipt, Business Card and ID Document containers you also need the **Read** OCR container.
+ > [!IMPORTANT] >
-> * To use Form Recognizer containers, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container) below.
+> * To use Form Recognizer containers, you must submit an online request, and have it approved. For more information, _see_ [Request approval to run container](#request-approval-to-run-container).
## Prerequisites
-To get started, you'll need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
+To get started, you need an active [**Azure account**](https://azure.microsoft.com/free/cognitive-services/). If you don't have one, you can [**create a free account**](https://azure.microsoft.com/free/).
-You'll also need the following to use Form Recognizer containers:
+You also need the following to use Form Recognizer containers:
| Required | Purpose | |-|| | **Familiarity with Docker** | You should have a basic understanding of Docker concepts, like registries, repositories, containers, and container images, as well as knowledge of basic `docker` [terminology and commands](/dotnet/architecture/microservices/container-docker-introduction/docker-terminology). | | **Docker Engine installed** | <ul><li>You need the Docker Engine installed on a [host computer](#host-computer-requirements). Docker provides packages that configure the Docker environment on [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms). For a primer on Docker and container basics, see the [Docker overview](https://docs.docker.com/engine/docker-overview/).</li><li> Docker must be configured to allow the containers to connect with and send billing data to Azure. </li><li> On **Windows**, Docker must also be configured to support **Linux** containers.</li></ul> | |**Form Recognizer resource** | A [**single-service Azure Form Recognizer**](https://portal.azure.com/#create/Microsoft.CognitiveServicesFormRecognizer) or [**multi-service Cognitive Services**](https://portal.azure.com/#create/Microsoft.CognitiveServicesAllInOne) resource in the Azure portal. To use the containers, you must have the associated key and endpoint URI. Both values are available on the Azure portal Form Recognizer **Keys and Endpoint** page: <ul><li>**{FORM_RECOGNIZER_KEY}**: one of the two available resource keys.<li>**{FORM_RECOGNIZER_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></li></ul>|
-| **Computer Vision API resource** | **To process business cards, ID documents, or Receipts, you'll need a Computer Vision resource.** <ul><li>You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image). The usual [billing](#billing) fees apply.</li> <li>If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`). If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of *5000*. </li></ul></br>Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:<ul><li>**{COMPUTER_VISION_KEY}**: one of the two available resource keys.</li><li> **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.</li></ul> |
|Optional|Purpose| ||-| |**Azure CLI (command-line interface)** | The [Azure CLI](/cli/azure/install-azure-cli) enables you to use a set of online commands to create and manage Azure resources. It's available to install in Windows, macOS, and Linux environments and can be run in a Docker container and Azure Cloud Shell. |
-|||
+
+You also need a **Computer Vision API resource to process business cards, ID documents, or Receipts**.
+
+* You can access the Recognize Text feature as either an Azure resource (the REST API or SDK) or a **cognitive-services-recognize-text** [container](../../../cognitive-services/Computer-vision/computer-vision-how-to-install-containers.md#get-the-container-image).
+
+* The usual [billing](#billing) fees apply.
+
+* If you use the **cognitive-services-recognize-text** container, make sure that your Computer Vision key for the Form Recognizer container is the key specified in the Computer Vision `docker run` or `docker compose` command for the **cognitive-services-recognize-text** container and your billing endpoint is the container's endpoint (for example, `http://localhost:5000`).
+
+* If you use both the Computer Vision container and Form Recognizer container together on the same host, they can't both be started with the default port of **5000**.
+
+* Pass in both the key and endpoints for your Computer Vision Azure cloud or Cognitive Services container:
+
+ * **{COMPUTER_VISION_KEY}**: one of the two available resource keys.
+ * **{COMPUTER_VISION_ENDPOINT_URI}**: the endpoint for the resource used to track billing information.
## Request approval to run container
The host is a x64-based computer that runs the Docker container. It can be a com
#### Required supporting containers The following table lists the supporting container(s) for each Form Recognizer container you download. For more information, see the [Billing](#billing) section. | Feature container | Supporting container(s) | ||--|
The following table lists the supporting container(s) for each Form Recognizer c
| **Invoice** | **Layout** | | **Receipt** |**Computer Vision Read** | | **Custom** | **Custom API**, **Custom Supervised**, **Layout**|+
+| Feature container | Supporting container(s) |
+||--|
+| **Read** | None |
+| **Layout** | None|
#### Recommended CPU cores and memory
The following table lists the supporting container(s) for each Form Recognizer c
> > The minimum and recommended values are based on Docker limits and *not* the host machine resources.
-##### Read, Layout, and Prebuilt containers
+
+##### Read and Layout containers
+
+| Container | Minimum | Recommended |
+|--||-|
+| `Read` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
+| `Layout` | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
++
+##### Read, Layout, and prebuilt containers
| Container | Minimum | Recommended | |--||-|
-| Read 3.2 | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
-| Layout 2.1 | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
-| Business Card 2.1 | `2` cores, 4-GB memory | `4` cores, 4-GB memory |
-| ID Document 2.1 | `1` core, 2-GB memory |`2` cores, 2-GB memory |
-| Invoice 2.1 | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
-| Receipt 2.1 | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
+| `Read 3.2` | `8` cores, 16-GB memory | `8` cores, 24-GB memory|
+| `Layout 2.1` | `8` cores, 16-GB memory | `8` cores, 24-GB memory |
+| `Business Card 2.1` | `2` cores, 4-GB memory | `4` cores, 4-GB memory |
+| `ID Document 2.1` | `1` core, 2-GB memory |`2` cores, 2-GB memory |
+| `Invoice 2.1` | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
+| `Receipt 2.1` | `4` cores, 8-GB memory | `8` cores, 8-GB memory |
##### Custom containers
The following host machine requirements are applicable to **train and analyze**
|--||-| | Custom API| 0.5 cores, 0.5-GB memory| `1` core, 1-GB memory | |Custom Supervised | `4` cores, 2-GB memory | `8` cores, 4-GB memory| * Each core must be at least 2.6 gigahertz (GHz) or faster. * Core and memory correspond to the `--cpus` and `--memory` settings, which are used as part of the `docker compose` or `docker run` command.
The following host machine requirements are applicable to **train and analyze**
* Ensure that the EULA value is set to "accept".
-* The `EULA`, `Billing`, and `Key` values must be specified; otherwise the container won't start.
+* The `EULA`, `Billing`, and `ApiKey` values must be specified; otherwise the container can't start.
> [!IMPORTANT] > The keys are used to access your Form Recognizer resource. Do not share your keys. Store them securely, for example, using Azure Key Vault. We also recommend regenerating these keys regularly. Only one key is necessary to make an API call. When regenerating the first key, you can use the second key for continued access to the service. +
+### Read
+
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer Read container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Read container instance.
+
+```yml
+version: "3.9"
+services:
+  azure-form-recognizer-read:
+ container_name: azure-form-recognizer-read
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/read-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ ports:
+ - "5000:5000"
+ networks:
+ - ocrvnet
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
+
+### Layout
+
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance.
+
+```yml
+version: "3.9"
+services:
+  azure-form-recognizer-layout:
+ container_name: azure-form-recognizer-layout
+ image: mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0
+ environment:
+ - EULA=accept
+ - billing={FORM_RECOGNIZER_ENDPOINT_URI}
+ - apiKey={FORM_RECOGNIZER_KEY}
+ ports:
+ - "5000:5000"
+ networks:
+ - ocrvnet
+networks:
+ ocrvnet:
+ driver: bridge
+```
+
+Now, you can start the service with the [**docker compose**](https://docs.docker.com/compose/) command:
+
+```bash
+docker-compose up
+```
++++ ### [Layout](#tab/layout) The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {{FORM_RECOGNIZER_KEY} values for your Layout container instance.
docker-compose up
### [Custom](#tab/custom)
-In addition to the [prerequisites](#prerequisites), you'll need to do the following to process a custom document:
+In addition to the [prerequisites](#prerequisites), you need to do the following to process a custom document:
-#### &bullet; Create a folder to store the following files:
+#### &bullet; Create a folder to store the following files
1. [**.env**](#-create-an-environment-file)
1. [**nginx.conf**](#-create-an-nginx-file)
In addition to the [prerequisites](#prerequisites), you'll need to do the follow
#### &bullet; Create a folder to store your input data 1. Name this folder **shared**.
- 1. We'll reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
- 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file.
+ 1. We reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
+ 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You need to add it to your **.env** file.
#### &bullet; Create a folder to store the logs written by the Form Recognizer service on your local machine. 1. Name this folder **output**.
- 1. We'll reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
- 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file.
+ 1. We reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
+ 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You need to add it to your **.env** file.
#### &bullet; Create an environment file
http {
} ```
-* Gather a set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*). Download the training files to the **shared** folder you created.
+* Gather a set of at least six forms of the same type. You use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*). Download the training files to the **shared** folder you created.
-* If you want to label your data, download the [Form Recognizer Sample Labeling tool for Windows](https://github.com/microsoft/OCR-Form-Tools/releases). The download will import the labeling tool .exe file that you'll use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
+* If you want to label your data, download the [Form Recognizer Sample Labeling tool for Windows](https://github.com/microsoft/OCR-Form-Tools/releases). The download imports the labeling tool .exe file that you use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
#### Create a new Sample Labeling tool project
http {
volumes: - ${NGINX_CONF_FILE}:/etc/nginx/nginx.conf ports:
- - "5000:5000"
+ - "5000"
rabbitmq: container_name: ${RABBITMQ_HOSTNAME} image: rabbitmq:3
$docker-compose up
To learn how to use the Sample Labeling tool with an Azure Container Instance, *see*, [Deploy the Sample Labeling tool](../deploy-label-tool.md#deploy-with-azure-container-instances-aci). + ## Validate that the service is running There are several ways to validate that the container is running:
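One quick check, assuming the container publishes port 5000 on the host as in the `docker compose` examples above, is to call the standard Cognitive Services container health endpoints. The host name and port below are assumptions based on that mapping; adjust them if you mapped the ports differently.

```bash
# Readiness check: returns success once the container is up and able to accept requests.
curl -s http://localhost:5000/ready

# Status check: verifies the container can validate its billing endpoint and key without running a query.
curl -s http://localhost:5000/status
```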
docker-compose down
The Form Recognizer containers send billing information to Azure by using a Form Recognizer resource on your Azure account.
-Queries to the container are billed at the pricing tier of the Azure resource that's used for the `Key`. You'll be billed for each container instance used to process your documents and images. Thus, If you use the business card feature, you'll be billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you'll be billed for the Form Recognizer `Invoice` and `Layout` container instances. *See*, [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `Key`. You're billed for each container instance used to process your documents and images.
+
+> [!NOTE]
+> Currently, Form Recognizer v3 containers only support pay-as-you-go pricing. Support for commitment tiers and disconnected mode will be added in March 2023.
+Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to always communicate billing information with the billing endpoint. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
+
+Queries to the container are billed at the pricing tier of the Azure resource that's used for the `Key`. You're billed for each container instance used to process your documents and images. Thus, if you use the business card feature, you're billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you're billed for the Form Recognizer `Invoice` and `Layout` container instances. *See*, [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to always communicate billing information with the billing endpoint. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft. ### Connect to Azure
The container needs the billing argument values to run. These values allow the c
### Billing arguments
-The [**docker-compose up**](https://docs.docker.com/engine/reference/commandline/compose_up/) command will start the container when all three of the following options are provided with valid values:
+The [**docker-compose up**](https://docs.docker.com/engine/reference/commandline/compose_up/) command starts the container when all three of the following options are provided with valid values:
| Option | Description | |--|-|
-| `Key` | The key of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to a key for the provisioned resource that's specified in `Billing`. |
+| `ApiKey` | The key of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to a key for the provisioned resource that's specified in `Billing`. |
| `Billing` | The endpoint of the Cognitive Services resource that's used to track billing information.<br/>The value of this option must be set to the endpoint URI of a provisioned Azure resource.|
| `Eula` | Indicates that you accepted the license for the container.<br/>The value of this option must be set to **accept**. |
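For illustration, a minimal `docker run` sketch for the Layout container is shown below. It passes the three required options as container arguments; the image tag comes from the compose examples earlier in this article, and the `{FORM_RECOGNIZER_ENDPOINT_URI}` and `{FORM_RECOGNIZER_KEY}` placeholders stand in for your own resource values.

```bash
# Sketch: start the Layout container with the required billing options.
docker run --rm -it -p 5000:5000 --cpus 8 --memory 8g \
  mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout-3.0 \
  Eula=accept \
  Billing={FORM_RECOGNIZER_ENDPOINT_URI} \
  ApiKey={FORM_RECOGNIZER_KEY}
```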
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/whats-new.md
Previously updated : 01/30/2023 Last updated : 02/08/2023 monikerRange: '>=form-recog-2.1.0' recommendations: false
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
>[!NOTE] > With the release of the 2022-08-31 GA API, the associated preview APIs are being deprecated. If you are using the 2021-09-30-preview or the 2022-01-30-preview API versions, please update your applications to target the 2022-08-31 API version. There are a few minor changes involved, for more information, _see_ the [migration guide](v3-migration-guide.md).
+## February 2023
+
+* Form Recognizer v3.0 container support
+
+ * The v3.0 [**Read**](concept-read.md) and [**Layout**](concept-layout.md) containers are now available for use!
+
+ * For more information on containers, _see_ [Install and run containers](containers/form-recognizer-container-install-run.md)
+++ ## January 2023 > [!TIP] > All January 2023 updates are available with [REST API version **2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument).
-* **[Prebuilt receipt model](concept-receipt.md#supported-languages-and-locales-v30) —additional language support**:
+* **[Prebuilt receipt model](concept-receipt.md#supported-languages-and-locales-v30)—additional language support**:
The **prebuilt receipt model** now has added support for the following languages:
- * English - United Arab Emirates (en-ae)
- * Dutch - Netherlands (nl-nl)
- * French - Canada (fr-ca)
- * Japanese - Japan (ja-jp)
- * Portuguese - Brazil (pt-br)
+ * English - United Arab Emirates (en-AE)
+ * Dutch - Netherlands (nl-NL)
+ * French - Canada (fr-CA)
+ * Japanese - Japan (ja-JP)
+ * Portuguese - Brazil (pt-BR)
* **[Prebuilt invoice model](concept-invoice.md)—additional language support and field extractions** The **prebuilt invoice model** now has added support for the following languages:
- * English - Australia (en-au), Canada (en-ca), Great Britain (en-gb), India (en-in)
- * Portuguese - Brazil (pt-br)
+ * English - Australia (en-AU), Canada (en-CA), Great Britain (en-GB), India (en-IN)
+ * Portuguese - Brazil (pt-BR)
The **prebuilt invoice model** now has added support for the following field extractions:
 * Currency code
 * Payment options
 * Total discount
- * Tax items (en-in only)
+ * Tax items (en-IN only)
* **[Prebuilt ID document model](concept-id-document.md#document-types)—additional document types support**
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* [**prebuilt-read**](concept-read.md). Read OCR model is now also available in Form Recognizer with paragraphs and language detection as the two new features. Form Recognizer Read targets advanced document scenarios aligned with the broader document intelligence capabilities in Form Recognizer. * [**prebuilt-layout**](concept-layout.md). The Layout model extracts paragraphs and whether the extracted text is a paragraph, title, section heading, footnote, page header, page footer, or page number.
- * [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields will now resolve to the existing fields TotalTax and Line/Tax respectively.
+ * [**prebuilt-invoice**](concept-invoice.md). The TotalVAT and Line/VAT fields now resolve to the existing fields TotalTax and Line/Tax respectively.
* [**prebuilt-idDocument**](concept-id-document.md). Data extraction support for US state ID, social security, and green cards. Support for passport visa information. * [**prebuilt-receipt**](concept-receipt.md). Expanded locale support for French (fr-FR), Spanish (es-ES), Portuguese (pt-PT), Italian (it-IT) and German (de-DE). * [**prebuilt-businessCard**](concept-business-card.md). Address parsing support to extract subfields for address components like address, city, state, country, and zip code.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
[Learn more about the invoice model](./concept-invoice.md)
-* **Supervised table labeling and training, empty-value labeling** - In addition to Form Recognizer's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained, the model will extract line items as part of the JSON output in the documentResults section.
+* **Supervised table labeling and training, empty-value labeling** - In addition to Form Recognizer's [state-of-the-art deep learning automatic table extraction capabilities](https://techcommunity.microsoft.com/t5/azure-ai/enhanced-table-extraction-from-documents-with-form-recognizer/ba-p/2058011), it now enables customers to label and train on tables. This new release includes the ability to label and train on line items/tables (dynamic and fixed) and train a custom model to extract key-value pairs and line items. Once a model is trained, the model extracts line items as part of the JSON output in the documentResults section.
:::image type="content" source="./media/table-labeling.png" alt-text="Screenshot of the table labeling feature." lightbox="./media/table-labeling.png":::
- In addition to labeling tables, you can now label empty values and regions. If some documents in your training set don't have values for certain fields, you can label them so that your model will know to extract values properly from analyzed documents.
+ In addition to labeling tables, you can now label empty values and regions. If some documents in your training set don't have values for certain fields, you can label them so that your model knows to extract values properly from analyzed documents.
* **Support for 66 new languages** - The Layout API and Custom Models for Form Recognizer now support 73 languages. [Learn more about Form Recognizer's language support](language-support.md)
-* **Natural reading order, handwriting classification, and page selection** - With this update, you can choose to get the text line outputs in the natural reading order instead of the default left-to-right and top-to-bottom ordering. Use the new readingOrder query parameter and set it to "natural" value for a more human-friendly reading order output. In addition, for Latin languages, Form Recognizer will classify text lines as handwritten style or not and give a confidence score.
+* **Natural reading order, handwriting classification, and page selection** - With this update, you can choose to get the text line outputs in the natural reading order instead of the default left-to-right and top-to-bottom ordering. Use the new readingOrder query parameter and set it to the "natural" value for a more human-friendly reading order output. In addition, for Latin languages, Form Recognizer classifies text lines as handwritten style or not and gives a confidence score.
* **Prebuilt receipt model quality improvements** This update includes many quality improvements for the prebuilt Receipt model, especially around line item extraction.
Form Recognizer service is updated on an ongoing basis. Bookmark this page to st
* **New language supported: Japanese** - The following new languages are now supported: for `AnalyzeLayout` and `AnalyzeCustomForm`: Japanese (`ja`). [Language support](language-support.md) * **Text line style indication (handwritten/other) (Latin languages only)** - Form Recognizer now outputs an `appearance` object classifying whether each text line is handwritten style or not, along with a confidence score. This feature is supported only for Latin languages. * **Quality improvements** - Extraction improvements including single digit extraction improvements.
- * **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data will be extracted without writing any code.
+ * **New try-it-out feature in the Form Recognizer Sample and Labeling Tool** - Ability to try out prebuilt Invoice, Receipt, and Business Card models and the Layout API using the Form Recognizer Sample Labeling tool. See how your data is extracted without writing any code.
* [**Try the Form Recognizer Sample Labeling tool**](https://fott-2-1.azurewebsites.net)
automation Extension Based Hybrid Runbook Worker Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md
If you use a proxy server for communication between Azure Automation and machine
```azurepowershell-interactive $settings = @{
- "AutomationAccountURL" = "<registrationurl>/<subscription-id>";
+ "AutomationAccountURL" = "<registrationurl>";
"ProxySettings" = @{ "ProxyServer" = "<ipaddress>:<port>"; "UserName"="test";
$protectedsettings = @{
"Proxy_URL"="http://username:password@<IP Address>" }; $settings = @{
- "AutomationAccountURL" = "<registration-url>/<subscription-id>";
+ "AutomationAccountURL" = "<registration-url>";
}; ``` **Azure VMs** ```powershell
-Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 0.1 -Settings $settings
+Set-AzVMExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -VMName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 1.1 -Settings $settings -EnableAutomaticUpgrade $true
``` **Azure Arc-enabled VMs** ```powershell
-New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 0.1 -Setting $settings -NoWait
+New-AzConnectedMachineExtension -ResourceGroupName <VMResourceGroupName> -Location <VMLocation> -MachineName <VMName> -Name "HybridWorkerExtension" -Publisher "Microsoft.Azure.Automation.HybridWorker" -ExtensionType HybridWorkerForLinux -TypeHandlerVersion 1.1 -Setting $settings -EnableAutomaticUpgrade $true
```
automation Extension Based Hybrid Runbook Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/extension-based-hybrid-runbook-worker.md
Title: Troubleshoot extension-based Hybrid Runbook Worker issues in Azure Automation description: This article tells how to troubleshoot and resolve issues that arise with Azure Automation extension-based Hybrid Runbook Workers. Previously updated : 10/31/2022 Last updated : 02/09/2023
To help troubleshoot issues with extension-based Hybrid Runbook Workers:
- Run the troubleshooter tool on the VM and it will generate an output file. Open the output file and verify the errors identified by the troubleshooter tool. - For windows: you can find the troubleshooter at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\TroubleShootWindowsExtension.ps1`
- - For Linux: you can find the troubleshooter at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/troubleshootLinuxExtension.py`
+ - For Linux: you can find the troubleshooter at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux-<version>/Troubleshooter/LinuxTroubleshooter.py`
- For Linux machines, the Hybrid worker extension creates a `hweautomation` user and starts the Hybrid worker under the user. Check whether the user `hweautomation` is set up with the correct permissions. If your runbook is trying to access any local resources, ensure that the `hweautomation` has the correct permissions to the local resources.
To help troubleshoot issues with extension-based Hybrid Runbook Workers:
- For Windows: check the `Hybrid Worker Service` service. - For Linux: check the `hwd.` service. -- Run the log collector tool and review the collected logs.
- - For Windows: the logs are located at `C:\HybridWorkerExtensionLogs`. The tool is at `C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\PullLogs.ps1`.
- - For Linux: the logs are located at `/home/nxautomation/run`. The tool is at `/var/lib/waagent/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux/logcollector.py`.
+- Collect logs:
+ - For Windows: Run the log collector tool in <br>`C:\Packages\Plugins\Microsoft.Azure.Automation.HybridWorker.HybridWorkerForWindows\<version>\bin\troubleshooter\PullLogs.ps1` <br>
+ Logs are in `C:\HybridWorkerExtensionLogs`.
+ - For Linux: Logs are in folders </br>`/var/log/azure/Microsoft.Azure.Automation.HybridWorker.HybridWorkerForLinux` and `/home/hweautomation`.
### Scenario: Hybrid Worker deployment fails with Private Link error
azure-arc Upgrade Sql Managed Instance Auto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/upgrade-sql-managed-instance-auto.md
# Enable automatic upgrades of an Azure SQL Managed Instance for Azure Arc You can set the `--desired-version` parameter of the `spec.update.desiredVersion` property of an Azure Arc-enabled SQL Managed Instance to `auto` to ensure that your managed instance will be upgraded after a data controller upgrade, with no interaction from a user. This setting simplifies management, as you don't need to manually upgrade every instance for every release.
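For example, one way to set this property is to patch the SQL managed instance custom resource directly with `kubectl`. This is a hedged sketch, not taken from this article; the instance name `sql1` and namespace `arc` are placeholders for your own values.

```bash
# Sketch: set spec.update.desiredVersion to "auto" so the instance upgrades after a data controller upgrade.
kubectl patch sqlmi sql1 -n arc --type merge -p '{"spec": {"update": {"desiredVersion": "auto"}}}'
```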
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
In addition, resource bridge (preview) requires connectivity to the [Arc-enabled
## SSL proxy configuration
-Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. This configuration is handled automatically. However, proxy configuration of the management machine isn't configured by the Azure Arc resource bridge.
+If you're using a proxy, Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. To configure the Arc resource bridge with a proxy, provide the proxy certificate file path during creation of the configuration files. Pass only a single proxy certificate; if a certificate bundle is passed, the deployment will fail. Proxy configuration of the management machine isn't configured by the Azure Arc resource bridge.
There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted. ++ ## Exclusion list for no proxy The following table contains the list of addresses that must be excluded by using the `-noProxy` parameter in the `createconfig` command.
azure-functions Durable Functions Configure Durable Functions With Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-configure-durable-functions-with-credentials.md
+
+ Title: "Configure Durable Functions with Azure Active Directory"
+description: Configure Durable Functions with Managed Identity Credentials and Client Secret Credentials.
++ Last updated : 02/01/2023+++
+# Configure Durable Functions with Azure Active Directory
+
+[Azure Active Directory](../../active-directory/fundamentals/active-directory-whatis.md) (Azure AD) is a cloud-based identity and access management service. Identity-based connections allow Durable Functions to make authorized requests against Azure AD protected resources, like an Azure Storage account, without the need to manage secrets manually. Using the default Azure storage provider, Durable Functions needs to authenticate against an Azure storage account. In this article, we show how to configure a Durable Functions app to utilize two kinds of Identity-based connections: **managed identity credentials** and **client secret credentials**.
++
+## Configure your app to use managed identity (recommended)
+
+A [managed identity](../../app-service/overview-managed-identity.md) allows your app to easily access other Azure AD-protected resources such as Azure Key Vault. Managed identity is supported in [Durable Functions extension](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.DurableTask) versions **2.7.0** and greater.
+> [!NOTE]
+> Strictly speaking, a managed identity is only available to apps when executing on Azure. When configured to use identity-based connections, a locally executing app will utilize your **developer credentials** to authenticate with Azure resources. Then, when deployed on Azure, it will utilize your managed identity configuration instead.
+
+### Prerequisites
+
+The following steps assume that you're starting with an existing Durable Functions app and are familiar with how to operate it.
+In particular, this quickstart assumes that you have already:
+
+* Created a Durable Functions project in the Azure portal or deployed a local Durable Functions project to Azure.
+
+If this isn't the case, we suggest you start with one of the following articles, each of which provides detailed instructions on how to meet all the requirements above:
+
+- [Create your first durable function - C#](durable-functions-create-first-csharp.md)
+- [Create your first durable function - JavaScript](quickstart-js-vscode.md)
+- [Create your first durable function - Python](quickstart-python-vscode.md)
+- [Create your first durable function - PowerShell](quickstart-powershell-vscode.md)
+- [Create your first durable function - Java](quickstart-java.md)
+
+### Enable managed identity
+
+Only one identity is needed for your function: either a **system-assigned managed identity** or a **user-assigned managed identity**. To enable a managed identity for your function and to learn more about the differences between the two identities, read the detailed instructions [here](../../app-service/overview-managed-identity.md).
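+
+For example, a system-assigned identity can be enabled from the Azure CLI. This is a minimal sketch with placeholder names for the function app and resource group:
+
+```bash
+# Sketch: enable a system-assigned managed identity on the function app.
+az functionapp identity assign --name <function-app> --resource-group <resource-group>
+```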
+
+### Assign Role-based Access Controls (RBAC) to managed identity
+
+Navigate to your app's storage resource on the Azure portal. Follow [these instructions](../../active-directory/managed-identities-azure-resources/howto-assign-access-portal.md) to assign the following roles to your managed identity resource.
+
+* Storage Queue Data Contributor
+* Storage Blob Data Contributor
+* Storage Table Data Contributor
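+
+As an alternative to the portal steps, the same role assignments can be scripted with the Azure CLI. The sketch below assumes placeholder values for the identity's principal ID and the storage account's resource ID; replace them with your own.
+
+```bash
+# Sketch: grant the managed identity the three storage data roles on the storage account.
+scope="/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
+for role in "Storage Queue Data Contributor" "Storage Blob Data Contributor" "Storage Table Data Contributor"; do
+  az role assignment create --assignee "<principal-id>" --role "$role" --scope "$scope"
+done
+```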
+
+### Add managed identity configuration in the Azure portal
+
+Navigate to your Azure function app's **Configuration** page and make the following changes:
+
+1. Remove the default value "AzureWebJobsStorage".
+
+ [ ![Screenshot of default storage setting.](./media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-01.png)](./media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-01.png#lightbox)
+
+2. Link your Azure storage account by adding **either one** of the following value settings:
+
+ * **AzureWebJobsStorage__accountName**: For example: `mystorageaccount123`
+
+ * **AzureWebJobsStorage__blobServiceUri**: Example: `https://mystorageaccount123.blob.core.windows.net/`
+
+ **AzureWebJobsStorage__queueServiceUri**: Example: `https://mystorageaccount123.queue.core.windows.net/`
+
+ **AzureWebJobsStorage__tableServiceUri**: Example: `https://mystorageaccount123.table.core.windows.net/`
+
+ > [!NOTE]
+ > If you are using [Azure Government](../../azure-government/documentation-government-welcome.md) or any other cloud that's separate from global Azure, then you will need to use this second option to provide specific service URLs. The values for these settings can be found in the storage account under the **Endpoints** tab. For more information on using Azure Storage with Azure Government, see the [Develop with Storage API on Azure Government](../../azure-government/documentation-government-get-started-connect-to-storage.md) documentation.
+
+ ![Screenshot of endpoint sample.](media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-02.png)
+
+3. Finalize your managed identity configuration:
+
+ * If **system-assigned identity** should be used, then specify nothing else.
+
+ * If **user-assigned identity** should be used, then add the following app settings values in your app configuration:
+ * **AzureWebJobsStorage__credential**: managedidentity
+
+ * **AzureWebJobsStorage__clientId**: (This is a GUID value that you obtain from the Azure AD portal)
+
+ ![Screenshot of user identity client id.](media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-03.png)
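+
+If you prefer to script these changes instead of using the portal, the following Azure CLI sketch applies the same settings. The function app, resource group, and storage account names are placeholders, and the last setting only applies if you use a user-assigned identity.
+
+```bash
+# Sketch: remove the connection string and switch to identity-based settings.
+az functionapp config appsettings delete --name <function-app> --resource-group <resource-group> \
+  --setting-names AzureWebJobsStorage
+
+az functionapp config appsettings set --name <function-app> --resource-group <resource-group> --settings \
+  "AzureWebJobsStorage__accountName=<storage-account>" \
+  "AzureWebJobsStorage__credential=managedidentity" \
+  "AzureWebJobsStorage__clientId=<user-assigned-identity-client-id>"
+```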
+++
+## Configure your app to use client secret credentials
+
+Registering a client application in Azure Active Directory (Azure AD) is another way you can configure access to an Azure service. In the following steps, you learn how to use client secret credentials for authentication to your Azure Storage account. This method can be used by function apps both locally and on Azure. However, client secret credentials are **less recommended** than managed identity because they're more complicated to configure and manage, and they require sharing a secret credential with the Azure Functions service.
+
+### Prerequisites
+
+The following steps assume that you're starting with an existing Durable Functions app and are familiar with how to operate it.
+In particular, this quickstart assumes that you have already:
+
+* Created a Durable Functions project on your local machine or in the Azure portal.
++
+### Register a client application on Azure Active Directory
+1. Register a client application under Azure Active Directory in the Azure portal according to [these instructions](../../healthcare-apis/register-application.md).
+
+2. Create a client secret for your client application. In your registered application:
+
+ 1. Select **Certificates & Secrets** and select **New client secret**.
+
+ 2. Fill in a **Description** and choose secret valid time in the **Expires** field.
+
+ 3. Copy and save the secret value carefully because it will not show up again after you leave the page.
+
+ ![Screenshot of client secret page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-01.png)
+
+### Assign Role-based Access Controls (RBAC) to the client application
+
+Assign these three roles to your client application with the following steps.
+
+* Storage Queue Data Contributor
+* Storage Blob Data Contributor
+* Storage Table Data Contributor
+
+1. Navigate to your function's storage account **Access Control (IAM)** page and add a new role assignment.
+
+ ![Screenshot of access control page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-02.png)
+
+2. Choose the required role, select **Next**, search for your application, and then review and add the role assignment.
+
+ ![Screenshot of role assignment page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-03.png)
+
+### Add client secret configuration
+
+To run and test in Azure, specify the following settings in your Azure function app's **Configuration** page in the Azure portal. To run and test locally, specify the following settings in the function's **local.settings.json** file.
+
+1. Remove the default value "AzureWebJobsStorage".
+
+2. Link Azure storage account by adding either one of the following value settings:
+
+ * **AzureWebJobsStorage__accountName**: For example: `mystorageaccount123`
+
+ * **AzureWebJobsStorage__blobServiceUri**: Example: `https://mystorageaccount123.blob.core.windows.net/`
+
+ **AzureWebJobsStorage__queueServiceUri**: Example: `https://mystorageaccount123.queue.core.windows.net/`
+
+ **AzureWebJobsStorage__tableServiceUri**: Example: `https://mystorageaccount123.table.core.windows.net/`
+
+ The values for these Uri variables can be found in the storage account under the **Endpoints** tab.
+
+ ![Screenshot of endpoint sample.](media/durable-functions-configure-df-with-credentials/durable-functions-managed-identity-scenario-02.png)
+
+3. Add a client secret credential by specifying the following values:
+ * **AzureWebJobsStorage__clientId**: (this is a GUID value found in the Azure AD application page)
+
+ * **AzureWebJobsStorage__ClientSecret**: (this is the secret value generated in the Azure AD portal in a previous step)
+
+ * **AzureWebJobsStorage__tenantId**: (this is the tenant ID that the Azure AD application is registered in)
+
+ The client ID and tenant ID values can be found on your client application's overview page. The client secret value is the one that was carefully saved in the previous step. It will not be available after the page is refreshed.
+
+ ![Screenshot of application's overview page.](media/durable-functions-configure-df-with-credentials/durable-functions-client-secret-scenario-04.png)
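+
+For the Azure-hosted case, the same values can be applied with the Azure CLI. This is a hedged sketch using placeholder names; the setting names are the ones listed above.
+
+```bash
+# Sketch: configure client secret credentials for the storage connection.
+az functionapp config appsettings set --name <function-app> --resource-group <resource-group> --settings \
+  "AzureWebJobsStorage__accountName=<storage-account>" \
+  "AzureWebJobsStorage__clientId=<application-client-id>" \
+  "AzureWebJobsStorage__ClientSecret=<client-secret-value>" \
+  "AzureWebJobsStorage__tenantId=<tenant-id>"
+```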
+
azure-functions Functions Add Output Binding Cosmos Db Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-add-output-binding-cosmos-db-vs-code.md
Before you get started, make sure to install the [Azure Databases extension](htt
|Prompt| Selection| |--|--|
- |**Select an Azure Database Server**| Choose **Azure Cosmos DB for NoSQL** to create a document database that you can query by using a SQL syntax. [Learn more about the Azure Cosmos DB for NoSQL](../cosmos-db/introduction.md). |
+ |**Select an Azure Database Server**| Choose **Core (SQL)** to create a document database that you can query by using a SQL syntax. [Learn more about Azure Cosmos DB](../cosmos-db/introduction.md). |
|**Account name**| Enter a unique name to identify your Azure Cosmos DB account. The account name can use only lowercase letters, numbers, and hyphens (-), and must be between 3 and 31 characters long.| |**Select a capacity model**| Select **Serverless** to create an account in [serverless](../cosmos-db/serverless.md) mode. |**Select a resource group for new resources**| Choose the resource group where you created your function app in the [previous article](./create-first-function-vs-code-csharp.md). |
azure-functions Functions Compare Logic Apps Ms Flow Webjobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-compare-logic-apps-ms-flow-webjobs.md
This article compares the following Microsoft cloud
-* [Microsoft Power Automate](https://flow.microsoft.com/) (was Microsoft Flow)
+* [Microsoft Power Automate](https://make.powerautomate.com/) (was Microsoft Flow)
* [Azure Logic Apps](https://azure.microsoft.com/services/logic-apps/) * [Azure Functions](https://azure.microsoft.com/services/functions/) * [Azure App Service WebJobs](../app-service/webjobs-create.md)
The following table helps you determine whether Power Automate or Logic Apps is
| **Scenarios** |Self-service |Advanced integrations | | **Design tool** |In-browser and mobile app, UI only |In-browser, [Visual Studio Code](../logic-apps/quickstart-create-logic-apps-visual-studio-code.md), and [Visual Studio](../logic-apps/quickstart-create-logic-apps-with-visual-studio.md) with code view available | | **Application lifecycle management (ALM)** |Design and test in non-production environments, promote to production when ready |Azure DevOps: source control, testing, support, automation, and manageability in [Azure Resource Manager](../logic-apps/logic-apps-azure-resource-manager-templates-overview.md) |
-| **Admin experience** |Manage Power Automate environments and data loss prevention (DLP) policies, track licensing: [Admin center](https://admin.flow.microsoft.com) |Manage resource groups, connections, access management, and logging: [Azure portal](https://portal.azure.com) |
+| **Admin experience** |Manage Power Automate environments and data loss prevention (DLP) policies, track licensing: [Admin center](https://admin.powerplatform.microsoft.com) |Manage resource groups, connections, access management, and logging: [Azure portal](https://portal.azure.com) |
| **Security** |Microsoft 365 security audit logs, DLP, [encryption at rest](https://wikipedia.org/wiki/Data_at_rest#Encryption) for sensitive data |Security assurance of Azure: [Azure security](https://www.microsoft.com/en-us/trustcenter/Security/AzureSecurity), [Microsoft Defender for Cloud](https://azure.microsoft.com/services/security-center/), [audit logs](https://azure.microsoft.com/blog/azure-audit-logs-ux-refresh/) | ## Compare Azure Functions and Azure Logic Apps
azure-functions Functions Reference Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-python.md
Python v2 programming model:
+ [Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-decorators) + [Terminal or command prompt](./create-first-function-cli-python.md?pivots=python-mode-decorators)
+Note that the Python v2 programming model is only supported in the 4.x functions runtime. For more information, see [Azure Functions runtime versions overview](./functions-versions.md).
+ Python v1 programming model: + [Visual Studio Code](./create-first-function-vs-code-python.md?pivots=python-mode-configuration)
azure-functions Recover Python Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/recover-python-functions.md
zone_pivot_groups: python-mode-functions
This article provides information to help you troubleshoot errors with your Python functions in Azure Functions. This article supports both the v1 and v2 programming models. Choose the model you want to use from the selector at the top of the article. The v2 model is currently in preview. For more information on Python programming models, see the [Python developer guide](./functions-reference-python.md).
+> [!NOTE]
+> The Python v2 programming model is only supported in the 4.x functions runtime. For more information, see [Azure Functions runtime versions overview](./functions-versions.md).
++ Here are the troubleshooting sections for common issues in Python functions: ::: zone pivot="python-mode-configuration"
azure-government Documentation Government Impact Level 5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-impact-level-5.md
recommendations: false Previously updated : 10/30/2022 Last updated : 02/09/2023 # Isolation guidelines for Impact Level 5 workloads
Log Analytics may also be used to ingest extra customer-provided logs. These log
- Intune supports Impact Level 5 workloads in Azure Government with no extra configuration required. Line-of-business apps should be evaluated for IL5 restrictions prior to [uploading to Intune storage](/mem/intune/apps/apps-add). While Intune does encrypt applications that are uploaded to the service for distribution, it doesn't support customer-managed keys.
+## Media
+
+For Media services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=cdn,media-services&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
+
+### [Media Services](/azure/media-services/latest/)
+
+- Configure encryption at rest of content in Media Services by [using customer-managed keys in Azure Key Vault](/azure/media-services/latest/concept-use-customer-managed-keys-byok).
++ ## Migration For Migration services availability in Azure Government, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration,azure-migrate&regions=non-regional,usgov-non-regional,us-dod-central,us-dod-east,usgov-arizona,usgov-texas,usgov-virginia&rar=true). For a list of services in scope for DoD IL5 PA, see [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope). Guidance below is provided only for IL5 PA services that require extra configuration to support IL5 workloads.
azure-monitor Automate Custom Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/automate-custom-reports.md
These steps only apply if you don't already have a SendGrid account configured.
* Learn more about creating [Analytics queries](../logs/get-started-queries.md). * Learn more about [programmatically querying Application Insights data](/rest/api/application-insights/) * Learn more about [Logic Apps](../../logic-apps/logic-apps-overview.md).
-* Learn more about [Power Automate](https://ms.flow.microsoft.com).
+* Learn more about [Power Automate](https://make.powerautomate.com).
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
Title: Create a new Application Insights resource | Microsoft Docs description: Manually set up Application Insights monitoring for a new live application. Previously updated : 01/28/2023 Last updated : 02/28/2023 # Create an Application Insights resource
+> [!CAUTION]
+> This article applies to Application Insights Classic resources, which are [no longer recommended](https://azure.microsoft.com/updates/we-re-retiring-classic-application-insights-on-29-february-2024).
+>
+> The information in this article is stale and won't be updated.
+>
+> [Transition to workspace-based Application Insights](convert-classic-resource.md) to take advantage of [new capabilities](create-workspace-resource.md#new-capabilities).
+ Application Insights displays data about your application in an Azure resource. Creating a new resource is part of [setting up Application Insights to monitor a new application][start]. After you've created your new resource, you can get its instrumentation key and use it to configure the Application Insights SDK. The instrumentation key links your telemetry to the resource. > [!IMPORTANT]
After your app is created, a new pane displays performance and usage data about
## Copy the instrumentation key
-The instrumentation key identifies the resource that you want to associate with your telemetry data. You'll need to copy the instrumentation key and add it to your application's code.
+The instrumentation key identifies the resource that you want to associate with your telemetry data. You need to copy the instrumentation key and add it to your application's code.
## Install the SDK in your app
To access the preview Application Insights Azure CLI commands, you first need to
az extension add -n application-insights ```
-If you don't run the `az extension add` command, you'll see an error message that states: `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
+If you don't run the `az extension add` command, you see an error message that states: `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
Run the following command to create your Application Insights resource:
For the full Azure CLI documentation for this command, and to learn how to retri
> [!WARNING] > Don't modify endpoints. [Transition to connection strings](migrate-from-instrumentation-keys-to-connection-strings.md#migrate-from-application-insights-instrumentation-keys-to-connection-strings) to simplify configuration and eliminate the need for endpoint modification.
-To send data from Application Insights to certain regions, you'll need to override the default endpoint addresses. Each SDK requires slightly different modifications, all of which are described in this article.
+To send data from Application Insights to certain regions, you need to override the default endpoint addresses. Each SDK requires slightly different modifications, all of which are described in this article.
These changes require you to adjust the sample code and replace the placeholder values for `QuickPulse_Endpoint_Address`, `TelemetryChannel_Endpoint_Address`, and `Profile_Query_Endpoint_address` with the actual endpoint addresses for your specific region. The end of this article contains links to the endpoint addresses for regions where this configuration is required.
The endpoints can also be configured through environment variables:
# [JavaScript](#tab/js)
-The current snippet listed here is version 5. The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
+The current snippet listed here is version 5. The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
```html <script type="text/javascript">
Currently, the only regions that require endpoint modifications are [Azure Gover
| Azure Government | QuickPulse (Live Metrics) |`https://quickpulse.applicationinsights.us/QuickPulseService.svc` | | Azure Government | Profile Query |`https://dc.applicationinsights.us/api/profiles/{0}/appId` |
-If you currently use the [Application Insights REST API](/rest/api/application-insights/), which is normally accessed via `api.applicationinsights.io`, you'll need to use an endpoint that's local to your region.
+If you currently use the [Application Insights REST API](/rest/api/application-insights/), which is normally accessed via `api.applicationinsights.io`, you need to use an endpoint that's local to your region.
|Region | Endpoint name | Value | |--|:|:-|
azure-monitor Java In Process Agent Redirect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent-redirect.md
- Title: Azure Monitor Application Insights Java (redirect to OpenTelemetry)
-description: Redirect to OpenTelemetry agent
- Previously updated : 11/15/2022-----
-# Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications (redirect to OpenTelemetry)
-
-Whether you are deploying on-premises or in the cloud, you can use Microsoft's OpenTelemetry-based Java Auto-Instrumentation agent.
-
-For more information, see [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](opentelemetry-enable.md?tabs=java#enable-azure-monitor-opentelemetry-for-net-nodejs-python-and-java-applications).
-
-For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
-
-## Next steps
--- [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](opentelemetry-enable.md?tabs=java#enable-azure-monitor-opentelemetry-for-net-nodejs-python-and-java-applications)
azure-monitor Monitor Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/monitor-functions.md
Title: Monitor applications running on Azure Functions with Application Insights - Azure Monitor | Microsoft Docs
-description: Azure Monitor seamlessly integrates with your application running on Azure Functions, and allows you to monitor the performance and spot the problems with your apps in no time.
+description: Azure Monitor integrates with your Azure Functions application, allowing performance monitoring and quickly identifying problems.
Previously updated : 11/14/2022 Last updated : 02/09/2023 # Monitoring Azure Functions with Azure Monitor Application Insights
-[Azure Functions](../../azure-functions/functions-overview.md) offers built-in integration with Azure Application Insights to monitor functions. For languages other than .NET and .NETCore additional language-specific workers/extensions are needed to get the full benefits of distributed tracing.
+[Azure Functions](../../azure-functions/functions-overview.md) offers built-in integration with Azure Application Insights to monitor functions. For languages other than .NET and .NET Core, other language-specific workers/extensions are needed to get the full benefits of distributed tracing.
Application Insights collects log, performance, and error data, and automatically detects performance anomalies. Application Insights includes powerful analytics tools to help you diagnose issues and to understand how your functions are used. When you have the visibility into your application data, you can continuously improve performance and usability. You can even use Application Insights during local function app project development.
-The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid instrumentation key to connect your function app to an Application Insights resource. The instrumentation key should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have this key, you can set it manually. For more information read more about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd).
-
-For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+The required Application Insights instrumentation is built into Azure Functions. The only thing you need is a valid connection string to connect your function app to an Application Insights resource. The connection string should be added to your application settings when your function app resource is created in Azure. If your function app doesn't already have a connection string, you can set it manually. For more information, read more about [monitoring Azure Functions](../../azure-functions/functions-monitoring.md?tabs=cmd) and [connection strings](sdk-connection-string.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
+For a complete list of supported auto-instrumentation scenarios, see [Supported environments, languages, and resource providers](codeless-overview.md#supported-environments-languages-and-resource-providers).
+ ## Distributed tracing for Java applications (public preview) > [!IMPORTANT] > This feature is currently in public preview for Java Azure Functions both Windows and Linux
-If your applications are written in Java you can view richer data from your functions applications, including requests, dependencies, logs, and metrics. The additional data also lets you see and diagnose end-to-end transactions and see the application map, which aggregates many transactions to show a topological view of how the systems interact, and what the average performance and error rates are.
+> [!Note]
+> This feature used to have an 8-9 second cold startup implication, which has been reduced to less than 1 second. If you were an early adopter of this feature (that is, prior to February 2023), review the troubleshooting section to update to the current version and benefit from the new faster startup.
-The end-to-end diagnostics and the application map provide visibility into one single transaction/request. Together these two features are helpful for finding the root cause of reliability issues and performance bottlenecks on a per request basis.
+To view more data from your Java-based Azure Functions applications than is [collected by default](../../azure-functions/functions-monitoring.md?tabs=cmd), you can enable the [Application Insights Java 3.x agent](./java-in-process-agent.md). This agent allows Application Insights to automatically collect and correlate dependencies, logs, and metrics from popular libraries and Azure SDKs, in addition to the request telemetry already captured by Functions.
+
+By using the application map and having a more complete view of end-to-end transactions, you can better diagnose issues and see a topological view of how systems interact, along with data on average performance and error rates. You have more data for end-to-end diagnostics and the ability to use the application map to easily find the root cause of reliability issues and performance bottlenecks on a per request basis.
+
+For more advanced use cases, you're able to modify telemetry (add spans, update span status, add span attributes) or send custom telemetry using standard APIs.
### How to enable distributed tracing for Java Function apps
-Navigate to the functions app Overview pane and go to configurations. Under Application Settings, click "+ New application setting".
+Navigate to the functions app Overview pane and go to configurations. Under Application Settings, select "+ New application setting".
> [!div class="mx-imgBorder"] > ![Under Settings, add new application settings](./media//functions/create-new-setting.png)
-Add the following application settings with below values, then click Save on the upper left. DONE!
+Add the following application settings with below values, then select Save on the upper left. DONE!
+
+```
+APPLICATIONINSIGHTS_ENABLE_AGENT: true
+```
+
+### Troubleshooting
+
+Your Java functions may have slow startup times if you adopted this feature before February 2023. Follow these steps to fix the issue.
#### Windows+
+1. Check to see if the following settings exist and remove them.
+ ``` XDT_MicrosoftApplicationInsights_Java -> 1 ApplicationInsightsAgent_EXTENSION_VERSION -> ~2 ```
-> [!IMPORTANT]
-> This feature will have a cold start implication of 8-9 seconds in the Windows Consumption plan.
+
+2. Enable the latest version by adding this setting.
+
+```
+APPLICATIONINSIGHTS_ENABLE_AGENT: true
+```
#### Linux Dedicated/Premium+
+1. Check to see if the following settings exist and remove it.
+ ``` ApplicationInsightsAgent_EXTENSION_VERSION -> ~3 ```
-#### Linux Consumption
+2. Enable the latest version by adding this setting.
+ ``` APPLICATIONINSIGHTS_ENABLE_AGENT: true ```
-### Troubleshooting
-
-* Sometimes the latest version of the Application Insights Java agent is not
- available in Azure Function - it takes a few months for the latest versions to
- roll out to all regions. In case you need the latest version of Java agent to
- monitor your app in Azure Function to use a specific version of Application
- Insights Java Auto-instrumentation Agent, you can upload the agent manually:
-
- Please follow this [instruction](https://github.com/Azure/azure-functions-java-worker/wiki/Distributed-Tracing-for-Java-Azure-Functions#customize-distribute-agent).
+> [!NOTE]
+> If the latest version of the Application Insights Java agent isn't available in Azure Function, you can upload it manually by following [these instructions](https://github.com/Azure/azure-functions-java-worker/wiki/Distributed-Tracing-for-Java-Azure-Functions#customize-distribute-agent).
[!INCLUDE [azure-monitor-app-insights-test-connectivity](../../../includes/azure-monitor-app-insights-test-connectivity.md)]
To collect custom telemetry from services such as Redis, Memcached, MongoDB, and
* Read more instructions and information about monitoring [Monitoring Azure Functions](../../azure-functions/functions-monitoring.md) * Get an overview of [Distributed Tracing](./distributed-tracing.md) * See what [Application Map](./app-map.md?tabs=net) can do for your business
-* Read about [requests and dependencies for Java apps](./opentelemetry-enable.md?tabs=java)
+* Read about [requests and dependencies for Java apps](./java-in-process-agent.md)
* Learn more about [Azure Monitor](../overview.md) and [Application Insights](./app-insights-overview.md)
azure-monitor Best Practices Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-analysis.md
Last updated 10/18/2021
+# Analyze and visualize data
-# Azure Monitor best practices: Analyze and visualize data
-
-This article is part of the scenario [Recommendations for configuring Azure Monitor](best-practices.md). It describes built-in features in Azure Monitor for analyzing collected data. It also describes options for creating custom visualizations to meet the requirements of different users in your organization. Visualizations like charts and graphs can help you analyze your monitoring data to drill down on issues and identify patterns.
+This article describes built-in features for visualizing and analyzing collected data in Azure Monitor. Visualizations like charts and graphs can help you analyze your monitoring data to drill down on issues and identify patterns. You can create custom visualizations to meet the requirements of different users in your organization.
## Built-in analysis features
-The following sections describe Azure Monitor features that provide analysis of collected data without any configuration.
-
-### Overview page
-
-Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. Because this page is based on platform metrics that are collected automatically, configuration isn't required for this feature.
-
-### Metrics Explorer
-
-You can use Metrics Explorer to interactively work with metric data and create metric alerts. Typically, you need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. Configuration isn't required for this feature after data collection is configured. Platform metrics for Azure resources are automatically available. Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to them. Application metrics are available after Application Insights is configured.
-
-### Log Analytics
-
-With Log Analytics, you can create log queries to interactively work with log data and create log query alerts. Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization.
-
-## Workbooks
-
-[Workbooks](./visualize/workbooks-overview.md) are the visualization platform of choice for Azure. They provide a flexible canvas for data analysis and the creation of rich visual reports. You can use workbooks to tap into multiple data sources from across Azure and combine them into unified interactive experiences. They're especially useful to prepare end-to-end monitoring views across multiple Azure resources.
+This table describes Azure Monitor features that provide analysis of collected data without any configuration.
-Insights use prebuilt workbooks to present you with critical health and performance information for a particular service. You can access a gallery of workbooks on the **Workbooks** tab of the Azure Monitor menu and create custom workbooks to meet the requirements of your different users.
+|Component |Description | Required training and/or configuration|
+|||--|
+|Overview page|Most Azure services have an **Overview** page in the Azure portal that includes a **Monitor** section with charts that show recent critical metrics. This information is intended for owners of individual services to quickly assess the performance of the resource. |This page is based on platform metrics that are collected automatically. No configuration is required. |
+|Metrics Explorer|You can use Metrics Explorer to interactively work with metric data and create metric alerts. You need minimal training to use Metrics Explorer, but you must be familiar with the metrics you want to analyze. |- Once data collection is configured, no other configuration is required.<br>- Platform metrics for Azure resources are automatically available.<br>- Guest metrics for virtual machines are available after an Azure Monitor agent is deployed to the virtual machine.<br>- Application metrics are available after Application Insights is configured. |
+|Log Analytics|With Log Analytics, you can create log queries to interactively work with log data and create log query alerts.| Some training is required for you to become familiar with the query language, although you can use prebuilt queries for common requirements. You can also add [query packs](logs/query-packs.md) with queries that are unique to your organization. Then if you're familiar with the query language, you can build queries for others in your organization. |
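As a lightweight counterpart to the table above, the same platform metrics that Metrics Explorer charts can also be retrieved from the Azure CLI. A minimal sketch, assuming a placeholder virtual machine resource ID:

```azurecli
# List 5-minute averages of a platform metric for a resource (the resource ID is a placeholder).
az monitor metrics list \
  --resource /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm \
  --metric "Percentage CPU" \
  --interval PT5M \
  --aggregation Average
```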
-![Diagram that shows screenshots of three pages from a workbook, including Analysis of Page Views, Usage, and Time Spent on Page.](media/visualizations/workbook.png)
-
-Common scenarios for workbooks:
--- Create an interactive report with parameters where selecting an element in a table dynamically updates associated charts and visualizations.-- Share a report with other users in your organization.-- Collaborate with other workbook authors in your organization by using a public GitHub-based template gallery.
+## Built-in visualization tools
-## Azure dashboards
+### Azure dashboards
-[Azure dashboards](../azure-portal/azure-portal-dashboards.md) are useful in providing a "single pane of glass" over your Azure infrastructure and services. While a workbook provides richer functionality, a dashboard can combine Azure Monitor data with data from other Azure services.
+[Azure dashboards](../azure-portal/azure-portal-dashboards.md) are useful in providing a "single pane of glass" of your Azure infrastructure and services. While a workbook provides richer functionality, a dashboard can combine Azure Monitor data with data from other Azure services.
![Screenshot that shows an example of an Azure dashboard with customizable information.](media/visualizations/dashboard.png)
-Here's a video walk-through on how to create dashboards:
+Here's a video about how to create dashboards:
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE4AslH]
-Common scenarios for dashboards:
+### Azure workbooks
-- Create a dashboard that combines a metrics graph and the results of a log query with operational data for related services.-- Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md).
+ [Workbooks](./visualize/workbooks-overview.md) provide a flexible canvas for data analysis and the creation of rich visual reports. You can use workbooks to tap into multiple data sources from across Azure and combine them into unified interactive experiences. They're especially useful to prepare end-to-end monitoring views across multiple Azure resources. Insights use prebuilt workbooks to present you with critical health and performance information for a particular service. You can access a gallery of workbooks on the **Workbooks** tab of the Azure Monitor menu and create custom workbooks to meet the requirements of your different users.
-For details on how to create a dashboard that includes data from Azure Monitor Logs, see [Create and share dashboards of Log Analytics data](visualize/tutorial-logs-dashboards.md). For details on how to create a dashboard that includes data from Application Insights, see [Create custom key performance indicator (KPI) dashboards using Application Insights](app/tutorial-app-dashboards.md).
+![Diagram that shows screenshots of three pages from a workbook, including Analysis of Page Views, Usage, and Time Spent on Page.](media/visualizations/workbook.png)
-## Grafana
+### Grafana
[Grafana](https://grafana.com/) is an open platform that excels in operational dashboards. It's useful for: - Detecting, isolating, and triaging operational incidents. - Combining visualizations of Azure and non-Azure data sources. These sources include on-premises, third-party tools, and data stores in other clouds.
-Grafana has popular plug-ins and dashboard templates for APM tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multi-cloud monitoring in a single pane of glass.
+Grafana has popular plug-ins and dashboard templates for APM tools such as Dynatrace, New Relic, and AppDynamics. You can use these resources to visualize Azure platform data alongside other metrics from higher in the stack collected by other tools. It also has AWS CloudWatch and GCP BigQuery plug-ins for multicloud monitoring in a single pane of glass.
All versions of Grafana include the [Azure Monitor datasource plug-in](visualize/grafana-plugin.md) to visualize your Azure Monitor metrics and logs.
All versions of Grafana include the [Azure Monitor datasource plug-in](visualize
![Screenshot that shows Grafana visualizations.](media/visualizations/grafana.png)
-Common scenarios for Grafana:
--- Combine time-series and event data in a single visualization panel.-- Create a dynamic dashboard based on user selection of dynamic variables.-- Create a dashboard from a community-created and community-supported template.-- Create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises.-
-## Power BI
+### Power BI
[Power BI](https://powerbi.microsoft.com/documentation/powerbi-service-get-started/) is useful for creating business-centric dashboards and reports, along with reports that analyze long-term KPI trends. You can [import the results of a log query](./logs/log-powerbi.md) into a Power BI dataset. Then you can take advantage of its features, such as combining data from different sources and sharing reports on the web and mobile devices. ![Screenshot that shows an example Power B I report for I T operations.](media/visualizations/power-bi.png)
-Common scenarios for Power BI:
--- Create rich visualizations.-- Benefit from extensive interactivity, including zoom-in and cross-filtering.-- Share easily throughout your organization.-- Integrate data from multiple data sources.-- Experience better performance with results cached in a cube.
+## Choose the right visualization tool
-## Azure Monitor partners
+|Visualization tool|Benefits|Common use cases|Good fit for|
+|:|:|:|:|
+|Azure Workbooks|- Native dashboarding platform in Azure.<br>- Designed for collaborating and troubleshooting.<br>- Out-of-the-box templates and reports.<br>- Fully customizable. |- Create an interactive report with parameters where selecting an element in a table dynamically updates associated charts and visualizations.<br>- Share a report with other users in your organization.<br>- Collaborate with other workbook authors in your organization by using a public GitHub-based template gallery. | |
+|Azure Dashboards|- Native dashboarding platform in Azure.<br>- Supports at-scale deployments.<br>- Supports RBAC.<br>- No added cost.|- Create a dashboard that combines a metrics graph and the results of a log query with operational data for related services.<br>- Share a dashboard with service owners through integration with [Azure role-based access control](../role-based-access-control/overview.md). |Azure/Arc exclusive environments|
+|Grafana |- Multi-platform, multicloud single pane of glass visualizations.<br>- Out-of-the-box plugins from most monitoring tools and platforms.<br>- Dashboard templates with focus on operations.<br>- Supports portability, multi-tenancy, and flexible RBAC.<br>- Azure managed Grafana provides seamless integration with Azure. |- Combine time-series and event data in a single visualization panel.<br>- Create a dynamic dashboard based on user selection of dynamic variables.<br>- Create a dashboard from a community-created and community-supported template.<br>- Create a vendor-agnostic business continuity and disaster scenario that runs on any cloud provider or on-premises. |- Cloud Native CNCF monitoring.<br>- Best with Prometheus.<br>- Multicloud environments.<br>- Combining with 3rd party monitoring tools.|
+|Power BI |- Helps design business-centric KPI dashboards for long-term trends.<br>- Supports BI analytics with extensive slicing and dicing.<br>- Create rich visualizations.<br>- Benefit from extensive interactivity, including zoom-in and cross-filtering.<br>- Share easily throughout your organization.<br>- Integrate data from multiple data sources.<br>- Experience better performance with results cached in a cube. |Dashboarding for long-term trends.|
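If the Grafana row above fits your environment, Azure Managed Grafana can be created directly from the Azure CLI. A minimal sketch; it assumes the optional `amg` extension and placeholder names, and the managed instance typically comes with the Azure Monitor data source preconfigured:

```azurecli
# Managed Grafana commands ship in the amg extension.
az extension add --upgrade --name amg

# Create a managed Grafana workspace (names are placeholders).
az grafana create \
  --name my-grafana \
  --resource-group my-rg
```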
+## Other options
Some Azure Monitor partners provide visualization functionality. For a list of partners that Microsoft has evaluated, see [Azure Monitor partner integrations](./partners.md). An Azure Monitor partner might provide out-of-the-box visualizations to save you time, although these solutions might have an extra cost.
-## Custom application
-
-You can build your own custom websites and applications by using metric and log data in Azure Monitor accessed through a REST API. This approach gives you complete flexibility in UI, visualization, interactivity, and features.
+You can also build your own custom websites and applications using metric and log data in Azure Monitor accessed through a REST API. This approach gives you complete flexibility in UI, visualization, interactivity, and features.
## Next steps-
-To define alerts and automated actions from Azure Monitor data, see [Alerts and automated actions](best-practices-alerts.md).
+- [Deploy Azure Monitor: Alerts and automated actions](best-practices-alerts.md)
+- [Optimize costs in Azure Monitor](best-practices-cost.md)
azure-monitor Container Insights Syslog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-syslog.md
Container Insights offers the ability to collect Syslog events from Linux nodes
- You will need to have managed identity authentication enabled on your cluster. To enable, see [migrate your AKS cluster to managed identity authentication](container-insights-enable-existing-clusters.md?tabs=azure-cli#migrate-to-managed-identity-authentication). Note: This will create a Data Collection Rule (DCR) named `MSCI-<WorkspaceRegion>-<ClusterName>` - Minimum versions of Azure components
- - **Azure CLI**: Minimum version required for Azure CLI is [2.44.1 (link to release notes)](/cli/azure/release-notes-azure-cli#january-11-2023). See [How to update the Azure CLI](/cli/azure/update-azure-cli) for upgrade instructions.
+ - **Azure CLI**: Minimum version required for Azure CLI is [2.45.0 (link to release notes)](/cli/azure/release-notes-azure-cli#february-07-2023). See [How to update the Azure CLI](/cli/azure/update-azure-cli) for upgrade instructions.
- **Azure CLI AKS-Preview Extension**: Minimum version required for AKS-Preview Azure CLI extension is [ 0.5.125 (link to release notes)](https://github.com/Azure/azure-cli-extensions/blob/main/src/aks-preview/HISTORY.rst#05125). See [How to update extensions](/cli/azure/azure-cli-extensions-overview#how-to-update-extensions) for upgrade guidance. - **Linux image version**: Minimum version for AKS node linux image is 2022.11.01. See [Upgrade Azure Kubernetes Service (AKS) node images](https://learn.microsoft.com/azure/aks/node-image-upgrade) for upgrade help.
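Before enabling Syslog collection, one way to check and update these components is with the Azure CLI. A hedged sketch; the resource group, cluster, and node pool names are placeholders:

```azurecli
# Check the installed Azure CLI and extension versions.
az version

# Install or update the aks-preview extension.
az extension add --upgrade --name aks-preview

# Upgrade only the node image of a node pool to pick up a newer Linux image.
az aks nodepool upgrade \
  --resource-group my-rg \
  --cluster-name my-aks-cluster \
  --name nodepool1 \
  --node-image-only
```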
azure-monitor Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/availability-zones.md
Title: Availability zones in Azure Monitor
description: Availability zones in Azure Monitor --++ Last updated 08/18/2021 # Availability zones in Azure Monitor
-[Azure availability zones](../../availability-zones/az-overview.md) protect your applications and data from datacenter failures and can provide resilience for Azure Monitor features that rely on a Log Analytics workspace. When a workspace is linked to an availability zone, it remains active and operational even if a specific datacenter is malfunctioning or even down, by relying on the availability of other zones in the region. You don't need to do anything in order to switch to an alternative zone, or even be aware of the incident.
+[Azure availability zones](../../availability-zones/az-overview.md) protect your applications and data from datacenter failures and can provide resilience for Azure Monitor features that rely on a Log Analytics workspace. When a workspace is linked to an availability-zone-enabled dedicated cluster, it remains active and operational even if a specific datacenter is malfunctioning or even down, by relying on the availability of other zones in the region. You don't need to do anything in order to switch to an alternative zone, or even be aware of the incident.
## Regions
azure-monitor Data Ingestion Time https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-ingestion-time.md
Title: Log data ingestion time in Azure Monitor | Microsoft Docs description: This article explains the different factors that affect latency in collecting log data in Azure Monitor. --++ Last updated 03/21/2022
Last updated 03/21/2022
# Log data ingestion time in Azure Monitor Azure Monitor is a high-scale data service that serves thousands of customers that send terabytes of data each month at a growing pace. There are often questions about the time it takes for log data to become available after it's collected. This article explains the different factors that affect this latency.
-## Typical latency
-Latency refers to the time that data is created on the monitored system and the time that it becomes available for analysis in Azure Monitor. The typical latency to ingest log data is *between 20 seconds and 3 minutes*. The specific latency for any particular data will vary depending on several factors that are explained in this article.
+## Average latency
+Latency refers to the time between the creation of data on the monitored system and the time it becomes available for analysis in Azure Monitor. The average latency to ingest log data is *between 20 seconds and 3 minutes*. The specific latency for any particular data will vary depending on several factors that are explained in this article.
## Factors affecting latency The total ingestion time for a particular set of data can be broken down into the following high-level areas: -- **Agent time**: The time to discover an event, collect it, and then send it to an Azure Monitor Logs ingestion point as a log record. In most cases, this process is handled by an agent. More latency might be introduced by the network.
+- **Agent time**: The time to discover an event, collect it, and then send it to a [data collection endpoint](../essentials/data-collection-endpoint-overview.md) as a log record. In most cases, this process is handled by an agent. More latency might be introduced by the network.
- **Pipeline time**: The time for the ingestion pipeline to process the log record. This time period includes parsing the properties of the event and potentially adding calculated information. - **Indexing time**: The time spent to ingest a log record into an Azure Monitor big data store.
To ensure the Log Analytics agent is lightweight, the agent buffers logs and per
**Varies**
-Network conditions might negatively affect the latency of this data to reach an Azure Monitor Logs ingestion point.
+Network conditions might negatively affect the latency of this data to reach a data collection endpoint.
### Azure metrics, resource logs, activity log **30 seconds to 15 minutes**
-Azure data adds more time to become available at an Azure Monitor Logs ingestion point for processing:
+Azure data adds more time to become available at a data collection endpoint for processing:
-- **Azure platform metrics** are available in under a minute in the metrics database, but they take another 3 minutes to be exported to the Azure Monitor Logs ingestion point.
+- **Azure platform metrics** are available in under a minute in the metrics database, but they take another 3 minutes to be exported to the data collection endpoint.
- **Resource logs** typically add 30 to 90 seconds, depending on the Azure service. Some Azure services (specifically, Azure SQL Database and Azure Virtual Network) currently report their logs at 5-minute intervals. Work is in progress to improve this time further. To examine this latency in your environment, see the [query that follows](#check-ingestion-time). - **Activity log** data is ingested in 30 seconds when you use the recommended subscription-level diagnostic settings to send them into Azure Monitor Logs. They might take 10 to 15 minutes if you instead use the legacy integration.
To determine a solution's collection frequency, see the [documentation for each
**30 to 60 seconds**
-After the data is available at an ingestion point, it takes another 30 to 60 seconds to be available for querying.
+After the data is available at the data collection endpoint, it takes another 30 to 60 seconds to be available for querying.
After log records are ingested into the Azure Monitor pipeline (as identified in the [_TimeReceived](./log-standard-columns.md#_timereceived) property), they're written to temporary storage to ensure tenant isolation and to make sure that data isn't lost. This process typically adds 5 to 15 seconds.
Ingestion time might vary for different resources under different circumstances.
| Step | Property or function | Comments | |:|:|:| | Record created at data source | [TimeGenerated](./log-standard-columns.md#timegenerated) <br>If the data source doesn't set this value, it will be set to the same time as _TimeReceived. | If at processing time the Time Generated value is older than 3 days, the row will be dropped. |
-| Record received by Azure Monitor ingestion endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | This field isn't optimized for mass processing and shouldn't be used to filter large datasets. |
+| Record received by the data collection endpoint | [_TimeReceived](./log-standard-columns.md#_timereceived) | This field isn't optimized for mass processing and shouldn't be used to filter large datasets. |
| Record stored in workspace and available for queries | [ingestion_time()](/azure/kusto/query/ingestiontimefunction) | We recommend using `ingestion_time()` if there's a need to filter only records that were ingested in a certain time window. In such cases, we recommend also adding a `TimeGenerated` filter with a larger range. | ### Ingestion latency delays
azure-monitor Log Analytics Workspace Health https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/log-analytics-workspace-health.md
+
+ Title: Monitor Log Analytics workspace health
+description: This article explains how to monitor the health of a Log Analytics workspace and set up alerts about latency issues specific to the Log Analytics workspace or related to known Azure service issues.
++++ Last updated : 02/07/2023+
+#Customer-intent: As a Log Analytics workspace administrator, I want to know when there are latency issues in a Log Analytics workspace, so I can act to resolve the issue, contact Microsoft for support, or track whether Azure is meeting its SLA.
++
+# Monitor Log Analytics workspace health
+
+[Azure Service Health](../../service-health/overview.md) monitors the health of your cloud resources, including Log Analytics workspaces. When a Log Analytics workspace is healthy, data you collect from resources in your IT environment is available for querying and analysis in a relatively short period of time, known as [latency](../logs/data-ingestion-time.md). This article explains how to view the health status of your Log Analytics workspace and set up alerts to track Log Analytics workspace health status changes.
+
+Azure Service Health monitors:
+
+- [Resource health](../../service-health/resource-health-overview.md): information about the health of your individual cloud resources, such as a specific Log Analytics workspace.
+- [Service health](../../service-health/service-health-overview.md): information about the health of the Azure services and regions you're using, which might affect your Log Analytics workspace, including communications about outages, planned maintenance activities, and other health advisories.
+
+## View Log Analytics workspace health and set up health status alerts
+
+When your Log Analytics workspace operates within the expected [average latency](../logs/data-ingestion-time.md#average-latency) range, the workspace resource health status is **Available**.
+
+To view your Log Analytics workspace health and set up health status alerts:
+
+1. Select **Resource health** from the Log Analytics workspace menu.
+
+ The **Resource health** screen shows:
+
+ - **Health history**: Indicates whether Azure Service Health has detected latency issues related to the specific Log Analytics workspace. To further investigate latency issues related to your workspace, see [Investigate latency](#investigate-log-analytics-workspace-health-issues).
+ - **Azure service issues**: Displayed when a known issue with an Azure service might affect latency in the Log Analytics workspace. Select the message to view details about the service issue in Azure Service Health.
+
+ > [!NOTE]
+ > Service health notifications do not indicate that your Log Analytics workspace is necessarily affected by the known service issue. If your Log Analytics workspace resource health status is **Available**, Azure Service Health did not detect issues in your workspace.
+
+ :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency.png" alt-text="Screenshot that shows the Resource health screen for a Log Analytics workspace.":::
+
+1. To set up health status alerts:
+ 1. Select **Add resource health alert**.
+
+ The **Create alert rule** wizard opens, with the **Scope** and **Condition** panes pre-populated. By default, the rule triggers alerts for all status changes in all Log Analytics workspaces in the subscription. If necessary, you can edit and modify the scope and condition at this stage.
+
+ :::image type="content" source="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" lightbox="media/data-ingestion-time/log-analytics-workspace-latency-alert-rule.png" alt-text="Screenshot that shows the Create alert rule wizard for Log Analytics workspace latency issues.":::
+
+ 1. Follow the rest of the steps in [Create a new alert rule in the Azure portal](../alerts/alerts-create-new-alert-rule.md#create-a-new-alert-rule-in-the-azure-portal).
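If you'd rather script this than use the wizard, a resource health alert is an activity log alert with the `ResourceHealth` category. A hedged Azure CLI sketch with placeholder names and IDs; you can refine the condition and scope afterwards in the portal:

```azurecli
# Create an activity log alert scoped to the workspace for resource health events.
az monitor activity-log alert create \
  --name la-workspace-health \
  --resource-group my-rg \
  --scope /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace \
  --condition "category=ResourceHealth"

# Attach an existing action group so the alert notifies someone.
az monitor activity-log alert action-group add \
  --name la-workspace-health \
  --resource-group my-rg \
  --action-group /subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/my-action-group
```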
+
+## Investigate Log Analytics workspace health issues
+
+To investigate Log Analytics workspace health issues:
+
+- Use [Log Analytics Workspace Insights](../logs/log-analytics-workspace-insights-overview.md), which provides a unified view of your workspace usage, performance, health, agent, queries, and change log.
+- Query the data in your Log Analytics workspace to [understand which factors are contributing to greater-than-expected latency in your workspace](../logs/data-ingestion-time.md).
+- [Use the `_LogOperation` function to view and set up alerts about operational issues](../logs/monitor-workspace.md) logged in your Log Analytics workspace.
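For example, a quick way to surface recent warnings and errors from the `_LogOperation` function is to run the query through the Azure CLI (the workspace GUID is a placeholder):

```azurecli
# Count operational warnings and errors from the last day, grouped by operation.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "_LogOperation | where TimeGenerated > ago(1d) | where Level in ('Warning', 'Error') | summarize Count = count() by Operation, Category, Level"
```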
+
+## Next steps
+
+Learn more about:
+
+- [Log Analytics Workspace Insights](../logs/log-analytics-workspace-insights-overview.md).
+- [Querying log data in Azure Monitor Logs](../logs/get-started-queries.md).
+
azure-monitor Logicapp Flow Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logicapp-flow-connector.md
Last updated 03/22/2022
# Azure Monitor Logs connector for Logic Apps and Power Automate
-[Azure Logic Apps](../../logic-apps/index.yml) and [Power Automate](https://flow.microsoft.com) allow you to create automated workflows using hundreds of actions for various services. The Azure Monitor Logs connector allows you to build workflows that retrieve data from a Log Analytics workspace or an Application Insights application in Azure Monitor. This article describes the actions included with the connector and provides a walkthrough to build a workflow using this data.
+[Azure Logic Apps](../../logic-apps/index.yml) and [Power Automate](https://make.powerautomate.com) allow you to create automated workflows using hundreds of actions for various services. The Azure Monitor Logs connector allows you to build workflows that retrieve data from a Log Analytics workspace or an Application Insights application in Azure Monitor. This article describes the actions included with the connector and provides a walkthrough to build a workflow using this data.
For example, you can create a logic app to use Azure Monitor log data in an email notification from Office 365, create a bug in Azure DevOps, or post a Slack message. You can trigger a workflow by a simple schedule or from some action in a connected service such as when a mail or a tweet is received.
This tutorial shows how to create a logic app that sends the results of an Azure
- Learn more about [log queries in Azure Monitor](./log-query-overview.md). - Learn more about [Logic Apps](../../logic-apps/index.yml)-- Learn more about [Power Automate](https://flow.microsoft.com).
+- Learn more about [Power Automate](https://make.powerautomate.com).
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
If you've configured your storage account to allow access from selected networks
| Scope | Metric namespace | Metric | Aggregation | Threshold | |:|:|:|:|:|
- | storage-name | Account | Ingress | Sum | 80% of maximum ingress per alert evaluation period. For example, the limit is 60 Gbps for general-purpose v2 in West US. The threshold is 14,400 Gb per 5-minute evaluation period. |
+ | storage-name | Account | Ingress | Sum | 80% of maximum ingress per alert evaluation period. For example, the limit is 60 Gbps for general-purpose v2 in West US. The alert threshold is 1676 GiB per 5-minute evaluation period. |
1. Alert remediation actions: - Use a separate storage account for export that isn't shared with non-monitoring data.
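As a sanity check on the threshold above: 80% of 60 Gbps sustained over 5 minutes is 60 / 8 GB/s x 300 s x 0.8 = 1,800 GB, or roughly 1,676 GiB, which matches the value in the table. One way to create that alert with the Azure CLI is sketched below; the storage account ID, action group, and threshold (expressed in bytes) are placeholders you'd adjust to your own region's limit:

```azurecli
az monitor metrics alert create \
  --name storage-ingress-near-limit \
  --resource-group my-rg \
  --scopes /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Storage/storageAccounts/mystorageaccount \
  --condition "total Ingress > 1799591297024" \
  --window-size 5m \
  --evaluation-frequency 5m \
  --action /subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/my-action-group \
  --description "Ingress within 20% of the storage account limit over 5 minutes"
```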
If you've configured your storage account to allow access from selected networks
| Scope | Metric namespace | Metric | Aggregation | Threshold | |:|:|:|:|:|
- | namespaces-name | Event Hubs standard metrics | Incoming bytes | Sum | 80% of maximum ingress per alert evaluation period. For example, the limit is 1 MB/s per unit (TU or PU) and five units used. The threshold is 1,200 MB per 5-minute evaluation period. |
+ | namespaces-name | Event Hubs standard metrics | Incoming bytes | Sum | 80% of maximum ingress per alert evaluation period. For example, the limit is 1 MB/s per unit (TU or PU) and five units used. The threshold is 228 MiB per 5-minute evaluation period. |
| namespaces-name | Event Hubs standard metrics | Incoming requests | Count | 80% of maximum events per alert evaluation period. For example, the limit is 1,000/s per unit (TU or PU) and five units used. The threshold is 1,200,000 per 5-minute evaluation period. | | namespaces-name | Event Hubs standard metrics | Quota exceeded errors | Count | Between 1% of request. For example, requests per 5 minutes is 600,000. The threshold is 6,000 per 5-minute evaluation period. |
The template option doesn't apply.
If the data export rule includes an unsupported table, the configuration will succeed, but no data will be exported for that table. If the table is later supported, then its data will be exported at that time. ## Supported tables
-All data from the table will be exported unless limitations are specified. This list is updated as more tables are added.
+
+> [!NOTE]
+> We are in the process of adding support for more tables. Please check this article regularly.
| Table | Limitations | |:|:|
azure-monitor Logs Dedicated Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-dedicated-clusters.md
az account set --subscription "cluster-subscription-id"
az monitor log-analytics cluster create --no-wait --resource-group "resource-group-name" --name "cluster-name" --location "region-name" --sku-capacity "daily-ingestion-gigabyte" # Wait for job completion when `--no-wait` was used
-$clusterResourceId = az monitor log-analytics cluster list --resource-group "resource-group-name" --query "[?contains(name, "cluster-name")].[id]" --output tsv
+$clusterResourceId = az monitor log-analytics cluster list --resource-group "resource-group-name" --query "[?contains(name, 'cluster-name')].[id]" --output tsv
az resource wait --created --ids $clusterResourceId --include-response-body true ```
azure-monitor Logs Export Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-export-logic-app.md
Go to the **Storage accounts** menu in the Azure portal and select your storage
- Learn more about [log queries in Azure Monitor](./log-query-overview.md). - Learn more about [Logic Apps](../../logic-apps/index.yml).-- Learn more about [Power Automate](https://flow.microsoft.com).
+- Learn more about [Power Automate](https://make.powerautomate.com).
azure-monitor Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/manage-access.md
The following table summarizes the access modes:
| Who is each model intended for? | Central administration.<br>Administrators who need to configure data collection and users who need access to a wide variety of resources. Also currently required for users who need to access logs for resources outside of Azure. | Application teams.<br>Administrators of Azure resources being monitored. Allows them to focus on their resource without filtering. | | What does a user require to view logs? | Permissions to the workspace.<br>See "Workspace permissions" in [Manage access using workspace permissions](./manage-access.md#azure-rbac). | Read access to the resource.<br>See "Resource permissions" in [Manage access using Azure permissions](./manage-access.md#azure-rbac). Permissions can be inherited from the resource group or subscription or directly assigned to the resource. Permission to the logs for the resource will be automatically assigned. The user doesn't require access to the workspace.| | What is the scope of permissions? | Workspace.<br>Users with access to the workspace can query all logs in the workspace from tables they have permissions to. See [Set table-level read access](./manage-access.md#set-table-level-read-access). | Azure resource.<br>Users can query logs for specific resources, resource groups, or subscriptions they have access to in any workspace, but they can't query logs for other resources. |
-| How can a user access logs? | On the **Azure Monitor** menu, select **Logs**.<br><br>Select **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#workbooks). | Select **Logs** on the menu for the Azure resource. Users will have access to data for that resource.<br><br>Select **Logs** on the **Azure Monitor** menu. Users will have access to data for all resources they have access to.<br><br>Select **Logs** from **Log Analytics workspaces**. Users will have access to data for all resources they have access to.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#workbooks). |
+| How can a user access logs? | On the **Azure Monitor** menu, select **Logs**.<br><br>Select **Logs** from **Log Analytics workspaces**.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#azure-workbooks). | Select **Logs** on the menu for the Azure resource. Users will have access to data for that resource.<br><br>Select **Logs** on the **Azure Monitor** menu. Users will have access to data for all resources they have access to.<br><br>Select **Logs** from **Log Analytics workspaces**. Users will have access to data for all resources they have access to.<br><br>From Azure Monitor [workbooks](../best-practices-analysis.md#azure-workbooks). |
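For example, granting a user workspace-context access comes down to a role assignment on the workspace, while resource-context access only needs read access on the monitored resource itself. A hedged sketch with placeholder names and IDs:

```azurecli
# Workspace-context: let a user query everything in the workspace.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Log Analytics Reader" \
  --scope /subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.OperationalInsights/workspaces/my-workspace

# Resource-context: read access to the resource (or its resource group) is enough to query its logs.
az role assignment create \
  --assignee "user@contoso.com" \
  --role "Reader" \
  --scope /subscriptions/<sub-id>/resourceGroups/my-app-rg
```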
## Access control mode
azure-monitor Monitor Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/monitor-workspace.md
Title: Monitor health of Log Analytics workspace in Azure Monitor
+ Title: Monitor operational issues logged in your Azure Monitor Log Analytics workspace
description: The article describes how to monitor the health of your Log Analytics workspace by using data in the Operation table.--++ Last updated 03/21/2022
-# Monitor health of a Log Analytics workspace in Azure Monitor
+# Monitor operational issues in your Azure Monitor Log Analytics workspace
To maintain the performance and availability of your Log Analytics workspace in Azure Monitor, you need to be able to proactively detect any issues that arise. This article describes how to monitor the health of your Log Analytics workspace by using data in the [Operation](/azure/azure-monitor/reference/tables/operation) table. This table is included in every Log Analytics workspace. It contains error messages and warnings that occur in your workspace. We recommend that you create alerts for issues with the level of Warning and Error.
azure-monitor Tables Feature Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/tables-feature-support.md
Last updated 07/10/2022
The following list identifies the tables in a [Log Analytics workspace](log-analytics-workspace-overview.md) that support [transformations](../essentials/data-collection-transformations.md).
+> [!NOTE]
+> We are in the process of adding support for more tables. Please check this article regularly.
| Table | Limitations | |:|:|
azure-percept Retirement Of Azure Percept Dk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/retirement-of-azure-percept-dk.md
Previously updated : 11/10/2022 Last updated : 02/08/2023+ # Retirement of Azure Percept DK
azure-percept Software Releases Over The Air Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-over-the-air-updates.md
Previously updated : 10/04/2022 Last updated : 02/08/2023 +
azure-percept Software Releases Usb Cable Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-percept/software-releases-usb-cable-updates.md
Previously updated : 10/04/2022 Last updated : 02/08/2023 + # Software releases for USB cable updates
Feature Update (2104): [Azure-Percept-DK-1.0.20210409.2055.zip](https://download
## Next steps - [Update the Azure Percept DK over a USB-C cable connection](./how-to-update-via-usb.md)+
azure-video-indexer Logic Apps Connector Arm Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-arm-accounts.md
Azure Video Indexer (AVI) [REST API](https://api-portal.videoindexer.ai/api-deta
> [!TIP] > For the latest `api-version`, chose the latest stable version in [our REST documentation](/rest/api/videoindexer/stable/generate).
-To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://preview.flow.microsoft.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure Video Indexer API.
+To make the integration easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with the Azure Video Indexer API.
You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for the integration gives you better visibility on the health of your workflow and an easy way to debug it.
azure-video-indexer Logic Apps Connector Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/logic-apps-connector-tutorial.md
Last updated 09/21/2020
Azure Video Indexer [REST API](https://api-portal.videoindexer.ai/api-details#api=Operations&operation=Delete-Video) supports both server-to-server and client-to-server communication and enables Azure Video Indexer users to integrate video and audio insights easily into their application logic, unlocking new experiences and monetization opportunities.
-To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://preview.flow.microsoft.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility on the health of your workflow and an easy way to debug it. 
+To make the integration even easier, we support [Logic Apps](https://azure.microsoft.com/services/logic-apps/) and [Power Automate](https://make.powerautomate.com/connectors/shared_videoindexer-v2/video-indexer-v2/) connectors that are compatible with our API. You can use the connectors to set up custom workflows to effectively index and extract insights from a large amount of video and audio files, without writing a single line of code. Furthermore, using the connectors for your integration gives you better visibility on the health of your workflow and an easy way to debug it. 
To help you get started quickly with the Azure Video Indexer connectors, we will do a walkthrough of an example Logic App and Power Automate solution you can set up. This tutorial shows how to set up flows using Logic Apps. However, the editors and capabilities are almost identical in both solutions, thus the diagrams and explanations are applicable to both Logic Apps and Power Automate.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 02/03/2023 Last updated : 02/09/2023
There are some important best practices to follow for optimal performance of NFS
- For optimized performance, choose either **UltraPerformance** gateway or **ErGw3Az** gateway, and enable [FastPath](../expressroute/expressroute-howto-linkvnet-arm.md#configure-expressroute-fastpath) from a private cloud to Azure NetApp Files volumes virtual network. View more detailed information on gateway SKUs at [About ExpressRoute virtual network gateways](../expressroute/expressroute-about-virtual-network-gateways.md). - Based on your performance requirements, select the correct service level needed for the Azure NetApp Files capacity pool. See [Service levels for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-service-levels.md) to understand the throughput allowed per provisioned TiB for each service level. - Create one or more volumes based on the required throughput and capacity. See [Performance considerations](../azure-netapp-files/azure-netapp-files-performance-considerations.md) for Azure NetApp Files to understand how volume size, service level, and capacity pool QoS type will determine volume throughput. For assistance calculating workload capacity and performance requirements, contact your Azure VMware Solution or Azure NetApp Files field expert. The default maximum number of Azure NetApp Files datastores is 64, but it can be increased to a maximum of 256 by submitting a support ticket. To submit a support ticket, see [Create an Azure support request](../azure-portal/supportability/how-to-create-azure-support-request.md).-- Work with your Microsoft representative to ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../availability-zones/az-overview.md#availability-zones).
+- Ensure that the Azure VMware Solution private cloud and the Azure NetApp Files volumes are deployed within the same [availability zone](../../availability-zones/az-overview.md#availability-zones). You can view your AVS private cloud's availability zone in the **Overview** pane of the AVS private cloud.
For performance benchmarks that Azure NetApp Files datastores deliver for virtual machines on Azure VMware Solution, see [Azure NetApp Files datastore performance benchmarks for Azure VMware Solution](../azure-netapp-files/performance-benchmarks-azure-vmware-solution.md).
azure-vmware Tutorial Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-network-checklist.md
Title: Tutorial - Network planning checklist
description: Learn about the network requirements for network connectivity and network ports on Azure VMware Solution. Previously updated : 12/05/2022 Last updated : 2/9/2023 # Networking planning checklist for Azure VMware Solution
The subnets:
| | -- | :: | ::| | | Private Cloud DNS server | On-Premises DNS Server | UDP | 53 | DNS Client - Forward requests from Private Cloud vCenter Server for any on-premises DNS queries (check DNS section below) | | On-premises DNS Server | Private Cloud DNS server | UDP | 53 | DNS Client - Forward requests from on-premises services to Private Cloud DNS servers (check DNS section below) |
-| On-premises network | Private Cloud vCenter Server | TCP(HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. |
+| On-premises network | Private Cloud vCenter Server | TCP (HTTP) | 80 | vCenter Server requires port 80 for direct HTTP connections. Port 80 redirects requests to HTTPS port 443. This redirection helps if you use `http://server` instead of `https://server`. |
| Private Cloud management network | On-premises Active Directory | TCP | 389/636 | These ports are open to allow communications for Azure VMware Solutions vCenter Server to communicate to any on-premises Active Directory/LDAP server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter. Port 636 is recommended for security purposes. | | Private Cloud management network | On-premises Active Directory Global Catalog | TCP | 3268/3269 | These ports are open to allow communications for Azure VMware Solutions vCenter Server to communicate to any on-premises Active Directory/LDAP global catalog server(s). These port(s) are optional - for configuring on-premises AD as an identity source on the Private Cloud vCenter Server. Port 3269 is recommended for security purposes. |
-| On-premises network | Private Cloud vCenter Server | TCP(HTTPS) | 443 | This port allows you to access vCenter Server from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
-| On-premises network | HCX Manager | TCP(HTTPS) | 9443 | Hybrid Cloud Manager Virtual Appliance Management Interface for Hybrid Cloud Manager system configuration. |
+| On-premises network | Private Cloud vCenter Server | TCP (HTTPS) | 443 | This port allows you to access vCenter Server from an on-premises network. The default port that the vCenter Server system uses to listen for connections from the vSphere Client. To enable the vCenter Server system to receive data from the vSphere Client, open port 443 in the firewall. The vCenter Server system also uses port 443 to monitor data transfer from SDK clients. |
+| On-premises network | HCX Manager | TCP (HTTPS) | 9443 | Hybrid Cloud Manager Virtual Appliance Management Interface for Hybrid Cloud Manager system configuration. |
| Admin Network | Hybrid Cloud Manager | SSH | 22 | Administrator SSH access to Hybrid Cloud Manager. |
-| HCX Manager | Interconnect (HCX-IX) | TCP(HTTPS) | 8123 | HCX Bulk Migration Control |
-| HCX Manager | Interconnect (HCX-IX), Network Extension (HCX-NE) | HTTP TCP(HTTPS) | 9443 | Send management instructions to the local HCX Interconnect using the REST API. |
-| Interconnect (HCX-IX)| L2C | TCP(HTTPS) | 443 | Send management instructions from Interconnect to L2C when L2C uses the same path as the Interconnect. |
+| HCX Manager | Interconnect (HCX-IX) | TCP (HTTPS) | 8123 | HCX Bulk Migration Control |
+| HCX Manager | Interconnect (HCX-IX), Network Extension (HCX-NE) | HTTP TCP (HTTPS) | 9443 | Send management instructions to the local HCX Interconnect using the REST API. |
+| Interconnect (HCX-IX)| L2C | TCP (HTTPS) | 443 | Send management instructions from Interconnect to L2C when L2C uses the same path as the Interconnect. |
| HCX Manager, Interconnect (HCX-IX) | ESXi Hosts | TCP | 80,902 | Management and OVF deployment. | | HCX NE, Interconnect (HCX-IX) at Source| HCX NE, Interconnect (HCX-IX) at Destination)| UDP | 4500 | Required for IPSEC<br> Internet key exchange (IKEv2) to encapsulate workloads for the bidirectional tunnel. Network Address Translation-Traversal (NAT-T) is also supported. | | Interconnect (HCX-IX) local | Interconnect (HCX-IX) (remote) | UDP | 500 | Required for IPSEC<br> Internet key exchange (ISAKMP) for the bidirectional tunnel. |
azure-web-pubsub Quickstart Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-serverless.md
In this tutorial, you learn how to:
var path = require('path'); module.exports = function (context, req) {
- var index = 'index.html';
- if (process.env["HOME"] != null)
- {
- index = path.join(process.env["HOME"], "site", "wwwroot", index);
- }
+ var index = context.executionContext.functionDirectory + '/../index.html';
context.log("index.html path: " + index); fs.readFile(index, 'utf8', function (err, data) { if (err) {
In this tutorial, you learn how to:
- Update `index.cs` and replace the `Run` function with the following code. ```c# [FunctionName("index")]
- public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ILogger log)
+ public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
{
- string indexFile = "index.html";
- if (Environment.GetEnvironmentVariable("HOME") != null)
- {
- indexFile = Path.Join(Environment.GetEnvironmentVariable("HOME"), "site", "wwwroot", indexFile);
- }
+ var indexFile = Path.Combine(context.FunctionAppDirectory, "index.html");
log.LogInformation($"index.html path: {indexFile}."); return new ContentResult {
azure-web-pubsub Tutorial Serverless Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-serverless-notification.md
In this tutorial, you learn how to:
var path = require('path'); module.exports = function (context, req) {
- var index = 'index.html';
- if (process.env["HOME"] != null)
- {
- index = path.join(process.env["HOME"], "site", "wwwroot", index);
- }
+ var index = context.executionContext.functionDirectory + '/../index.html';
context.log("index.html path: " + index); fs.readFile(index, 'utf8', function (err, data) { if (err) {
In this tutorial, you learn how to:
- Update `index.cs` and replace the `Run` function with the following code. ```c# [FunctionName("index")]
- public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ILogger log)
+ public static IActionResult Run([HttpTrigger(AuthorizationLevel.Anonymous)] HttpRequest req, ExecutionContext context, ILogger log)
{
- string indexFile = "index.html";
- if (Environment.GetEnvironmentVariable("HOME") != null)
- {
- indexFile = Path.Join(Environment.GetEnvironmentVariable("HOME"), "site", "wwwroot", indexFile);
- }
+ var indexFile = Path.Combine(context.FunctionAppDirectory, "index.html");
log.LogInformation($"index.html path: {indexFile}."); return new ContentResult {
cognitive-services Extract Excel Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/tutorials/extract-excel-information.md
The issues are reported in raw text. We will use the NER feature to extract the
## Create a new Power Automate workflow
-Go to the [Power Automate site](https://preview.flow.microsoft.com/), and login. Then click **Create** and **Scheduled flow**.
+Go to the [Power Automate site](https://make.powerautomate.com/), and login. Then click **Create** and **Scheduled flow**.
:::image type="content" source="../media/tutorials/excel/flow-creation.png" alt-text="The workflow creation screen" lightbox="../media/tutorials/excel/flow-creation.png":::
communication-services Client And Server Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/client-and-server-architecture.md
- Title: Client and server architecture-
-description: Learn about Communication Services' architecture.
--- Previously updated : 06/30/2021---
-# Client and Server Architecture
-
-This page illustrates typical architectural components and dataflows in various Azure Communication Service scenarios. Relevant components include:
-
-1. **Client Application.** This website or native application is leveraged by end-users to communicate. Azure Communication Services provides [SDK client libraries](sdk-options.md) for multiple browsers and application platforms. In addition to our core SDKs, [a UI Library](https://aka.ms/acsstorybook) is available to accelerate browser app development.
-1. **Identity Management Service.** This service capability you build to map users and other concepts in your business logic to Azure Communication Services and also to create tokens for those users when required.
-1. **Call Management Service.** This service capability you build to manage and monitor voice and video calls. This service can create calls, invite users, call phone numbers, play audio, listen to DMTF tones and leverage many other call features through the Calling Automation SDK and REST APIs.
--
-## User access management
-
-Azure Communication Services clients must present `user access tokens` to access Communication Services resources securely. `User access tokens` should be generated and managed by a trusted service due to the sensitive nature of the token and the connection string or Azure AD authentication secrets necessary to generate them. Failure to properly manage access tokens can result in additional charges due to misuse of resources.
--
-### Dataflows
-1. The user starts the client application. The design of this application and user authentication scheme is in your control.
-2. The client application contacts your identity management service. The identity management service maintains a mapping between your users and other addressable objects (for example services or bots) to Azure Communication Service identities.
-3. The identity management service creates a user access token for the applicable identity. If no Azure Communication Services identity has been allocated the past, a new identity is created.
-
-### Resources
-- **Concept:** [User Identity](identity-model.md)-- **Quickstart:** [Create and manage access tokens](../quickstarts/access-tokens.md)-- **Tutorial:** [Build a identity management services use Azure Functions](../tutorials/trusted-service-tutorial.md)-- **Sample:** [Trusted authentication service hero sample](../samples/trusted-auth-sample.md)-
-> [!IMPORTANT]
-> For simplicity, we do not show user access management and token distribution in subsequent architecture flows.
--
-## Calling a user without push notifications
-The simplest voice and video calling scenarios involves a user calling another, in the foreground without push notifications.
--
-### Dataflows
-
-1. The accepting user initializes the Call client, allowing them to receive incoming phone calls.
-2. The initiating user needs the Azure Communication Services identity of the person they want to call. A typical experience may have a *friend's list* maintained by the identity management service that collates the user's friends and associated Azure Communication Service identities.
-3. The initiating user initializes their Call client and calls the remote user.
-4. The accepting user is notified of the incoming call through the Calling SDK.
-5. The users communicate with each other using voice and video in a call.
-
-### Resources
-- **Concept:** [Calling Overview](voice-video-calling/calling-sdk-features.md)-- **Quickstart:** [Add voice calling to your app](../quickstarts/voice-video-calling/getting-started-with-calling.md)-- **Quickstart:** [Add video calling to your app](../quickstarts/voice-video-calling/get-started-with-video-calling.md)-- **Hero Sample:** [Group Calling for Web, iOS, and Android](../samples/calling-hero-sample.md)--
-## Joining a user-created group call
-You may want users to join a call without an explicit invitation. For example, you may have a *social space* with an associated call, and users join that call at their leisure. In this first dataflow, we show a call that is initially created by a client.
--
-### Dataflows
-1. The initiating user initializes their Call client and makes a group call.
-2. The initiating user shares the group call ID with a Call management service.
-3. The Call Management Service shares the call ID with other users. For example, if the application orients around scheduled events, the group call ID might be an attribute of the scheduled event's data model.
-4. Other users join the call using the group call ID.
-5. The users communicate with each other using voice and video in a call.
--
-## Joining a scheduled Teams call
-Azure Communication Services applications can join Teams calls. This capability is ideal for many business-to-consumer scenarios, where the consumer uses a custom application and custom identity, while the business side uses Teams.
---
-### Dataflows
-1. The Call Management Service creates a group call with [Graph APIs](/graph/api/resources/onlinemeeting?view=graph-rest-1.0&preserve-view=true). Another pattern involves end users creating the group call using [Bookings](https://www.microsoft.com/microsoft-365/business/scheduling-and-booking-app), Outlook, Teams, or another scheduling experience in the Microsoft 365 ecosystem.
-2. The Call Management Service shares the Teams call details with Azure Communication Service clients.
-3. Typically, a Teams user must join the call and allow external users to join through the lobby. However, this experience is sensitive to the Teams tenant configuration and specific meeting settings.
-4. Azure Communication Service users initialize their Call client and join the Teams meeting, using the details received in Step 2.
-5. The users communicate with each other using voice and video in a call.
-
-### Resources
-- **Concept:** [Teams Interoperability](teams-interop.md)-- **Quickstart:** [Join a Teams meeting](../quickstarts/voice-video-calling/get-started-teams-interop.md)
communication-services Send Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email.md
Last updated 04/15/2022
-zone_pivot_groups: acs-js-csharp-java-python
+zone_pivot_groups: acs-azcli-js-csharp-java-python
# Quickstart: How to send an email using Azure Communication Services
zone_pivot_groups: acs-js-csharp-java-python
In this quickstart, you'll learn how to send email using our Email SDKs. + ::: zone pivot="programming-language-csharp" [!INCLUDE [Send email with .NET SDK](./includes/send-email-net.md)] ::: zone-end
communication-services Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/sms/send.md
zone_pivot_groups: acs-azcli-js-csharp-java-python
> [!IMPORTANT] > SMS capabilities depend on the phone number you use and the country that you're operating within as determined by your Azure billing address. For more information, visit the [Subscription eligibility](../../concepts/numbers/sub-eligibility-number-capability.md) documentation. >
-> Currently, SMS messages can only be sent to received from United States phone numbers. For more information, see [Phone number types](../../concepts/telephony/plan-solution.md).
+> Currently, SMS messages can only be sent to and received from United States phone numbers. For more information, see [Phone number types](../../concepts/telephony/plan-solution.md).
<br/>
connectors Connectors Integrate Security Operations Create Api Microsoft Graph Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-integrate-security-operations-create-api-microsoft-graph-security.md
With [Azure Logic Apps](../logic-apps/logic-apps-overview.md) and the [Microsoft
Your logic app's workflow can use actions that get responses from the Microsoft Graph Security connector and make that output available to other actions in your workflow. You can also have other actions in your workflow use the output from the Microsoft Graph Security connector actions. For example, if you get high severity alerts through the Microsoft Graph Security connector, you can send those alerts in an email message by using the Outlook connector.
-To learn more about Microsoft Graph Security, see the [Microsoft Graph Security API overview](/graph/security-concept-overview). If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). If you're looking for Power Automate or Power Apps, see [What is Power Automate?](https://flow.microsoft.com/) or [What is Power Apps?](https://powerapps.microsoft.com/)
+To learn more about Microsoft Graph Security, see the [Microsoft Graph Security API overview](/graph/security-concept-overview). If you're new to logic apps, review [What is Azure Logic Apps?](../logic-apps/logic-apps-overview.md). If you're looking for Power Automate or Power Apps, see [What is Power Automate?](https://make.powerautomate.com/) or [What is Power Apps?](https://powerapps.microsoft.com/)
## Prerequisites
container-apps Deploy Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-visual-studio.md
In this tutorial, you'll deploy a containerized ASP.NET Core 6.0 application to
- An Azure account with an active subscription is required. If you don't already have one, you can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - Visual Studio 2022 version 17.2 or higher, available as a [free download](https://visualstudio.microsoft.com). -- [Docker Desktop](https://hub.docker.com/editions/community/docker-ce-desktop-windows) for Windows. Visual Studio uses Docker Desktop for various containerization features. ## Create the project
Begin by creating the containerized ASP.NET Core application to deploy to Azure.
:::image type="content" source="media/visual-studio/container-apps-enable-docker.png" alt-text="A screenshot showing to enable docker."::: -
-### Docker installation
-
-If this is your first time creating a project using Docker, you may get a prompt instructing you to install Docker Desktop. This installation is required for working with containerized apps, as mentioned in the prerequisites, so click **Yes**. You can also download and [install Docker Desktop for Windows from the official Docker site](https://hub.docker.com/editions/community/docker-ce-desktop-windows).
-
-Visual Studio launches the Docker Desktop for Windows installer. You can follow the installation instructions on this page to set up Docker, which requires a system reboot.
- ## Deploy to Azure Container Apps The application includes a Dockerfile because the Enable Docker setting was selected in the project template. Visual Studio uses the Dockerfile to build the container image that is run by Azure Container Apps.
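As a rough command-line counterpart to the Visual Studio flow described above, the Azure CLI `containerapp` extension can build the same Dockerfile and deploy the result. The resource names below are placeholders, and the behavior of `az containerapp up` depends on your extension version, so treat this as a sketch rather than the article's prescribed path.

```azurecli
# Assumption: run from the project directory that contains the Dockerfile.
az extension add --name containerapp --upgrade

# Build the container image and deploy it as a container app with external ingress.
az containerapp up \
  --name my-aspnet-app \
  --resource-group my-resource-group \
  --location eastus \
  --ingress external \
  --target-port 80 \
  --source .
```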
container-registry Container Registry Repository Scoped Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-repository-scoped-permissions.md
Scenarios for creating a token include:
This feature is available in the **Premium** container registry service tier. For information about registry service tiers and limits, see [Azure Container Registry service tiers](container-registry-skus.md).
-> [!IMPORTANT]
-> This feature is currently in preview, and some [limitations apply](#preview-limitations). Previews are made available to you on the condition that you agree to the [supplemental terms of use][terms-of-use]. Some aspects of this feature may change prior to general availability (GA).
-
-## Preview limitations
+## Limitations
* You can't currently assign repository-scoped permissions to an Azure Active Directory identity, such as a service principal or managed identity.
-* You can't create a scope map in a registry enabled for [anonymous pull access](container-registry-faq.yml#how-do-i-enable-anonymous-pull-access-).
+ ## Concepts
The following image shows the relationship between tokens and scope maps.
## Prerequisites
-* **Azure CLI** - Azure CLI commands command examples in this article require Azure CLI version 2.17.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+* **Azure CLI** - Azure CLI command examples in this article require Azure CLI version 2.17.0 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
* **Docker** - To authenticate with the registry to pull or push images, you need a local Docker installation. Docker provides installation instructions for [macOS](https://docs.docker.com/docker-for-mac/), [Windows](https://docs.docker.com/docker-for-windows/), and [Linux](https://docs.docker.com/engine/installation/#supported-platforms) systems. * **Container registry** - If you don't have one, create a Premium container registry in your Azure subscription, or upgrade an existing registry. For example, use the [Azure portal](container-registry-get-started-portal.md) or the [Azure CLI](container-registry-get-started-azure-cli.md).
You can use the Azure portal to create tokens and scope maps. As with the `az ac
The following example creates a token, and creates a scope map with the following permissions on the `samples/hello-world` repository: `content/write` and `content/read`. 1. In the portal, navigate to your container registry.
-1. Under **Repository permissions**, select **Tokens (Preview) > +Add**.
+1. Under **Repository permissions**, select **Tokens > +Add**.
:::image type="content" source="media/container-registry-repository-scoped-permissions/portal-token-add.png" alt-text="Create token in portal"::: 1. Enter a token name.
After the token is validated and created, token details appear in the **Tokens**
To use a token created in the portal, you must generate a password. You can generate one or two passwords, and set an expiration date for each one. New passwords created for tokens are available immediately. Regenerating new passwords for tokens will take 60 seconds to replicate and be available. 1. In the portal, navigate to your container registry.
-1. Under **Repository permissions**, select **Tokens (Preview)**, and select a token.
+1. Under **Repository permissions**, select **Tokens**, and select a token.
1. In the token details, select **password1** or **password2**, and select the Generate icon. 1. In the password screen, optionally set an expiration date for the password, and select **Generate**. It's recommended to set an expiration date. 1. After generating a password, copy and save it to a safe location. You can't retrieve a generated password after closing the screen, but you can generate a new one.
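If you prefer the command line to the portal steps above, the following sketch creates an equivalent token (with an auto-generated scope map) and uses it to authenticate with Docker. The registry, token, and repository names are placeholders; the generated passwords appear in the command output.

```azurecli
# Create a token with read/write access to the samples/hello-world repository.
# A scope map with the same permissions is generated automatically.
az acr token create \
  --name MyToken \
  --registry myregistry \
  --repository samples/hello-world content/write content/read

# Sign in to the registry with one of the generated passwords.
docker login myregistry.azurecr.io \
  --username MyToken \
  --password <password-from-token-create-output>
```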
az acr scope-map update \
In the Azure portal: 1. Navigate to your container registry.
-1. Under **Repository permissions**, select **Scope maps (Preview)**, and select the scope map to update.
+1. Under **Repository permissions**, select **Scope maps**, and select the scope map to update.
1. Under **Repositories**, enter `samples/nginx`, and under **Permissions**, select `content/read` and `content/write`. Then select **+Add**. 1. Under **Repositories**, select `samples/hello-world` and under **Permissions**, deselect `content/write`. Then select **Save**.
Sample output:
### List scope maps
-Use the [az acr scope-map list][az-acr-scope-map-list] command, or the **Scope maps (Preview)** screen in the portal, to list all the scope maps configured in a registry. For example:
+Use the [az acr scope-map list][az-acr-scope-map-list] command, or the **Scope maps** screen in the portal, to list all the scope maps configured in a registry. For example:
```azurecli az acr scope-map list \
MyScopeMap UserDefined 2019-11-15T21:17:34Z Sample scope map
### Show token details
-To view the details of a token, such as its status and password expiration dates, run the [az acr token show][az-acr-token-show] command, or select the token in the **Tokens (Preview)** screen in the portal. For example:
+To view the details of a token, such as its status and password expiration dates, run the [az acr token show][az-acr-token-show] command, or select the token in the **Tokens** screen in the portal. For example:
```azurecli az acr token show \ --name MyToken --registry myregistry ```
-Use the [az acr token list][az-acr-token-list] command, or the **Tokens (Preview)** screen in the portal, to list all the tokens configured in a registry. For example:
+Use the [az acr token list][az-acr-token-list] command, or the **Tokens** screen in the portal, to list all the tokens configured in a registry. For example:
```azurecli az acr token list --registry myregistry --output table
az acr token list --registry myregistry --output table
### Regenerate token passwords
-If you didn't generate a token password, or you want to generate new passwords, run the [az acr token credential generate][az-acr-token-credential-generate] command.Regenerating new passwords for tokens will take 60 seconds to replicate and be available.
+If you didn't generate a token password, or you want to generate new passwords, run the [az acr token credential generate][az-acr-token-credential-generate] command. Regenerating new passwords for tokens will take 60 seconds to replicate and be available.
The following example generates a new value for password1 for the *MyToken* token, with an expiration period of 30 days. It stores the password in the environment variable `TOKEN_PWD`. This example is formatted for the bash shell.
az acr token update --name MyToken --registry myregistry \
--scope-map MyNewScopeMap ```
-In the portal, on the **Tokens (preview)** screen, select the token, and under **Scope map**, select a different scope map.
+In the portal, on the **Tokens** screen, select the token, and under **Scope map**, select a different scope map.
> [!TIP] > After updating a token with a new scope map, you might want to generate new token passwords. Use the [az acr token credential generate][az-acr-token-credential-generate] command or regenerate a token password in the Azure portal.
az acr token update --name MyToken --registry myregistry \
--status disabled ```
-In the portal, select the token in the **Tokens (Preview)** screen, and select **Disabled** under **Status**.
+In the portal, select the token in the **Tokens** screen, and select **Disabled** under **Status**.
To delete a token to permanently invalidate access by anyone using its credentials, run the [az acr token delete][az-acr-token-delete] command.
To delete a token to permanently invalidate access by anyone using its credentia
az acr token delete --name MyToken --registry myregistry ```
-In the portal, select the token in the **Tokens (Preview)** screen, and select **Discard**.
+In the portal, select the token in the **Tokens** screen, and select **Discard**.
## Next steps
In the portal, select the token in the **Tokens (Preview)** screen, and select *
* Learn about [connected registries](intro-connected-registry.md) and using tokens for [access](overview-connected-registry-access.md). <!-- LINKS - External -->
-[terms-of-use]: https://azure.microsoft.com/support/legal/preview-supplemental-terms/
+ <!-- LINKS - Internal --> [az-acr-login]: /cli/azure/acr#az_acr_login
In the portal, select the token in the **Tokens (Preview)** screen, and select *
[az-acr-token-delete]: /cli/azure/acr/token/#az_acr_token_delete [az-acr-token-create]: /cli/azure/acr/token/#az_acr_token_create [az-acr-token-update]: /cli/azure/acr/token/#az_acr_token_update
-[az-acr-token-credential-generate]: /cli/azure/acr/token/credential/#az_acr_token_credential_generate
+[az-acr-token-credential-generate]: /cli/azure/acr/token/credential/#az_acr_token_credential_generate
container-registry Container Registry Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-roles.md
To create or update a custom role using the JSON description, use the [Azure CLI
* Learn about [authentication options](container-registry-authentication.md) for Azure Container Registry.
-* Learn about enabling [repository-scoped permissions](container-registry-repository-scoped-permissions.md) (preview) in a container registry.
+* Learn about enabling [repository-scoped permissions](container-registry-repository-scoped-permissions.md) in a container registry.
cosmos-db Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/insights-overview.md
Title: Monitor Azure Cosmos DB with Azure Monitor Azure Cosmos DB insights| Microsoft Docs description: This article describes the Azure Cosmos DB insights feature of Azure Monitor that provides Azure Cosmos DB owners with a quick understanding of performance and utilization issues with their Azure Cosmos DB accounts.--++ Previously updated : 05/11/2020- Last updated : 02/08/2023+ # Explore Azure Monitor Azure Cosmos DB insights
-Azure Cosmos DB insights provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. This article will help you understand the benefits of this new monitoring experience, and how you can modify and adapt the experience to fit the unique needs of your organization.
+Azure Cosmos DB insights provides a view of the overall performance, failures, capacity, and operational health of all your Azure Cosmos DB resources in a unified interactive experience. This article helps you understand the benefits of this new monitoring experience, and how you can modify and adapt the experience to fit the unique needs of your organization.
## Introduction
-Before diving into the experience, you should understand how it presents and visualizes information.
+Before you begin, you should understand how information is presented and visualized.
It delivers:
-* **At scale perspective** of your Azure Cosmos DB resources across all your subscriptions in a single location, with the ability to selectively scope to only those subscriptions and resources you are interested in evaluating.
+* **At-scale perspective** of your Azure Cosmos DB resources across all your subscriptions in a single location. You can selectively scope to only the subscriptions and resources that you're interested in evaluating.
+* **Drill-down analysis** of a particular Azure Cosmos DB resource. You can diagnose issues or perform detailed analysis by using the categories of utilization, failures, capacity, and operations. Selecting any one of the options provides an in-depth view of the relevant Azure Cosmos DB metrics.
+* **Customizable** experience built on top of Azure Monitor workbook templates. You can change what metrics are displayed, modify or set thresholds that align with your limits, and then save into a custom workbook. Charts in the workbooks can then be pinned to Azure dashboards.
-* **Drill down analysis** of a particular Azure Cosmos DB resource to help diagnose issues or perform detailed analysis by category - utilization, failures, capacity, and operations. Selecting any one of those options provides an in-depth view of the relevant Azure Cosmos DB metrics.
-
-* **Customizable** - This experience is built on top of Azure Monitor workbook templates allowing you to change what metrics are displayed, modify or set thresholds that align with your limits, and then save into a custom workbook. Charts in the workbooks can then be pinned to Azure dashboards.
-
-This feature does not require you to enable or configure anything, these Azure Cosmos DB metrics are collected by default.
+This feature doesn't require you to enable or configure anything. These Azure Cosmos DB metrics are collected by default.
>[!NOTE]
->There is no charge to access this feature and you will only be charged for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
+>There's no charge to access this feature. You'll only be charged for the Azure Monitor essential features you configure or enable, as described on the [Azure Monitor pricing details](https://azure.microsoft.com/pricing/details/monitor/) page.
## View utilization and performance metrics for Azure Cosmos DB
-To view the utilization and performance of your storage accounts across all of your subscriptions, perform the following steps.
+To view the utilization and performance of your storage accounts across all your subscriptions:
1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Search for **Monitor** and select **Monitor**.
+1. Search for **Monitor** and select **Monitor**.
- ![Search box with the word "Monitor" and a dropdown that says Services "Monitor" with a speedometer style image](./media/insights-overview/search-monitor.png)
+ ![Screenshot that shows the Search box with the word "Monitor" and a dropdown that says Services "Monitor" with a speedometer-style image.](./media/insights-overview/search-monitor.png)
-3. Select **Azure Cosmos DB**.
+1. Select **Azure Cosmos DB**.
- ![Screenshot of Azure Cosmos DB overview workbook](./media/insights-overview/cosmos-db.png)
+ ![Screenshot that shows the Azure Cosmos DB Overview workbook.](./media/insights-overview/cosmos-db.png)
### Overview
-On **Overview**, the table displays interactive Azure Cosmos DB metrics. You can filter the results based on the options you select from the following drop-down lists:
-
-* **Subscriptions** - only subscriptions that have an Azure Cosmos DB resource are listed.
+On the **Overview** page, the table displays interactive Azure Cosmos DB metrics. You can filter the results based on the options you select from the following dropdown lists:
-* **Azure Cosmos DB** - You can select all, a subset, or single Azure Cosmos DB resource.
+* **Subscriptions**: Only subscriptions that have an Azure Cosmos DB resource are listed.
+* **Azure Cosmos DB**: You can select all, a subset, or a single Azure Cosmos DB resource.
+* **Time Range**: By default, the last four hours of information are displayed based on the corresponding selections made.
-* **Time Range** - by default, displays the last 4 hours of information based on the corresponding selections made.
+The counter tile under the dropdown lists rolls up the total number of Azure Cosmos DB resources that are in the selected subscriptions. Conditional color-coding or heatmaps for columns in the workbook report transaction metrics. The deepest color has the highest value. A lighter color is based on the lowest values.
-The counter tile under the drop-down lists rolls-up the total number of Azure Cosmos DB resources are in the selected subscriptions. There is conditional color-coding or heatmaps for columns in the workbook that report transaction metrics. The deepest color has the highest value and a lighter color is based on the lowest values.
+Select a dropdown arrow next to one of the Azure Cosmos DB resources to reveal a breakdown of the performance metrics at the individual database container level.
-Selecting a drop-down arrow next to one of the Azure Cosmos DB resources will reveal a breakdown of the performance metrics at the individual database container level:
+![Screenshot that shows the Expanded dropdown that reveals individual database containers and associated performance breakdown.](./media/insights-overview/container-view.png)
-![Expanded drop down revealing individual database containers and associated performance breakdown](./media/insights-overview/container-view.png)
-
-Selecting the Azure Cosmos DB resource name highlighted in blue will take you to the default **Overview** for the associated Azure Cosmos DB account.
+Select the Azure Cosmos DB resource name highlighted in blue to go to the default **Overview** for the associated Azure Cosmos DB account.
### Failures
-Select **Failures** at the top of the page and the **Failures** portion of the workbook template opens. It shows you total requests with the distribution of responses that make up those requests:
+Select the **Failures** tab to open the **Failures** portion of the workbook template. It shows you the total requests with the distribution of responses that make up those requests:
-![Screenshot of failures with breakdown by HTTP request type](./media/insights-overview/failures.png)
+![Screenshot that shows failures with breakdown by HTTP request type.](./media/insights-overview/failures.png)
| Code | Description | |--|:--| | `200 OK` | One of the following REST operations was successful: </br>- GET on a resource. </br> - PUT on a resource. </br> - POST on a resource. </br> - POST on a stored procedure resource to execute the stored procedure.| | `201 Created` | A POST operation to create a resource is successful. |
-| `404 Not Found` | The operation is attempting to act on a resource that no longer exists. For example, the resource may have already been deleted. |
+| `404 Not Found` | The operation is attempting to act on a resource that no longer exists. For example, the resource might have already been deleted. |
-For a full list of status codes, consult the [Azure Cosmos DB HTTP status code article](/rest/api/cosmos-db/http-status-codes-for-cosmosdb).
+For a full list of status codes, see [HTTP status codes for Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb).
### Capacity
-Select **Capacity** at the top of the page and the **Capacity** portion of the workbook template opens. It shows you how many documents you have, your document growth over time, data usage, and the total amount of available storage that you have left. This can be used to help identify potential storage and data utilization issues.
+Select the **Capacity** tab to open the **Capacity** portion of the workbook template. It shows you:
+- How many documents you have.
+- Your document growth over time.
+- Data usage.
+- Total amount of available storage that you have left.
+
+This information helps you to identify potential storage and data utilization issues.
-![Capacity workbook](./media/insights-overview/capacity.png)
+![Screenshot that shows the Capacity workbook.](./media/insights-overview/capacity.png)
-As with the overview workbook, selecting the drop-down next to an Azure Cosmos DB resource in the **Subscription** column will reveal a breakdown by the individual containers that make up the database.
+As with the Overview workbook, selecting the dropdown next to an Azure Cosmos DB resource in the **Subscription** column reveals a breakdown by the individual containers that make up the database.
### Operations
-Select **Operations** at the top of the page and the **Operations** portion of the workbook template opens. It gives you the ability to see your requests broken down by the type of requests made.
+Select the **Operations** tab to open the **Operations** portion of the workbook template. You can see your requests broken down by the type of requests made.
-So in the example below you see that `eastus-billingint` is predominantly receiving read requests, but with a small number of upsert and create requests. Whereas `westeurope-billingint` is read-only from a request perspective, at least over the past four hours that the workbook is currently scoped to via its time range parameter.
+In the following example, you see that `eastus-billingint` is predominantly receiving read requests, but with a few upsert and create requests. You can also see that `westeurope-billingint` is read-only from a request perspective, at least over the past four hours that the workbook is currently scoped to via its time range parameter.
-![Operations workbook](./media/insights-overview/operation.png)
+![Screenshot that shows the Operations workbook.](./media/insights-overview/operation.png)
## View from an Azure Cosmos DB resource 1. Search for or select any of your existing Azure Cosmos DB accounts.
+ :::image type="content" source="./media/insights-overview/cosmosdb-search.png" alt-text="Screenshot that shows searching for Azure Cosmos DB." border="true":::
-2. Once you've navigated to your Azure Cosmos DB account, in the Monitoring section select **Insights (preview)** or **Workbooks** to perform further analysis on throughput, requests, storage, availability, latency, system, and account management.
+1. After you've moved to your Azure Cosmos DB account, in the **Monitoring** section, select **Insights (preview)** or **Workbooks**. Now you can perform further analysis on throughput, requests, storage, availability, latency, system, and account management.
+ :::image type="content" source="./media/insights-overview/cosmosdb-overview.png" alt-text="Screenshot that shows the Azure Cosmos DB Insights Overview page." border="true":::
### Time range
-By default, the **Time Range** field displays data from the **Last 24 hours**. You can modify the time range to display data anywhere from the last 5 minutes to the last seven days. The time range selector also includes a **Custom** mode that allows you to type in the start/end dates to view a custom time frame based on available data for the selected account.
+By default, the **Time Range** field displays data from the last 24 hours. You can modify the time range to display data anywhere from the last 5 minutes to the last 7 days. The time range selector also includes a **Custom** mode. Enter the start/end dates to view a custom time frame based on available data for the selected account.
### Insights overview
-The **Overview** tab provides the most common metrics for the selected Azure Cosmos DB account including:
+The **Overview** tab provides the most common metrics for the selected Azure Cosmos DB account, including:
* Total Requests * Failed Requests (429s)
The **Overview** tab provides the most common metrics for the selected Azure Cos
* Data & Index Usage * Azure Cosmos DB Account Metrics by Collection
-**Total Requests:** This graph provides a view of the total requests for the account broken down by status code. The units at the bottom of the graph are a sum of the total requests for the period.
+**Total Requests**: This graph provides a view of the total requests for the account broken down by status code. The units at the bottom of the graph are a sum of the total requests for the period.
**Failed Requests (429s)**: This graph provides a view of failed requests with a status code of 429. The units at the bottom of the graph are a sum of the total failed requests for the period.
-**Normalized RU Consumption (max)**: This graph provides the max percentage between 0-100% of Normalized RU Consumption units for the specified period.
+**Normalized RU Consumption (max)**: This graph provides the maximum percentage between 0% and 100% of Normalized RU Consumption units for the specified period.
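The same metrics shown on the **Overview** tab can also be pulled outside the workbook. The following Azure CLI sketch assumes an existing account and the standard Azure Monitor metric names; run `az monitor metrics list-definitions` against your account to confirm which names are available.

```azurecli
# Placeholder account and resource group names.
COSMOS_ID=$(az cosmosdb show \
  --name my-cosmos-account \
  --resource-group my-resource-group \
  --query id --output tsv)

# Total request count over the last 24 hours, in one-hour buckets.
az monitor metrics list \
  --resource "$COSMOS_ID" \
  --metric TotalRequests \
  --offset 24h \
  --interval PT1H \
  --aggregation Count

# Maximum normalized RU consumption over the same window.
az monitor metrics list \
  --resource "$COSMOS_ID" \
  --metric NormalizedRUConsumption \
  --offset 24h \
  --interval PT1H \
  --aggregation Maximum
```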
## Pin, export, and expand
-You can pin any one of the metric sections to an [Azure Dashboard](../azure-portal/azure-portal-dashboards.md) by selecting the pushpin icon at the top right of the section.
+You can pin any one of the metric sections to an [Azure dashboard](../azure-portal/azure-portal-dashboards.md) by selecting the pushpin in the upper-right corner of the section.
-![Metric section pin to dashboard example](./media/insights-overview/pin.png)
+![Screenshot that shows the metric section pin to dashboard example.](./media/insights-overview/pin.png)
-To export your data into the Excel format, select the down arrow icon to the left of the pushpin icon.
+To export your data into the Excel format, select the down arrow to the left of the pushpin.
-![Export workbook icon](./media/insights-overview/export.png)
+![Screenshot that shows the Export workbook down arrow.](./media/insights-overview/export.png)
-To expand or collapse all drop-down views in the workbook, select the expand icon to the left of the export icon:
+To expand or collapse all dropdown views in the workbook, select the expand arrow to the left of the down arrow.
-![Expand workbook icon](./media/insights-overview/expand.png)
+![Screenshot that shows the Expand workbook arrow.](./media/insights-overview/expand.png)
## Customize Azure Cosmos DB insights
-Since this experience is built on top of Azure Monitor workbook templates, you have the ability to **Customize** > **Edit** and **Save** a copy of your modified version into a custom workbook.
+This experience is built on top of Azure Monitor workbook templates. You can use **Customize** > **Edit** > **Save** to modify and save a copy of your modified version into a custom workbook.
-![Customize bar](./media/insights-overview/customize.png)
+![Screenshot that shows the Customize button.](./media/insights-overview/customize.png)
-Workbooks are saved within a resource group, either in the **My Reports** section that's private to you or in the **Shared Reports** section that's accessible to everyone with access to the resource group. After you save the custom workbook, you need to go to the workbook gallery to launch it.
+Workbooks are saved within a resource group. The **My Reports** section is private to you. The **Shared Reports** section is accessible to everyone with access to the resource group. After you save the custom workbook, you must go to the workbook gallery to start it.
-![Launch workbook gallery from command bar](./media/insights-overview/gallery.png)
+![Screenshot that shows the Gallery button.](./media/insights-overview/gallery.png)
## Troubleshooting
-For troubleshooting guidance, refer to the dedicated workbook-based insights [troubleshooting article](../azure-monitor/insights/troubleshoot-workbooks.md).
+For troubleshooting guidance, see [Troubleshooting workbook-based insights](../azure-monitor/insights/troubleshoot-workbooks.md).
## Next steps
-* Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [service health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerting to aid in detecting issues.
-
-* Learn the scenarios workbooks are designed to support, how to author new and customize existing reports, and more by reviewing [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
+* Configure [metric alerts](../azure-monitor/alerts/alerts-metric.md) and [Service Health notifications](../service-health/alerts-activity-log-service-notifications-portal.md) to set up automated alerting to aid in detecting issues.
+* For more information on how the scenario workbooks are designed and how to author new and customize existing reports, see [Create interactive reports with Azure Monitor workbooks](../azure-monitor/visualize/workbooks-overview.md).
cosmos-db Merge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/merge.md
Title: Merge partitions in Azure Cosmos DB (preview)
-description: Learn more about the merge partitions capability in Azure Cosmos DB
+ Title: Merge partitions (preview)
+
+description: Reduce the number of physical partitions used for your container with the merge capability in Azure Cosmos DB.
+ + Last updated : 02/07/2023 -- Previously updated : 10/26/2022 # Merge partitions in Azure Cosmos DB (preview)
Based on conditions 1 and 2, our container can potentially benefit from merging
### Merging physical partitions In PowerShell, when the flag `-WhatIf` is passed in, Azure Cosmos DB will run a simulation and return the expected result of the merge, but won't run the merge itself. When the flag isn't passed in, the merge will execute against the resource. When finished, the command will output the current amount of storage in KB per physical partition post-merge.+ > [!TIP] > Before running a merge, it's recommended to set your provisioned RU/s (either manual RU/s or autoscale max RU/s) as close as possible to your desired steady state RU/s post-merge, to help ensure the system calculates an efficient partition layout. #### [PowerShell](#tab/azure-powershell)
-```azurepowershell
-// Add the preview extension
+Use [`Install-Module`](/powershell/module/powershellget/install-module) to install the [Az.CosmosDB](/powershell/module/az.cosmosdb/) module with pre-release features enabled.
+
+```azurepowershell-interactive
$parameters = @{ Name = "Az.CosmosDB" AllowPrerelease = $true
$parameters = @{
Install-Module @parameters ```
-```azurepowershell
-// API for NoSQL
+#### [Azure CLI](#tab/azure-cli)
+
+Use [`az extension add`](/cli/azure/extension#az-extension-add) to install the [cosmosdb-preview](https://github.com/azure/azure-cli-extensions/tree/main/src/cosmosdb-preview) Azure CLI extension.
+
+```azurecli-interactive
+az extension add \
+ --name cosmosdb-preview
+```
+++
+#### [API for NoSQL](#tab/nosql/azure-powershell)
+
+Use `Invoke-AzCosmosDBSqlContainerMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation.
+
+```azurepowershell-interactive
$parameters = @{ ResourceGroupName = "<resource-group-name>" AccountName = "<cosmos-account-name>"
$parameters = @{
Invoke-AzCosmosDBSqlContainerMerge @parameters ```
-```azurepowershell
-// API for MongoDB
+Start the merge by running the same command without the `-WhatIf` parameter.
+
+```azurepowershell-interactive
$parameters = @{ ResourceGroupName = "<resource-group-name>" AccountName = "<cosmos-account-name>" DatabaseName = "<cosmos-database-name>" Name = "<cosmos-container-name>"
- WhatIf = $true
}
-Invoke-AzCosmosDBMongoDBCollectionMerge @parameters
+Invoke-AzCosmosDBSqlContainerMerge @parameters
```
-#### [Azure CLI](#tab/azure-cli)
+#### [API for NoSQL](#tab/nosql/azure-cli)
-```azurecli
-// Add the preview extension
-az extension add --name cosmosdb-preview
-```
+Start the merge by using [`az cosmosdb sql container merge`](/cli/azure/cosmosdb/sql/container#az-cosmosdb-sql-container-merge).
-```azurecli
-// API for NoSQL
+```azurecli-interactive
az cosmosdb sql container merge \ --resource-group '<resource-group-name>' \ --account-name '<cosmos-account-name>' \
az cosmosdb sql container merge \
--name '<cosmos-container-name>' ```
-```azurecli
-// API for MongoDB
+#### [API for MongoDB](#tab/mongodb/azure-powershell)
+
+Use `Invoke-AzCosmosDBMongoDBCollectionMerge` with the `-WhatIf` parameter to preview the merge without actually performing the operation.
+
+```azurepowershell-interactive
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ AccountName = "<cosmos-account-name>"
+ DatabaseName = "<cosmos-database-name>"
+ Name = "<cosmos-container-name>"
+ WhatIf = $true
+}
+Invoke-AzCosmosDBMongoDBCollectionMerge @parameters
+```
+
+Start the merge by running the same command without the `-WhatIf` parameter.
+
+```azurepowershell-interactive
+$parameters = @{
+ ResourceGroupName = "<resource-group-name>"
+ AccountName = "<cosmos-account-name>"
+ DatabaseName = "<cosmos-database-name>"
+ Name = "<cosmos-container-name>"
+}
+Invoke-AzCosmosDBMongoDBCollectionMerge @parameters
+```
+
+#### [API for MongoDB](#tab/mongodb/azure-cli)
+
+Start the merge by using [`az cosmosdb mongodb collection merge`](/cli/azure/cosmosdb/mongodb/collection#az-cosmosdb-mongodb-collection-merge).
+
+```azurecli-interactive
az cosmosdb mongodb collection merge \ --resource-group '<resource-group-name>' \ --account-name '<cosmos-account-name>' \
You can track whether merge is still in progress by checking the **Activity Log*
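If you'd rather check progress from the command line than the portal, here's a minimal sketch using the activity log query in the Azure CLI (the resource group name is a placeholder):

```azurecli
# List recent operations in the account's resource group and scan for the
# merge-related events; widen --offset to cover the window you care about.
az monitor activity-log list \
  --resource-group my-resource-group \
  --offset 2h \
  --query "[].{operation:operationName.localizedValue, status:status.value, time:eventTimestamp}" \
  --output table
```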
## Limitations
+The following are limitations of the merge feature at this time.
+ ### Preview eligibility criteria To enroll in the preview, your Azure Cosmos DB account must meet all the following criteria:
Support for other SDKs is planned for the future.
If you enroll in the preview, the following connectors will fail. -- Azure Data Factory <sup>1</sup>-- Azure Stream Analytics <sup>1</sup>-- Logic Apps <sup>1</sup>-- Azure Functions <sup>1</sup>-- Azure Search <sup>1</sup>-- Azure Cosmos DB Spark connector <sup>1</sup>
+- Azure Data Factory ¹
+- Azure Stream Analytics ¹
+- Logic Apps ¹
+- Azure Functions ¹
+- Azure Search ¹
+- Azure Cosmos DB Spark connector ¹
- Any third party library or tool that has a dependency on an Azure Cosmos DB SDK that isn't .NET V3 SDK v3.27.0 or higher
-<sup>1</sup> Support for these connectors is planned for the future.
+¹ Support for these connectors is planned for the future.
## Next steps
cosmos-db Quickstart Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-java.md
In this quickstart, you create and manage an Azure Cosmos DB for NoSQL account f
> This quickstart is for Azure Cosmos DB Java SDK v4 only. Please view the Azure Cosmos DB Java SDK v4 [Release notes](sdk-java-v4.md), [Maven repository](https://mvnrepository.com/artifact/com.azure/azure-cosmos), Azure Cosmos DB Java SDK v4 [performance tips](performance-tips-java-sdk-v4.md), and Azure Cosmos DB Java SDK v4 [troubleshooting guide](troubleshoot-java-sdk-v4.md) for more information. If you are currently using an older version than v4, see the [Migrate to Azure Cosmos DB Java SDK v4](migrate-java-v4-sdk.md) guide for help upgrading to v4. >
+> [!TIP]
+> If you're working with Azure Cosmos DB resources in a Spring application, we recommend that you consider [Spring Cloud Azure](/azure/developer/java/spring-framework/) as an alternative. Spring Cloud Azure is an open-source project that provides seamless Spring integration with Azure services. To learn more about Spring Cloud Azure, and to see an example using Cosmos DB, see [Access data with Azure Cosmos DB NoSQL API](/azure/developer/java/spring-framework/configure-spring-boot-starter-java-app-with-cosmos-db).
+ ## Prerequisites - An Azure account with an active subscription.
cost-management-billing Cost Management Billing Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/cost-management-billing-overview.md
After discounts are applied, cost details then flow into Cost Management, where:
- The cost allocation engine applies tag inheritance and [splits shared costs](./costs/allocate-costs.md). - AWS cost and usage reports are pulled based on any [connectors for AWS](./costs/aws-integration-manage.md) you may have configured. - Azure Advisor cost recommendations are pulled in to enable cost savings insights for subscriptions and resource groups.-- Cost alerts are sent out for [budgets](./costs/tutorial-acm-create-budgets.md), [anomalies](./understand/analyze-unexpected-charges.md#create-an-anomaly-alert), [scheduled alerts](./costs/save-share-views.md#subscribe-to-cost-alerts), and more based on the configured settings.
+- Cost alerts are sent out for [budgets](./costs/tutorial-acm-create-budgets.md), [anomalies](./understand/analyze-unexpected-charges.md#create-an-anomaly-alert), [scheduled alerts](./costs/save-share-views.md#subscribe-to-scheduled-alerts), and more based on the configured settings.
Lastly, cost details are made available from [cost analysis](./costs/quick-acm-cost-analysis.md) in the Azure portal and published to your storage account via [scheduled exports](./costs/tutorial-export-acm-data.md).
Cost Management and Billing offer many different types of emails and alerts to k
- [**Budget alerts**](./costs/tutorial-acm-create-budgets.md) notify recipients when cost exceeds a predefined cost or forecast amount. Budgets can be visualized in cost analysis and are available on every scope supported by Cost Management. Subscription and resource group budgets can also be configured to notify an action group to take automated actions to reduce or even stop further charges. - [**Anomaly alerts**](./understand/analyze-unexpected-charges.md)notify recipients when an unexpected change in daily usage has been detected. It can be a spike or a dip. Anomaly detection is only available for subscriptions and can be viewed within the cost analysis preview. Anomaly alerts can be configured from the cost alerts page.-- [**Scheduled alerts**](./costs/save-share-views.md#subscribe-to-cost-alerts) notify recipients about the latest costs on a daily, weekly, or monthly schedule based on a saved cost view. Alert emails include a visual chart representation of the view and can optionally include a CSV file. Views are configured in cost analysis, but recipients don't require access to cost in order to view the email, chart, or linked CSV.
+- [**Scheduled alerts**](./costs/save-share-views.md#subscribe-to-scheduled-alerts) notify recipients about the latest costs on a daily, weekly, or monthly schedule based on a saved cost view. Alert emails include a visual chart representation of the view and can optionally include a CSV file. Views are configured in cost analysis, but recipients don't require access to cost in order to view the email, chart, or linked CSV.
- **EA commitment balance alerts** are automatically sent to any notification contacts configured on the EA billing account when the balance is 90% or 100% used. - **Invoice alerts** can be configured for MCA billing profiles and Microsoft Online Services Program (MOSP) subscriptions. For details, see [View and download your Azure invoice](./understand/download-azure-invoice.md).
For other options, see [Azure benefits and incentives](https://azure.microsoft.c
Now that you're familiar with Cost Management + Billing, the next step is to start using the service. - Start using Cost Management to [analyze costs](./costs/quick-acm-cost-analysis.md).-- You can also read more about [Cost Management best practices](./costs/cost-mgt-best-practices.md).
+- You can also read more about [Cost Management best practices](./costs/cost-mgt-best-practices.md).
cost-management-billing Save Share Views https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/save-share-views.md
When downloading data, cost analysis includes summarized data as it's shown in t
If you need more advanced summaries or you're interested in raw data that hasn't been summarized, schedule an export to publish raw data to a storage account on a recurring basis.
-## Subscribe to cost alerts
+## Subscribe to scheduled alerts
In addition to saving and opening views repeatedly or sharing them with others manually, you can subscribe to updates on a recurring schedule to get alerted as costs change. You can also set up alerts to be shared with others who may not have direct access to costs in the portal.
-### To subscribe to cost alerts
+### To subscribe to scheduled alerts
-1. In cost analysis, select a private or shared view you want to subscribe to alerts for or create and save a new chart view.
+1. In Cost analysis, select any chart view you want to subscribe to or create and save a new chart view.
+ - Note that built-in views (for example, Accumulated costs, Daily costs, or Cost by service) can't be changed, so if you need to change the date range, currency, amortization, or any other setting, you'll need to save it as a private or shared view.
1. Select **Subscribe** at the top of the page. 1. Select **+ Add** at the top of the list of alerts. 1. Specify the desired email settings and select **Save**. - The **Name** helps you distinguish the different emails setup for the current view. Use it to indicate audience or purpose of this specific email. - The **Subject** is what people will see when they receive the email.
- - You can include up to 20 recipients. Consider using a distribution list if you have a large audience. To see how the email looks, start by sending it only to yourself. You can update it later.
+ - You can include up to 20 recipients. Consider using a distribution list if you have a large audience. To see how the email looks, start by sending it only to yourself. You can update it later.
- The **Message** is shown in the email to give people some additional context about why they're receiving the email. You may want to include what it covers, who requested it, or who to contact to make changes. - If you want to include an unauthenticated link to the data (for people who don't have access to the scope/view), select **CSV** in the **Include link to data** list. - If you want to allow people who have write access to the scope to change the email configuration settings, check the **Allow contributors to change these settings** option. For example, you might to allow billing account admins or Cost Management Contributors. By default it is unselected and only you can see or edit the scheduled email.
cost-management-billing Change Credit Card https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/change-credit-card.md
tags: billing
Previously updated : 11/15/2022 Last updated : 02/08/2023
In the Azure portal, you can change your default payment method to a new credit
- For a Microsoft Online Service Program (pay-as-you-go) account, you must be an [Account Administrator](add-change-subscription-administrator.md#whoisaa). - For a Microsoft Customer Agreement, you must have the correct [MCA permissions](understand-mca-roles.md) to make these changes. + The supported payment methods for Microsoft Azure are credit cards, debit cards, and check wire transfer. To get approved to pay by check wire transfer, see [Pay for your Azure subscription by check or wire transfer](pay-by-invoice.md). >[!NOTE]
cost-management-billing Link Partner Id Power Apps Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/link-partner-id-power-apps-accounts.md
ms.devlang: azurecli
Microsoft partners who are Power Platform and Dynamics 365 Customer Insights service providers work with their customers to manage, configure, and support Power Platform and Customer Insights resources. To get credit for the services, you can associate your partner network ID with the Azure credential used for service delivery that's in your customersΓÇÖ production environments using the Partner Admin Link (PAL).
-PAL allows Microsoft to identify and recognize partners that have Power Platform and Customer Insights customers. Microsoft attributes usage to a partner's organization based on the account's permissions (user role) and scope (tenant, resource, and so on). The attribution is used for Advanced Specializations, such as the [Microsoft Low Code Advanced Specializations](https://partner.microsoft.com/membership/advanced-specialization#tab-content-2), and [Partner Incentives](https://partner.microsoft.com/asset/collection/microsoft-commerce-incentive-resources#/).
+PAL allows Microsoft to identify and recognize partners that have Power Platform and Customer Insights customers. Microsoft attributes usage to a partner's organization based on the account's permissions (user role) and scope (tenant, resource, and so on). The attribution is used for Specializations, such as the [Microsoft Low Code Advanced Specializations](https://partner.microsoft.com/membership/advanced-specialization#tab-content-2), and [Partner Incentives](https://partner.microsoft.com/asset/collection/microsoft-commerce-incentive-resources#/).
The following sections explain how to:
-1. Get access accounts from your customer
-2. Link your access account to your partner ID
-3. Attribute your access account to the product resource
+1. **Initiation** - get service account from your customer
+2. **Registration** - link your access account to your partner ID
+3. **Attribution** - attribute your service account to the Power Platform & Dynamics Customer Insights resources using Solutions
We recommend taking these actions in the sequence above.
The attribution step is critical and typically happens automatically, as the par
:::image type="content" source="./media/link-partner-id-power-apps-accounts/partner-admin-link-steps.png" alt-text="Images showing the three steps listed above." border="false" lightbox="./media/link-partner-id-power-apps-accounts/partner-admin-link-steps.png" :::
-## Get access accounts from your customer
+## Initiation - get service account from your customer
-Before you link your partner ID, your customer must give you access to their Power Platform or Customer Insights resources. They use one of the following options:
+Use a dedicated Service Account for work performed and delivered into production.
-* **Directory account** - Your customer can create a dedicated user account, or a user account to act as a service account, in their own directory, and provide access to the product(s) you're working on in production.
-* **Service principal** - Your customer can add an app or script from your organization in their directory and provide access to the product you're working on in production.
+Through the normal course of business with your customer, determine ownership and access rights of a service account dedicated to you as a partner.
-## Link your access account to your partner ID
+[Creating a Service Account Video](https://aka.ms/ServiceAcct)
-Linking your access account to your partner ID is also called *PAL association*. When you have access to a Production Environment access account, you can use PAL to link the account to your Microsoft partner location ID.
+## Registration - link your access account to your partner ID
-For directory accounts (user or service), use the graphical web-based Azure portal, PowerShell, or the Azure CLI to link to your Microsoft partner location ID.
+Perform PAL Association on this Service Account.
-For service principal, use PowerShell or the Azure CLI to provide the link your Microsoft partner location ID. Link the partner ID to each customer resource.
+[PAL Association Via Azure portal Video](https://aka.ms/PALAssocAzurePortal)
To use the Azure portal to link to a new partner ID:
-1. Sign in to the [Azure portal](https://portal.azure.com).
-2. Go to [Link to a partner ID](https://portal.azure.com/#blade/Microsoft_Azure_Billing/managementpartnerblade) in the Azure portal.
+1. Go to [Link to a partner ID](https://portal.azure.com/#blade/Microsoft_Azure_Billing/managementpartnerblade) in the Azure portal.
+2. Sign in to the Azure portal.
3. Enter the [Microsoft Cloud Partner Program](https://partner.microsoft.com/) ID for your organization. Be sure to use the **Associated Partner ID** shown on your partner center profile. It's typically known as your [partner location ID](/partner-center/account-structure). :::image type="content" source="./media/link-partner-id-power-apps-accounts/link-partner-id.png" alt-text="Screenshot showing the Link to a partner ID window." lightbox="./media/link-partner-id-power-apps-accounts/link-partner-id.png" ::: > [!NOTE] > To link your partner ID to another customer, switch the directory. Under **Switch directory**, select the appropriate directory.
-For more information about using PowerShell or the Azure CLI, see [Use PowerShell, CLI, and other tools](#use-powershell-azure-cli-and-other-tools).
+For more information about using PowerShell or the Azure CLI, see sections under [Alternate approaches](#alternate-approaches).
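For reference, the following is the equivalent Azure CLI sketch of the registration step, assuming you're signed in with the service account in the customer's tenant and that the `managementpartner` extension is available (the IDs below are placeholders):

```azurecli
# Sign in with the service account your customer provided, in their tenant.
az login --tenant <customer-tenant-id>

# The managementpartner extension provides the Partner Admin Link commands.
az extension add --name managementpartner

# Link the signed-in account to your associated partner (location) ID.
az managementpartner create --partner-id <associated-partner-id>

# Verify the link for the signed-in account.
az managementpartner show --partner-id <associated-partner-id>
```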
-## Attribute your access account to product resource
+## Attribution - attribute your service account to the resource using Solutions
-To count the usage of a specific resource, the partner user or guest account needs to be attributed to the *resource* for Power Platform or Dynamics Customer Insights. The access account is the one that you received from your customer. It's the same account that was linked through the Partner Admin Link (PAL).
+To count the usage of a specific resource, the partner service account needs to be attributed to the *resource* for Power Platform or Dynamics Customer Insights.
-To ensure success, we strongly recommend that you use Solutions where available to import your deliverables into the customers Production Environment via a Managed Solution. When you use Solutions, the account used to import the Solution becomes the owner of each deliverable inside the Solution. Linking the account to your partner ID ensures all deliverables inside the Solution are associated to your partner ID, automatically handling this step.
+To ensure success, we strongly recommend that you use [Solutions](/power-apps/maker/data-platform/solutions-overview) where available to import your deliverables into the customer's Production Environment via a Managed Solution. Use the Service account to install these Solutions into production environments. The last account with a PAL association that imports the solution assumes ownership of all objects inside the Solution and receives the usage credit.
+
+[Attributing the account to Power Platform & Customer Insights resources using Solutions](https://aka.ms/AttributetoResources)
+
+The resource and attributed user logic differ for each product and are detailed in the following table.
| Product | Primary Metric | Resource | Attributed User Logic | |||||
To ensure success, we strongly recommend that you use Solutions where available
| Power BI | Monthly Active Users (MAU) | Dataset | The user must be the publisher of the dataset. For more information, see [Publish datasets and reports from Power BI Desktop](/power-bi/create-reports/desktop-upload-desktop-files). In cases of multiple partners being mapped to a single dataset, the user's activity is reviewed to select the *latest* partner. | | Customer Insights | Unified Profiles | Instance | Any active user of an Instance is treated as the attributed user. In cases of multiple partners being mapped to a single Instance, the user's activity is reviewed to select the *latest* partner. |
+## Validation
+
+A PAL association is a Boolean operation: either the link exists or it doesn't. Once the association is made, you can verify it visually in the Azure portal or with a PowerShell command. Either option shows your organization name and partner ID, confirming that the account and partner ID are correctly linked.
+++
+## Alternate approaches
+
+The following sections describe alternate approaches that you can use to apply PAL for Power Platform and Customer Insights.
+
+### Associate PAL with user accounts
+
+The Attribution step can also be completed with **user accounts**. While we include this as an option, it has some downsides. Partners with a large number of users must manage user accounts whenever people join or leave the team. If you choose to associate PAL in this way, you need to manage the users via a spreadsheet.
+
+To associate PAL with user accounts, follow the same steps as with service accounts, but do so for each user.
+ Other points about products: * **Power Apps - Canvas Applications**
Other points about products:
* Make sure the user publishing the report performs the PAL association. * Use PowerShell to publish as any user or Service Account.
-## Use PowerShell, Azure CLI, and other tools
-
-The following sections cover PowerShell, Azure CLI, and other tools to manage ownership and link partner IDs.
### Tooling to update or change attributed users
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 01/09/2023 Last updated : 02/08/2023
This article applies to customers with a Microsoft Customer Agreement (MCA) and to customers who signed up for Azure through the Azure website (for a Microsoft Online Services Program account also called pay-as-you-go account). If you signed up for Azure through a Microsoft representative, then your default payment method is already set to *check or wire transfer*. + If you switch to pay by check or wire transfer, that means you pay your bill within 30 days of the invoice date by check/wire transfer. When you request to change your payment method to check/wire transfer, there are two possible results:
Payments made by wire transfer have processing times that vary, depending on the
- Wire transfers (domestic) - Four business days. Two days to arrive, plus two days to post. - Wire transfers (international) - Seven business days. Five days to arrive, plus two days to post.
-If your account is approved for payment by check or wire transfer, the instructions for payment can are found on the invoice.
+If your account is approved for payment by check or wire transfer, the instructions for payment can be found on the invoice.
## Check access to a Microsoft Customer Agreement [!INCLUDE [billing-check-mca](../../../includes/billing-check-mca.md)]
Occasionally Microsoft needs legal documentation if the information you provided
## Next steps
-* If needed, update your billing contact information at the [Azure portal](https://portal.azure.com).
+* If needed, update your billing contact information at the [Azure portal](https://portal.azure.com).
cost-management-billing Resolve Past Due Balance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/resolve-past-due-balance.md
tags: billing
Previously updated : 03/11/2022 Last updated : 02/08/2023
If you have a Microsoft Customer Agreement billing account, see [Pay Microsoft C
If your payment isn't received or if we can't process your payment, you'll get an email and see an alert in the Azure portal telling you that your subscription is past due. The email contains a link that takes you to the Settle balance page.
-If your default payment method is credit card, the [Account Administrator](add-change-subscription-administrator.md#whoisaa) can settle the outstanding charges in the Azure portal. If you pay by invoice (check wire transfer), send your payment to the location listed at the bottom of your invoice.
+
+If your default payment method is credit card, the [Account Administrator](add-change-subscription-administrator.md#whoisaa) can settle the outstanding charges in the Azure portal. If you pay by invoice (check/wire transfer), send your payment to the location listed at the bottom of your invoice.
> [!IMPORTANT] > * If you have multiple subscriptions using the same credit card and they are all past due, you must pay the entire outstanding balance at once.
cost-management-billing Withholding Tax Credit India https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/withholding-tax-credit-india.md
tags: billing
Previously updated : 03/22/2022 Last updated : 02/08/2023
Your WHT request must include the following items:
Submit the WHT request by opening a ticket with Microsoft support. + ## Credit card payment If your payment method is a credit card and you made a full payment to MRS, and paid WHT to the Income Tax Department, you must submit a WHT request to claim the refund of the tax amount.
cost-management-billing Mpa Invoice Terms https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mpa-invoice-terms.md
tags: billing
Previously updated : 09/15/2021 Last updated : 02/08/2023
The **Billing details by product** section lists the total charges for each prod
At the bottom of the invoice, there are instructions for paying your bill. You can pay by check or wire within 60 days of your invoice date. + ## Publisher information If you have third-party services in your bill, the name and address of each publisher is listed at the bottom of your invoice.
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 01/13/2023 Last updated : 02/08/2023
There are two ways to pay for your bill for Azure. You can pay with the default
If you signed up for Azure through a Microsoft representative, then your default payment method will always be set to *check or wire transfer*. Automatic credit card payment isn't an option if you signed up for Azure through a Microsoft representative. Instead, you can [pay with a credit card for individual invoices](#pay-now-in-the-azure-portal). + If you have a Microsoft Online Services Program account, your default payment method is credit card. Payments are normally automatically deducted from your credit card, but you can also make one-time payments manually by credit card. If you have Azure credits, they automatically apply to your invoice each billing period.
data-factory Change Data Capture Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/change-data-capture-troubleshoot.md
In non-virtual network factories, CDC resources requiring a virtual network will
If you create a new linked service using the CDC fly-out process that points to an Azure Key Vault linked service, the CDC resource will break. This fix is in progress.
+## Issue: Trouble tracking delete operations
+
+Currently, the CDC resource supports delete operations for the following sink types: Azure SQL Database and Delta. To track deletes, on the column mapping page, select the **keys** column that is used to determine whether a row from the source matches a row from the sink.
+
+## Issue: My CDC resource fails when the target SQL table has identity columns
+
+You get the following error when running a CDC if your target sink table has identity columns:
+
+*_Cannot insert explicit value for identity column in table 'TableName' when IDENTITY_INSERT is set to OFF._*
+
+Run the following query to determine whether your SQL-based target has an identity column.
+
+**Query 4**
+
+```sql
+SELECT *
+FROM sys.identity_columns
+WHERE OBJECT_NAME(object_id) = 'TableName'
+```
+
+To resolve this issue, follow either of these steps:
++
+1. Set IDENTITY_INSERT to ON by running the following query at the database level, and then rerun the CDC Mapper:
+
+**Query 5**
+
+```sql
+SET IDENTITY_INSERT dbo.TableName ON;
+```
+
+(Or)
+
+2. Remove the specific identity column from the mapping while performing inserts.
++ ## Next steps - [Learn more about the change data capture resource](concepts-change-data-capture-resource.md) - [Set up a change data capture resource](how-to-change-data-capture-resource.md)
data-factory Concepts Change Data Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-change-data-capture.md
When you perform data integration and ETL processes in the cloud, your jobs can
### Change Data Capture factory resource
-The easiest and quickest way to get started in data factory with CDC is through the factory level Change Data Capture resource. From the main pipeline designer, click on New under Factory Resources to create a new Change Data Capture. The CDC factory resource will provide a configuration walk-through experience where you will point to your sources and destinations, apply optional transformations, and then click start to begin your data capture. With the CDC resource, you will not beed to design pipelines or data flow activities and the only billing will be 4 cores of General Purpose data flows while your data in being processed. You set a latency which ADF will use to wake-up and look for changed data. That is the only time you will be billed. The top-level CDC resource is also the ADF method of running your processes continuously. Pipelines in ADF are batch only. But the CDC resource can run continuously.
+The easiest and quickest way to get started in data factory with CDC is through the factory level Change Data Capture resource. From the main pipeline designer, click on New under Factory Resources to create a new Change Data Capture. The CDC factory resource will provide a configuration walk-through experience where you will point to your sources and destinations, apply optional transformations, and then click start to begin your data capture. With the CDC resource, you will not need to design pipelines or data flow activities and the only billing will be 4 cores of General Purpose data flows while your data is being processed. You set a latency which ADF will use to wake up and look for changed data. That is the only time you will be billed. The top-level CDC resource is also the ADF method of running your processes continuously. Pipelines in ADF are batch only. But the CDC resource can run continuously.
### Native change data capture in mapping data flow
defender-for-cloud Defender For Devops Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-devops-introduction.md
Defender for DevOps helps unify, strengthen and manage multi-pipeline DevOps sec
|--|--| | Release state: | Preview<br>The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. | | Clouds | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet) |
-| Regions: | Central US |
+| Regions: | Australia East, Central US, West Europe |
| Source Code Management Systems | [Azure DevOps](https://portal.azure.com/#home) <br>[GitHub](https://github.com/) supported versions: GitHub Free, Pro, Team, and GitHub Enterprise Cloud | | Required permissions: | <br> **Azure account** - with permissions to sign into Azure portal. <br> **Contributor** - on the relevant Azure subscription. <br> **Organization Administrator** - in GitHub. <br> **Security Admin role** - in Defender for Cloud. |
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
API calls performed by Defender for Cloud count against the [Azure DevOps Global
| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. | | Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). | | Required permissions: | **- Azure account:** with permissions to sign into Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in Azure DevOps <br> **- Basic or Basic + Test Plans Access Level:** in Azure DevOps. <br> - In Azure DevOps, configure: Third-party applications gain access via OAuth, which must be set to `On` . [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops)|
-| Regions: | Central US |
+| Regions: | Central US, West Europe, Australia East |
| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | ## Connect your Azure DevOps organization
defender-for-cloud Quickstart Onboard Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-github.md
By connecting your GitHub repositories to Defender for Cloud, you'll extend Defe
| Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). | Required permissions: | **- Azure account:** with permissions to sign into Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in GitHub | | GitHub supported versions: | GitHub Free, Pro, Team, and GitHub Enterprise Cloud |
-| Regions: | Central US |
+| Regions: | Australia East, Central US, West Europe |
| Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) | ## Connect your GitHub account
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 02/08/2023 Last updated : 02/09/2023 # What's new in Microsoft Defender for Cloud?
To learn about *planned* changes that are coming soon to Defender for Cloud, see
Updates in February include:
+- [Recommendation to find vulnerabilities in running container images for Linux released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-released-for-general-availability-ga)
- [Announcing support for the AWS CIS 1.5.0 compliance standard](#announcing-support-for-the-aws-cis-150-compliance-standard)
+- [Microsoft Defender for DevOps (preview) is now available in other regions](#microsoft-defender-for-devops-preview-is-now-available-in-other-regions)
+
+### Recommendation to find vulnerabilities in running container images released for General Availability (GA)
+
+The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation for Linux is now GA. The recommendation is used to identify unhealthy resources and is included in the calculations of your secure score.
+
+We recommend that you use the recommendation to remediate vulnerabilities in your Linux containers. Learn about [recommendation remediation](implement-security-recommendations.md).
### Announcing support for the AWS CIS 1.5.0 compliance standard
This new standard includes both existing and new recommendations that extend Def
Learn how to [Manage AWS assessments and standards](how-to-manage-aws-assessments-standards.md). +
+### Microsoft Defender for DevOps (preview) is now available in other regions
+
+Microsoft Defender for DevOps has expanded its preview and is now available in the West Europe and Australia East regions when you onboard your Azure DevOps and GitHub resources.
+
+Learn more about [Microsoft Defender for DevOps](defender-for-devops-introduction.md).
+ ## January 2023 Updates in January include:
Updates in January include:
### The Endpoint protection (Microsoft Defender for Endpoint) component is now accessed in the Settings and monitoring page
-In our continuing efforts to simplify your Defender for Cloud configuration experience, we moved the configuration for Endpoint protection (Microsoft Defender for Endpoint) component from the **Environment settings** > **Integrations** page to the **Environment settings** > **Defender plans** > **Settings and monitoring** page, where the other components are managed as well. There is no change to the functionality other than the location in the portal.
+In our continuing efforts to simplify your Defender for Cloud configuration experience, we moved the configuration for Endpoint protection (Microsoft Defender for Endpoint) component from the **Environment settings** > **Integrations** page to the **Environment settings** > **Defender plans** > **Settings and monitoring** page, where the other components are managed as well. There's no change to the functionality other than the location in the portal.
Learn more about [enabling Microsoft Defender for Endpoint](integration-defender-for-endpoint.md) on your servers with Defender for Servers.
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
The **tabs** below show the features that are available, by environment, for Mic
| Compliance | Docker CIS | VM, Virtual Machine Scale Set | GA | - | Log Analytics agent | Defender for Servers Plan 2 | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment <sup>[2](#footnote2)</sup> | Registry scan - OS packages | ACR, Private ACR | GA | Preview | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Vulnerability Assessment <sup>[3](#footnote3)</sup> | Registry scan - language specific packages | ACR, Private ACR | Preview | - | Agentless | Defender for Containers | Commercial clouds |
-| Vulnerability Assessment | View vulnerabilities for running images | AKS | Preview | Preview | Defender profile | Defender for Containers | Commercial clouds |
-| Hardening | Control plane recommendations | ACR, AKS | GA | GA | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
+| Vulnerability Assessment | View vulnerabilities for running images | AKS | GA | GA | Defender profile | Defender for Containers | Commercial clouds |
+| Hardening | Control plane recommendations | ACR, AKS | GA | Preview | Agentless | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet |
| Hardening | Kubernetes data plane recommendations | AKS | GA | - | Azure Policy | Free | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Runtime protection| Threat detection (control plane)| AKS | GA | GA | Agentless | Defender for Containers | Commercial clouds<br><br> National clouds: Azure Government, Azure China 21Vianet | | Runtime protection| Threat detection (workload) | AKS | GA | - | Defender profile | Defender for Containers | Commercial clouds |
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 02/01/2023 Last updated : 02/09/2023 # Important upcoming changes to Microsoft Defender for Cloud
If you're looking for the latest release notes, you'll find them in the [What's
| Planned change | Estimated date for change | |--|--|
-| [Recommendation to find vulnerabilities in running container images to be released for General Availability (GA)](#recommendation-to-find-vulnerabilities-in-running-container-images-to-be-released-for-general-availability-ga) | February 2023 |
-| [The built-in policy [Preview]: Private endpoint should be configured for Key Vault is set to be deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-is-set-to-be-deprecated) | February 2023 |
-| [Three alerts in Defender for ARM plan are set to be deprecated](#three-alerts-in-defender-for-arm-plan-are-set-to-be-deprecated) | March 2023 |
-| [Alerts automatic export to Log Analytics workspace is set to be deprecated](#alerts-automatic-export-to-log-analytics-workspace-is-set-to-be-deprecated) | March 2023 |
+| [The built-in policy [Preview]: Private endpoint should be configured for Key Vault will be deprecated](#the-built-in-policy-preview-private-endpoint-should-be-configured-for-key-vault-will-be-deprecated) | February 2023 |
+| [Three alerts in Defender for Azure Resource Manager plan will be deprecated](#three-alerts-in-defender-for-azure-resource-manager-plan-will-be-deprecated) | March 2023 |
+| [Alerts automatic export to Log Analytics workspace will be deprecated](#alerts-automatic-export-to-log-analytics-workspace-will-be-deprecated) | March 2023 |
| [Deprecation and improvement of selected alerts for Windows and Linux Servers](#deprecation-and-improvement-of-selected-alerts-for-windows-and-linux-servers) | April 2023 |
-### Three alerts in Defender for ARM plan are set to be deprecated
+### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault will be deprecated
+
+**Estimated date for change: February 2023**
+
+The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) is set to be deprecated and will be replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy.
+
+The related [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1) will also be replaced by this new policy in all standards displayed in the regulatory compliance dashboard.
+
+### Three alerts in Defender for Azure Resource Manager plan will be deprecated
**Estimated date for change: March 2023**
-As we continue to improve the quality of our alerts, the following three alerts from the Defender for ARM plan are set to be deprecated:
+As we continue to improve the quality of our alerts, the following three alerts from the Defender for ARM plan will be deprecated:
1. `Activity from a risky IP address (ARM.MCAS_ActivityFromAnonymousIPAddresses)` 1. `Activity from infrequent country (ARM.MCAS_ActivityFromInfrequentCountry)` 1. `Impossible travel activity (ARM.MCAS_ImpossibleTravelActivity)` You can learn more details about each of these alerts from the [alerts reference list](alerts-reference.md#alerts-resourcemanager).
-In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for ARM plan alert `Azure Resource Manager operation from suspicious IP address` or ' Azure Resource Manager operation from suspicious proxy IP address' will be presented.
+In the scenario where an activity from a suspicious IP address is detected, one of the following Defender for ARM plan alerts will be presented: `Azure Resource Manager operation from suspicious IP address` or `Azure Resource Manager operation from suspicious proxy IP address`.
-### Alerts automatic export to Log Analytics workspace is set to be deprecated
+### Alerts automatic export to Log Analytics workspace will be deprecated
**Estimated date for change: March 2023**
Currently, Defender for Cloud security alerts are automatically exported to a de
You can export your security alerts to a dedicated Log Analytics workspace with the [Continuous Export](continuous-export.md#set-up-a-continuous-export) feature. If you have already configured continuous export of your alerts to a Log Analytics workspace, no further action is required.
-### Recommendation to find vulnerabilities in running container images to be released for General Availability (GA)
-
-**Estimated date for change: February 2023**
-
-The [Running container images should have vulnerability findings resolved](defender-for-containers-vulnerability-assessment-azure.md#view-vulnerabilities-for-images-running-on-your-aks-clusters) recommendation is currently in preview. While a recommendation is in preview, it doesn't render a resource unhealthy and isn't included in the calculations of your secure score.
-
-We recommend that you use the recommendation to remediate vulnerabilities in your containers. Remediating the recommendation won't affect your secure score when the recommendation is released as GA. Learn about [recommendation remediation](implement-security-recommendations.md).
-
-### The built-in policy \[Preview]: Private endpoint should be configured for Key Vault is set to be deprecated
-
-**Estimated date for change: February 2023**
-
-The built-in policy [`[Preview]: Private endpoint should be configured for Key Vault`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f0bc445-3935-4915-9981-011aa2b46147) is set to be deprecated and will be replaced with the [`[Preview]: Azure Key Vaults should use private link`](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) policy.
-
-The related [policy definition](https://ms.portal.azure.com/#view/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2fproviders%2fMicrosoft.Authorization%2fpolicyDefinitions%2f7c1b1214-f927-48bf-8882-84f0af6588b1) will also be replaced by this new policy in all standards displayed in the regulatory compliance dashboard.
- ### Deprecation and improvement of selected alerts for Windows and Linux Servers **Estimated date for change: April 2023**
deployment-environments How To Configure Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-catalog.md
You can choose from two types of repositories:
1. Go to the home page of the GitHub repository that contains the template definitions. 1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-a-github-repo).
-1. Copy and save the URL. You'll use it later.
+1. Copy and save the URL. You use it later.
#### Get the clone URL of an Azure DevOps repository 1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project. 1. [Get the clone URL](/azure/devops/repos/git/clone#get-the-clone-url-of-an-azure-repos-git-repo).
-1. Copy and save the URL. You'll use it later.
+1. Copy and save the URL. You use it later.
### Create a personal access token
Next, create a personal access token. Depending on the type of repository you us
1. In the **Expiration** dropdown, select an expiration for your token. 1. For a private repository, under **Select scopes**, select the **repo** scope. 1. Select **Generate token**.
-1. Save the generated token. You'll use the token later.
+1. Save the generated token. You use the token later.
#### Create a personal access token in Azure DevOps 1. Go to the home page of your team collection (for example, `https://contoso-web-team.visualstudio.com`), and then select your project. 1. Create a [personal access token](/azure/devops/organizations/accounts/use-personal-access-tokens-to-authenticate#create-a-pat).
-1. Save the generated token. You'll use the token later.
+1. Save the generated token. You use the token later.
### Store the personal access token as a key vault secret
-To store the personal access token you generated as a [key vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
+To store the personal access token you generated as a [key vault secret](../key-vault/secrets/about-secrets.md) and copy the secret identifier:
1. Create a [key vault](../key-vault/general/quick-create-portal.md#create-a-vault). 1. Add the personal access token as a [secret to the key vault](../key-vault/secrets/quick-create-portal.md#add-a-secret-to-key-vault).
To store the personal access token you generated as a [key vault secret](../key-
| **Name** | Enter a name for the catalog. | | **Git clone URI** | Enter or paste the [clone URL](#get-the-clone-url-for-your-repository) for either your GitHub repository or your Azure DevOps repository.<br/>*Sample Catalog Example:* https://github.com/Azure/deployment-environments.git | | **Branch** | Enter the repository branch to connect to.<br/>*Sample Catalog Example:* main|
- | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. </br> This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
- | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. </br> This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments</br> The folder path can begin with or without a '/'.|
+ | **Secret identifier**| Enter the [secret identifier](#create-a-personal-access-token) that contains your personal access token for the repository.</br>When you copy a Secret Identifier, the connection string includes a version identifier at the end, like this: https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a. </br>Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your PAT expires, only the key vault needs to be updated. </br> *Example secret identifier: https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat*|
:::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
To sync an updated catalog:
## Delete a catalog
-You can delete a catalog to remove it from the dev center. Any templates in a deleted catalog won't be available to development teams when they deploy new environments. Update the catalog item reference for any existing environments that were created by using the catalog items in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails.
+You can delete a catalog to remove it from the dev center. Templates in a deleted catalog are not available to development teams when they deploy new environments. Update the catalog item reference for any existing environments that were created by using the catalog items in the deleted catalog. If the reference isn't updated and the environment is redeployed, the deployment fails.
To delete a catalog:
deployment-environments How To Configure Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/how-to-configure-managed-identity.md
Title: Configure a managed identity
-description: Learn how to configure a managed identity that will be used to deploy environments in your Azure Deployment Environments Preview dev center.
+description: Learn how to configure a managed identity to deploy environments in your Azure Deployment Environments Preview dev center.
In Azure Deployment Environments, you can choose between two types of managed id
- **System-assigned identity**: A system-assigned identity is tied either to your dev center or to the project environment type. A system-assigned identity is deleted when the attached resource is deleted. A dev center or a project environment type can have only one system-assigned identity. - **User-assigned identity**: A user-assigned identity is a standalone Azure resource that you can assign to your dev center or to a project environment type. For Azure Deployment Environments Preview, a dev center or a project environment type can have only one user-assigned identity.
+
+As a security best practice, if you choose to use user-assigned identities, use different identities for your project and your dev center. Project identities should have more limited access to resources compared to a dev center.
> [!NOTE] > In Azure Deployment Environments Preview, if you add both a system-assigned identity and a user-assigned identity, only the user-assigned identity is used.
In Azure Deployment Environments, you can choose between two types of managed id
## Assign a subscription role assignment to the managed identity
-The identity that's attached to the dev center should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to a project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
+The identity that's attached to the dev center should be assigned the Owner role for all the deployment subscriptions and the Reader role for all subscriptions that contain the relevant project. When a user creates or deploys an environment, the service grants appropriate access to the deployment identity that's attached to the project environment type. The deployment identity uses the access to perform deployments on behalf of the user. You can use the managed identity to empower developers to create environments without granting them access to the subscription.
### Add a role assignment to a system-assigned managed identity
deployment-environments Quickstart Create And Configure Devcenter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-devcenter.md
Previously updated : 12/20/2022 Last updated : 02/08/2023 # Quickstart: Create and configure a dev center
To create and configure a Dev center in Azure Deployment Environments by using t
:::image type="content" source="media/quickstart-create-and-configure-devcenter/create-devcenter-review.png" alt-text="Screenshot that shows the Review tab of a dev center to validate the deployment details.":::
-1. Confirm that the dev center was successfully created by checking your Azure portal notifications. Then, select **Go to resource**.
+1. You can check the progress of the deployment in your Azure portal notifications.
:::image type="content" source="media/quickstart-create-and-configure-devcenter/azure-notifications.png" alt-text="Screenshot that shows portal notifications to confirm the creation of a dev center.":::
+1. When the creation of the dev center is complete, select **Go to resource**.
+ 1. In **Dev centers**, verify that the dev center appears. :::image type="content" source="media/quickstart-create-and-configure-devcenter/deployment-environments-devcenter-created.png" alt-text="Screenshot that shows the Dev centers overview, to confirm that the dev center is created."::: ## Create a Key Vault
-You'll need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository.
+You need an Azure Key Vault to store the GitHub personal access token (PAT) that is used to grant Azure access to your GitHub repository.
If you don't have an existing key vault, use the following steps to create one: 1. Sign in to the [Azure portal](https://portal.azure.com).
Using an authentication token like a GitHub personal access token (PAT) enables
:::image type="content" source="media/quickstart-create-and-configure-devcenter/github-pat.png" alt-text="Screenshot that shows the GitHub Tokens (classic) option.":::
- Fine grained and classic tokens work with Azure Deployment Environments.
+ Fine-grained and classic tokens work with Azure Deployment Environments. Fine-grained tokens give you more granular control over the repos to which you're allowing access.
1. On the New personal access token (classic) page: - In the **Note** box, add a note describing the tokenΓÇÖs intended use.
Using an authentication token like a GitHub personal access token (PAT) enables
:::image type="content" source="media/quickstart-create-and-configure-devcenter/create-secret-in-key-vault.png" alt-text="Screenshot that shows the Create a secret page with the Name and Secret value text boxes highlighted."::: - Select **Create**.
-1. Leave this tab open, youΓÇÖll need to come back to the Key Vault later.
+1. Leave this tab open; you need to come back to the Key Vault later.
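+
+If you prefer to script this step instead of using the portal, the following is a minimal sketch that uses the Azure SDK for Python to store the PAT as a key vault secret. The vault URL, the secret name, and the GITHUB_PAT environment variable are placeholders for this example, not values defined in this quickstart.
+
+```python
+# Minimal sketch: store a GitHub PAT as a key vault secret with the Azure SDK for Python.
+# Assumes you're already signed in (for example, with `az login`). The vault URL,
+# secret name, and GITHUB_PAT environment variable are placeholders to replace.
+import os
+
+from azure.identity import DefaultAzureCredential
+from azure.keyvault.secrets import SecretClient
+
+vault_url = "https://contoso-kv.vault.azure.net"  # your key vault URL
+client = SecretClient(vault_url=vault_url, credential=DefaultAzureCredential())
+
+# Store the personal access token as a secret.
+secret = client.set_secret("GitHub-repo-pat", os.environ["GITHUB_PAT"])
+
+# The returned identifier ends with a version segment. Trimming it lets
+# Deployment Environments always fetch the latest version of the secret.
+print(secret.id)                    # .../secrets/GitHub-repo-pat/<version>
+print(secret.id.rsplit("/", 1)[0])  # .../secrets/GitHub-repo-pat
+```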
## Attach an identity to the dev center After you create a dev center, attach an [identity](concept-environments-key-concepts.md#identities) to the dev center. You can attach either a system-assigned managed identity or a user-assigned managed identity. Learn about the two [types of identities](how-to-configure-managed-identity.md#add-a-managed-identity).
-In this quickstart, you'll configure a system-assigned managed identity for your dev center.
+In this quickstart, you configure a system-assigned managed identity for your dev center.
## Attach a system-assigned managed identity
To attach a system-assigned managed identity to your dev center:
1. In the **Enable system assigned managed identity** dialog, select **Yes**. ### Assign the system-assigned managed identity access to the key vault secret
-Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository.
+Make sure that the identity has access to the key vault secret that contains the personal access token to access your repository. Key vaults support two methods of access: Azure role-based access control or vault access policy. In this quickstart, you use a vault access policy.
-Configure a key vault access policy:
+Configure a vault access policy:
1. In the Azure portal, go to the key vault that contains the secret with the personal access token. 2. In the left menu, select **Access policies**, and then select **Create**. 3. In Create an access policy, enter or select the following information:
- - On the Permissions tab, under **Secret permissions**, select **Select all**, and then select **Next**.
+ - On the Permissions tab, under **Secret permissions**, select **Get**, and then select **Next**.
- On the Principal tab, select the identity that's attached to the dev center, and then select **Next**. - Select **Review + create**, and then select **Create**.
Configure a key vault access policy:
## Add a catalog to the dev center Azure Deployment Environments Preview supports attaching Azure DevOps repositories and GitHub repositories. You can store a set of curated IaC templates in a repository. Attaching the repository to a dev center as a catalog gives your development teams access to the templates and enables them to quickly create consistent environments.
-In this quickstart, you'll attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team.
+In this quickstart, you attach a GitHub repository that contains samples created and maintained by the Azure Deployment Environments team.
-To add a catalog to your dev center, you'll first need to gather some information.
+To add a catalog to your dev center, you first need to gather some information.
### Gather GitHub repo information To add a catalog, you must specify the GitHub repo URL, the branch, and the folder that contains your catalog items. You can gather this information before you begin the process of adding the catalog to the dev center, and paste it somewhere accessible, like notepad.
To add a catalog, you must specify the GitHub repo URL, the branch, and the fold
:::image type="content" source="media/quickstart-create-and-configure-devcenter/github-info.png" alt-text="Screenshot that shows the GitHub repo with Code, branch, and folder highlighted."::: ### Gather the secret identifier
-You'll also need the path to the secret you created in the key vault.
+You also need the path to the secret you created in the key vault.
1. In the Azure portal, navigate to your key vault. 1. On the key vault page, from the left menu, select **Secrets**.
You'll also need the path to the secret you created in the key vault.
| **Name** | Enter a name for the catalog. | | **Git clone URI** | Enter or paste the clone URL for either your GitHub repository or your Azure DevOps repository.<br/>*Sample Catalog Example:* https://github.com/Azure/deployment-environments.git | | **Branch** | Enter the repository branch to connect to.<br/>*Sample Catalog Example:* main|
- | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. </br>This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments|
- | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository.|
+ | **Folder path** | Enter the folder path relative to the clone URI that contains subfolders with your catalog items. </br>This folder path should be the path to the folder that contains the subfolders with the catalog item manifests, and not the path to the folder with the catalog item manifest itself.<br/>*Sample Catalog Example:* /Environments </br> The folder path can begin with or without a '/'.|
+ | **Secret identifier**| Enter the secret identifier that contains your personal access token for the repository. When you copy a Secret Identifier, the connection string includes a version identifier at the end, like this: https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat/9376b432b72441a1b9e795695708ea5a. </br>Removing the version identifier ensures that Deployment Environments fetches the latest version of the secret from the key vault. If your PAT expires, only the key vault needs to be updated. </br> *Example secret identifier: https://contoso-kv.vault.azure.net/secrets/GitHub-repo-pat*|
:::image type="content" source="media/how-to-configure-catalog/add-catalog-form-inline.png" alt-text="Screenshot that shows how to add a catalog to a dev center." lightbox="media/how-to-configure-catalog/add-catalog-form-expanded.png":::
deployment-environments Quickstart Create And Configure Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/deployment-environments/quickstart-create-and-configure-projects.md
Previously updated : 10/26/2022 Last updated : 02/08/2023 # Quickstart: Create and configure a project
To create a project in your dev center:
|-|--| |**Subscription** |Select the subscription in which you want to create the project. | |**Resource group**|Either use an existing resource group or select **Create new** and enter a name for the resource group. |
- |**Dev center**|Select a dev center to associate with this project. All settings for the dev center will apply to the project. |
+ |**Dev center**|Select a dev center to associate with this project. All settings for the dev center apply to the project. |
|**Name**|Enter a name for the project. | |**Description** (Optional) |Enter any project-related details. |
To create a project in your dev center:
:::image type="content" source="media/quickstart-create-configure-projects/created-project.png" alt-text="Screenshot that shows the project overview pane."::: ### Assign a managed identity the owner role to the subscription
-Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you'll configure the [project environment types](concept-environments-key-concepts.md#project-environment-types).
+Before you can create environment types, you must give the managed identity that represents your dev center access to the subscriptions where you configure the [project environment types](concept-environments-key-concepts.md#project-environment-types).
-In this quickstart you'll assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
+In this quickstart you assign the Owner role to the system-assigned managed identity that you configured previously: [Attach a system-assigned managed identity](quickstart-create-and-configure-devcenter.md#attach-a-system-assigned-managed-identity).
1. Navigate to your dev center. 1. On the left menu under Settings, select **Identity**.
digital-twins Concepts Apis Sdks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-apis-sdks.md
description: Learn about the Azure Digital Twins API and SDK options, including information about SDK helper classes and general usage notes. Previously updated : 12/16/2022 Last updated : 02/06/2023
The data plane APIs are the Azure Digital Twins APIs used to manage the elements
* `DigitalTwins` - The DigitalTwins category contains the APIs that let developers create, modify, and delete [digital twins](concepts-twins-graph.md) and their relationships in an Azure Digital Twins instance. * `Query` - The Query category lets developers [find sets of digital twins in the twin graph](how-to-query-graph.md) across relationships. * `Event Routes` - The Event Routes category contains APIs to [route data](concepts-route-events.md), through the system and to downstream services.
+* `Import Jobs` - The Import Jobs API lets you manage a long running, asynchronous action to [import models, twins, and relationships in bulk](#bulk-import-with-the-import-jobs-api).
To call the APIs directly, reference the latest Swagger folder in the [data plane Swagger repo](https://github.com/Azure/azure-rest-api-specs/tree/main/specification/digitaltwins/data-plane/Microsoft.DigitalTwins). This folder also includes a folder of examples that show the usage. You can also view the [data plane API reference documentation](/rest/api/azure-digitaltwins/).
The available helper classes are:
* `BasicRelationship`: Generically represents the core data of a relationship * `DigitalTwinsJsonPropertyName`: Contains the string constants for use in JSON serialization and deserialization for custom digital twin types +
+## Bulk import with the Import Jobs API
+
+The [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) is a data plane API that allows you to import a set of models, twins, and/or relationships in a single API call. Import Jobs API operations are also included with the [CLI commands](/cli/azure/dt/job) and [data plane SDKs](#data-plane-apis). Using the Import Jobs API requires use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md).
+
+### Check permissions
+
+To use the Import Jobs API, you'll need to have write permission in your Azure Digital Twins instance for the following data action categories:
+* `Microsoft.DigitalTwins/jobs/*`
+* Any graph elements that you want to include in the Jobs call. This might include `Microsoft.DigitalTwins/models/*`, `Microsoft.DigitalTwins/digitaltwins/*`, and/or `Microsoft.DigitalTwins/digitaltwins/relationships/*`.
+
+The built-in role that provides all of these permissions is *Azure Digital Twins Data Owner*. You can also use a custom role to grant granular access to only the data types that you need. For more information about roles in Azure Digital Twins, see [Security for Azure Digital Twins solutions](concepts-security.md#authorization-azure-roles-for-azure-digital-twins).
+
+>[!NOTE]
+> If you attempt an Import Jobs API call and you're missing write permissions to one of the graph element types you're trying to import, the job will skip that type and import the others. For example, if you have write access to models and twins, but not relationships, an attempt to bulk import all three types of element will only succeed in importing the models and twins. The job status will reflect a failure and the message will indicate which permissions are missing.
+
+### Format data
+
+The API accepts graph information from an *NDJSON* file, which must be uploaded to an [Azure blob storage](../storage/blobs/storage-blobs-introduction.md) container. The file starts with a `Header` section, followed by the optional sections `Models`, `Twins`, and `Relationships`. You don't have to include all three types of graph data in the file, but any sections that are present must follow that order. Twins and relationships created with this API can optionally include initialization of their properties.
+
+Here's a sample input data file for the import API:
+
+```json
+{"Section": "Header"}
+{"fileVersion": "1.0.0", "author": "foobar", "organization": "contoso"}
+{"Section": "Models"}
+{"@id":"dtmi:com:microsoft:azure:iot:model0;1","@type":"Interface","contents":[{"@type":"Property","name":"property00","schema":"integer"},{"@type":"Property","name":"property01","schema":{"@type":"Map","mapKey":{"name":"subPropertyName","schema":"string"},"mapValue":{"name":"subPropertyValue","schema":"string"}}},{"@type":"Relationship","name":"has","target":"dtmi:com:microsoft:azure:iot:model1;1","properties":[{"@type":"Property","name":"relationshipproperty1","schema":"string"},{"@type":"Property","name":"relationshipproperty2","schema":"integer"}]}],"description":{"en":"This is the description of model"},"displayName":{"en":"This is the display name"},"@context":"dtmi:dtdl:context;2"}
+{"@id":"dtmi:com:microsoft:azure:iot:model1;1","@type":"Interface","contents":[{"@type":"Property","name":"property10","schema":"string"},{"@type":"Property","name":"property11","schema":{"@type":"Map","mapKey":{"name":"subPropertyName","schema":"string"},"mapValue":{"name":"subPropertyValue","schema":"string"}}}],"description":{"en":"This is the description of model"},"displayName":{"en":"This is the display name"},"@context":"dtmi:dtdl:context;2"}
+{"Section": "Twins"}
+{"$dtId":"twin0","$metadata":{"$model":"dtmi:com:microsoft:azure:iot:model0;1"},"property00":10,"property01":{"subProperty1":"subProperty1Value","subProperty2":"subProperty2Value"}}
+{"$dtId":"twin1","$metadata":{"$model":"dtmi:com:microsoft:azure:iot:model1;1"},"property10":"propertyValue1","property11":{"subProperty1":"subProperty1Value","subProperty2":"subProperty2Value"}}
+{"Section": "Relationships"}
+{"$dtId":"twin0","$relationshipId":"relationship","$targetId":"twin1","$relationshipName":"has","relationshipProperty1":"propertyValue1","relationshipProperty2":10}
+```
+
+>[!TIP]
+>For a sample project that converts models, twins, and relationships into the NDJSON supported by the import API, see [Azure Digital Twins Bulk Import NDJSON Generator](https://github.com/Azure-Samples/azure-digital-twins-getting-started/tree/main/bulk-import/ndjson-generator). The sample project is written for .NET and can be downloaded or adapted to help you create your own import files.
+
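+If you're generating the input file programmatically, the following is a minimal sketch that uses only Python's standard library to write the sections in the required order. The model, twin, and header values shown are placeholder content for illustration; substitute your own graph data.
+
+```python
+# Minimal sketch: build an NDJSON input file for the Import Jobs API.
+# Sections must appear in this order: Header, Models, Twins, Relationships.
+# The graph data below is placeholder content; replace it with your own.
+import json
+
+models = [
+    {"@id": "dtmi:com:example:Room;1", "@type": "Interface",
+     "@context": "dtmi:dtdl:context;2",
+     "contents": [{"@type": "Property", "name": "temperature", "schema": "double"}]},
+]
+twins = [
+    {"$dtId": "room1", "$metadata": {"$model": "dtmi:com:example:Room;1"}, "temperature": 21.5},
+]
+relationships = []  # optional; omit the section entirely if you have none
+
+with open("import-data.ndjson", "w", encoding="utf-8") as f:
+    f.write(json.dumps({"Section": "Header"}) + "\n")
+    f.write(json.dumps({"fileVersion": "1.0.0", "author": "contoso", "organization": "contoso"}) + "\n")
+    f.write(json.dumps({"Section": "Models"}) + "\n")
+    for model in models:
+        f.write(json.dumps(model) + "\n")
+    f.write(json.dumps({"Section": "Twins"}) + "\n")
+    for twin in twins:
+        f.write(json.dumps(twin) + "\n")
+    if relationships:
+        f.write(json.dumps({"Section": "Relationships"}) + "\n")
+        for relationship in relationships:
+            f.write(json.dumps(relationship) + "\n")
+```
+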
+Once the file has been created, upload it to a block blob in Azure Blob Storage using your preferred upload method (some options are the [AzCopy command](../storage/common/storage-use-azcopy-blobs-upload.md), the [Azure CLI](../storage/blobs/storage-quickstart-blobs-cli.md#upload-a-blob), or the [Azure portal](https://portal.azure.com)). You'll use the blob storage URL of the NDJSON file in the body of the Import Jobs API call.
+
+### Run the import job
+
+Now you can proceed with calling the [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs). For detailed instructions on importing a full graph in one API call, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api). You can also use the Import Jobs API to import each resource type independently. For more information on using the Import Jobs API with individual resource types, see Import Jobs API instructions for [models](how-to-manage-model.md#upload-large-model-sets-with-the-import-jobs-api), [twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-import-jobs-api), and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-import-jobs-api).
+
+In the body of the API call, you'll provide the blob storage URL of the NDJSON input file, as well as another blob storage URL for where you'd like the output log to be stored.
+As the import job executes, a structured output log is generated by the service and stored as a new append blob in your blob container, according to the output blob URL and name you provided. Here's an example output log for a successful job importing models, twins, and relationships:
+
+```json
+{"timestamp":"2022-12-30T19:50:34.5540455Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"status":"Started"}}
+{"timestamp":"2022-12-30T19:50:37.2406748Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"section":"Models","status":"Started"}}
+{"timestamp":"2022-12-30T19:50:38.1445612Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"section":"Models","status":"Succeeded"}}
+{"timestamp":"2022-12-30T19:50:38.5475921Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"section":"Twins","status":"Started"}}
+{"timestamp":"2022-12-30T19:50:39.2744802Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"section":"Twins","status":"Succeeded"}}
+{"timestamp":"2022-12-30T19:50:39.7494663Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"section":"Relationships","status":"Started"}}
+{"timestamp":"2022-12-30T19:50:40.4480645Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"section":"Relationships","status":"Succeeded"}}
+{"timestamp":"2022-12-30T19:50:41.3043264Z","jobId":"test1","jobType":"Import","logType":"Info","details":{"status":"Succeeded"}}
+```
+
+When the job is complete, you can see the total number of ingested entities using the [BulkOperationEntityCount metric](how-to-monitor.md#bulk-operation-metrics-from-the-import-jobs-api).
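If you prefer to read that metric programmatically, here's a small C# sketch using the Azure.Monitor.Query package. The resource ID is a placeholder for your own Azure Digital Twins instance, and the metric name comes from the linked monitoring article.

```csharp
using System;
using System.Linq;
using Azure.Identity;
using Azure.Monitor.Query;
using Azure.Monitor.Query.Models;

// Placeholder: the full Azure resource ID of your Azure Digital Twins instance.
string resourceId =
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.DigitalTwins/digitalTwinsInstances/<instance>";

var metricsClient = new MetricsQueryClient(new DefaultAzureCredential());

// Query the last day of the BulkOperationEntityCount metric, aggregated as a total.
var result = await metricsClient.QueryResourceAsync(
    resourceId,
    new[] { "BulkOperationEntityCount" },
    new MetricsQueryOptions
    {
        TimeRange = new QueryTimeRange(TimeSpan.FromDays(1)),
        Aggregations = { MetricAggregationType.Total }
    });

foreach (MetricResult metric in result.Value.Metrics)
{
    double total = metric.TimeSeries
        .SelectMany(series => series.Values)
        .Sum(point => point.Total ?? 0);
    Console.WriteLine($"{metric.Name}: {total} entities ingested over the last day");
}
```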
+
+It's also possible to cancel a running import job with the [Cancel operation](/rest/api/digital-twins/dataplane/import-jobs/cancel?tabs=HTTP) from the Import Jobs API. Once the job has been canceled and is no longer running, you can delete it.
+
+### Limits and considerations
+
+Keep the following considerations in mind while working with the Import Jobs API:
+* Currently, the Import Jobs API only supports "create" operations.
+* Import Jobs are not atomic operations. There is no rollback in the case of failure, partial job completion, or usage of the [Cancel operation](/rest/api/digital-twins/dataplane/import-jobs/cancel?tabs=HTTP).
+* Only one bulk import job is supported at a time within an Azure Digital Twins instance. You can view this information and other numerical limits of the Import Jobs API in [Azure Digital Twins limits](reference-service-limits.md).
+ ## Monitor API metrics API metrics such as requests, latency, and failure rate can be viewed in the [Azure portal](https://portal.azure.com/).
-For information about viewing and managing metrics with Azure Monitor, see [Get started with metrics explorer](../azure-monitor/essentials/metrics-getting-started.md). For a full list of API metrics available for Azure Digital Twins, see [Azure Digital Twins API request metrics](how-to-monitor.md#api-request-metrics).
+For information about viewing and managing Azure Digital Twins metrics, see [Monitor your instance](how-to-monitor.md). For a full list of API metrics available for Azure Digital Twins, see [Azure Digital Twins API request metrics](how-to-monitor.md#api-request-metrics).
## Next steps
digital-twins Concepts Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-models.md
description: Learn how Azure Digital Twins uses custom models to describe entities in your environment and how to define these models using the Digital Twin Definition Language (DTDL). Previously updated : 09/13/2022 Last updated : 02/06/2023
While designing models to reflect the entities in your environment, it can be us
### Upload and delete models in bulk
-Here are two sample projects that can simplify dealing with multiple models at once:
-* [Model uploader](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels): Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution. If you have many models to upload, or if they have many interdependencies that would make ordering individual uploads complicated, you can use this model uploader sample to upload many models at once.
-* [Model deleter](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#deletemodels): This sample can be used to delete all models in an Azure Digital Twins instance at once. It contains recursive logic to handle model dependencies through the deletion process.
+Once you're finished creating, extending, or selecting your models, you need to upload them to your Azure Digital Twins instance to make them available for use in your solution.
+
+You can upload many models in a single API call using the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api). The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. For detailed instructions and examples that use this API, see [bulk import instructions for models](how-to-manage-model.md#upload-large-model-sets-with-the-import-jobs-api).
+
+An alternative to the Import Jobs API is the [Model uploader sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#uploadmodels), which uses the individual model APIs to upload multiple model files at once. The sample also implements automatic reordering to resolve model dependencies.
+
+If you need to delete all models in an Azure Digital Twins instance at once, you can use the [Model deleter sample](https://github.com/Azure/opendigitaltwins-tools/tree/main/ADTTools#deletemodels). This sample contains recursive logic to handle model dependencies through the deletion process.
### Visualize models
digital-twins Concepts Twins Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/concepts-twins-graph.md
description: Learn about digital twins, and how their relationships form a digital twin graph. Previously updated : 03/01/2022 Last updated : 02/06/2023
The result of this process is a set of nodes (the digital twins) connected via e
## Create with the APIs
-This section shows what it looks like to create digital twins and relationships from a client application. It contains .NET code examples that use the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), to provide more context on what goes on inside each of these concepts.
+This section shows what it looks like to create digital twins and relationships from a client application. It contains [.NET SDK](/dotnet/api/overview/azure/digitaltwins.core-readme) examples that use the [DigitalTwins APIs](/rest/api/digital-twins/dataplane/twins), to provide more context on what goes on inside each of these concepts.
### Create digital twins
Here's some example client code that uses the [DigitalTwins APIs](/rest/api/digi
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/graph_operations_other.cs" id="CreateRelationship_short":::
+### Create twins and relationships in bulk with the Import Jobs API
+
+You can upload many twins and relationships in a single API call using the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api). Twins and relationships created with this API can optionally include initialization of their properties. For detailed instructions and examples that use this API, see [bulk import instructions for twins](how-to-manage-twin.md#create-twins-in-bulk-with-the-import-jobs-api) and [relationships](how-to-manage-graph.md#create-relationships-in-bulk-with-the-import-jobs-api).
+ ## JSON representations of graph elements Digital twin data and relationship data are both stored in JSON format, which means that when you [query the twin graph](how-to-query-graph.md) in your Azure Digital Twins instance, the result will be a JSON representation of digital twins and relationships you've created.
digital-twins How To Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-graph.md
description: Learn how to manage a graph of digital twins by connecting them with relationships. Previously updated : 12/16/2022 Last updated : 02/06/2023
You can even create multiple instances of the same type of relationship between
> [!NOTE] > The DTDL attributes of `minMultiplicity` and `maxMultiplicity` for relationships aren't currently supported in Azure Digital Twins. Even if they're defined as part of a model, they won't be enforced by the service. For more information, see [Service-specific DTDL notes](concepts-models.md#service-specific-dtdl-notes).
+### Create relationships in bulk with the Import Jobs API
+
+You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to create many relationships at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for relationships and bulk jobs.
+
+>[!TIP]
+>The Import Jobs API also allows models and twins to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
+
+To import relationships in bulk, you'll need to structure your relationships (and any other resources included in the bulk import job) as an *NDJSON* file. The `Relationships` section comes after the `Twins` section, making it the last graph data section in the file. Relationships defined in the file can reference twins that are either defined in this file or already present in the instance, and they can optionally include initialization of any properties that the relationships have.
+
+You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
++
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+ ## List relationships ### List properties of a single relationship
You can now call this custom method to delete a relationship like this:
This section describes strategies for creating a graph with multiple elements at the same time, rather than using individual API calls to upload models, twins, and relationships to upload them one by one.
+### Upload models, twins, and relationships in bulk with the Import Jobs API
+
+You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to upload multiple models, twins, and relationships to your instance in a single API call, effectively creating the graph all at once. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for graph elements (models, twins, and relationships) and bulk jobs.
+
+To import resources in bulk, start by creating an *NDJSON* file containing the details of your resources. The file starts with a `Header` section, followed by the optional sections `Models`, `Twins`, and `Relationships`. You don't have to include all three types of graph data in the file, but any sections that are present must follow that order. Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties. Relationships defined in the file can reference twins that are either defined in this file or already present in the instance, and they can optionally include initialization of relationship properties.
+
+You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
++
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+ ### Import graph with Azure Digital Twins Explorer [Azure Digital Twins Explorer](concepts-azure-digital-twins-explorer.md) is a visual tool for viewing and interacting with your twin graph. It contains a feature for importing a graph file in either JSON or Excel format that can contain multiple models, twins, and relationships.
digital-twins How To Manage Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-model.md
description: Learn how to manage DTDL models within Azure Digital Twins, including how to create, edit, and delete them. Previously updated : 02/23/2022 Last updated : 02/06/2023
When you're ready to upload a model, you can use the following code snippet for
:::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModel":::
-You can also upload multiple models in a single transaction.
+On upload, model files are validated by the service.
+
+You'll usually need to upload more than one model to the service. There are several ways that you can upload many models at once in a single transaction. To help you pick a strategy, consider the size of your model set as you continue through the rest of this section.
+
+### Upload small model sets
+
+For smaller model sets, you can upload multiple models at once using individual API calls. You can check the current limit for how many models can be uploaded in a single API call in the [Azure Digital Twins limits](reference-service-limits.md).
If you're using the SDK, you can upload multiple model files with the `CreateModels` method like this: :::code language="csharp" source="~/digital-twins-docs-samples/sdks/csharp/model_operations.cs" id="CreateModels_multi":::
-If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/cli/azure/dt), you can also upload multiple models by placing multiple model definitions in a single JSON file to be uploaded together. In this case, the models should be placed in a JSON array within the file, like in the following example:
+If you're using the [REST APIs](/rest/api/azure-digitaltwins/) or [Azure CLI](/cli/azure/dt), you can upload multiple models by placing multiple model definitions in a single JSON file to be uploaded together. In this case, the models should be placed in a JSON array within the file, like in the following example:
:::code language="json" source="~/digital-twins-docs-samples/models/Planet-Moon.json":::
-On upload, model files are validated by the service.
+### Upload large model sets with the Import Jobs API
+
+For large model sets, you can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to upload many models at once in a single API call. The API can simultaneously accept up to the [Azure Digital Twins limit for number of models in an instance](reference-service-limits.md), and it automatically reorders models if needed to resolve dependencies between them. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for models and bulk jobs.
+
+>[!TIP]
+>The Import Jobs API also allows twins and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
+
+To import models in bulk, you'll need to structure your models (and any other resources included in the bulk import job) as an *NDJSON* file. The `Models` section comes immediately after the `Header` section, making it the first graph data section in the file. You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
++
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
## Retrieve models
digital-twins How To Manage Twin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-twin.md
description: See how to retrieve, update, and delete individual twins and relationships. Previously updated : 08/10/2022 Last updated : 02/06/2023
The helper class of `BasicDigitalTwin` allows you to store property fields in a
>twin.Id = "myRoomId"; >```
+### Create twins in bulk with the Import Jobs API
+
+You can use the [Import Jobs API](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api) to create many twins at once in a single API call. This method requires the use of [Azure Blob Storage](../storage/blobs/storage-blobs-introduction.md), as well as [write permissions](concepts-apis-sdks.md#check-permissions) in your Azure Digital Twins instance for twins and bulk jobs.
+
+>[!TIP]
+>The Import Jobs API also allows models and relationships to be imported in the same call, to create all parts of a graph at once. For more about this process, see [Upload models, twins, and relationships in bulk with the Import Jobs API](how-to-manage-graph.md#upload-models-twins-and-relationships-in-bulk-with-the-import-jobs-api).
+
+To import twins in bulk, you'll need to structure your twins (and any other resources included in the bulk import job) as an *NDJSON* file. The `Twins` section comes after the `Models` section (and before the `Relationships` section). Twins defined in the file can reference models that are either defined in this file or already present in the instance, and they can optionally include initialization of the twin's properties.
+
+You can view an example import file and a sample project for creating these files in the [Import Jobs API introduction](concepts-apis-sdks.md#bulk-import-with-the-import-jobs-api).
++
+Then, the file can be used in an [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs) call.
+ ## Get data for a digital twin You can access the details of any digital twin by calling the `GetDigitalTwin()` method like this:
digital-twins How To Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-monitor.md
description: Monitor Azure Digital Twins instances with metrics, alerts, and diagnostics. Previously updated : 10/31/2022 Last updated : 02/06/2023
Metrics having to do with data ingress:
| IngressEventsFailureRate | Ingress Events Failure Rate | Percent | Average | The percentage of incoming telemetry events for which the service returns an internal error (500) response code. | Result | | IngressEventsLatency | Ingress Events Latency | Milliseconds | Average | The time from when an event arrives to when it's ready to be egressed by Azure Digital Twins, at which point the service sends a success/fail result. | Result |
+### Bulk operation metrics (from the Import Jobs API)
+
+Metrics having to do with bulk operations from the [Import Jobs API](/rest/api/digital-twins/dataplane/import-jobs):
+
+| Metric | Metric display name | Unit | Aggregation type| Description | Dimensions |
+| | | | | | |
+| BulkOperationLatency | Bulk Operation Latency | Milliseconds | Average | Total time taken for a bulk operation to complete. | Operation, <br>Authentication, <br>Protocol |
+| BulkOperationEntityCount | Bulk Operation Entity Count | Count | Total | The number of twins, models, or relationships processed by a bulk operation. | Operation, <br>Result |
+ ### Routing metrics Metrics having to do with routing:
digital-twins Reference Query Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-query-functions.md
The following query returns the name of all digital twins who have an array prop
### Limitations The ARRAY_CONTAINS() function has the following limitations:
-* Array indexing is not supported.
+* Array indexing isn't supported.
- For example, `array-name[index] = 'foo_bar'`
-* Subqueries within the ARRAY_CONTAINS() property are not supported.
+* Subqueries within the ARRAY_CONTAINS() property aren't supported.
- For example, `SELECT T.name FROM DIGITALTWINS T WHERE ARRAY_CONTAINS (SELECT S.floor_number FROM DIGITALTWINS S, 4)`
-* ARRAY_CONTAINS() is not supported on properties of relationships.
- - For example, say `Floor.Contains` is a relationship from Floor to Room and it has a `lift` property with a value of `["operating", "under maintenance", "under construction"]`. Queries like this are not supported: `SELECT Room FROM DIGITALTWINS Floor JOIN Room RELATED Floor.Contains WHERE Floor.$dtId = 'Floor-35' AND ARRAY_CONTAINS(Floor.Contains.lift, "operating")`.
-* ARRAY_CONTAINS() does not search inside nested arrays.
- - For example, say a twin has a `tags` property with a value of `[1, [2,3], 3, 4]`. A search for `2` using the query `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, 2)` will return `False`. A search for a value in the top level array, like `1` using the query `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, 1)`, will return `True`.
-* ARRAY_CONTAINS() is not supported if the array contains objects.
- - For example, say a twin has a `tags` property with a value of `[Room1, Room2]` where `Room1` and `Room2` are objects. Queries like this are not supported: `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, Room2)`.
+* ARRAY_CONTAINS() isn't supported on properties of relationships.
+ - For example, say `Floor.Contains` is a relationship from Floor to Room and it has a `lift` property with a value of `["operating", "under maintenance", "under construction"]`. Queries like this aren't supported: `SELECT Room FROM DIGITALTWINS Floor JOIN Room RELATED Floor.Contains WHERE Floor.$dtId = 'Floor-35' AND ARRAY_CONTAINS(Floor.Contains.lift, "operating")`.
+* ARRAY_CONTAINS() doesn't search inside nested arrays.
+ - For example, say a twin has a `tags` property with a value of `[1, [2,3], 3, 4]`. A search for `2` using the query `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, 2)` returns `False`. A search for a value in the top level array, like `1` using the query `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, 1)`, returns `True`.
+* ARRAY_CONTAINS() isn't supported if the array contains objects.
+ - For example, say a twin has a `tags` property with a value of `[Room1, Room2]` where `Room1` and `Room2` are objects. Queries like this aren't supported: `SELECT * FROM DIGITALTWINS WHERE ARRAY_CONTAINS(tags, Room2)`.
## CONTAINS
The following query returns all digital twins whose IDs end in `-small`. The str
## IS_BOOL
-A type checking function for determining whether an property has a Boolean value.
+A type checking function for determining whether a property has a Boolean value.
-This function is often combined with other predicates if the program processing the query results requires a boolean value, and you want to filter out cases where the property is not a boolean.
+This function is often combined with other predicates if the program processing the query results requires a boolean value, and you want to filter out cases where the property isn't a boolean.
### Syntax
This function is often combined with other predicates if the program processing
### Arguments
-`<property>`, an property to check whether it is a Boolean.
+`<property>`, a property to check whether it's a Boolean.
### Returns
The following query selects the digital twins that have a boolean `HasTemperatur
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" ID="IsBoolExample":::
-The following query builds on the above example to select the digital twins that have a boolean `HasTemperature` property, and the value of that property is not `false`.
+The following query builds on the above example to select the digital twins that have a boolean `HasTemperature` property, and the value of that property isn't `false`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" ID="IsBoolNotFalseExample":::
A type checking function to determine whether a property is defined.
### Arguments
-`<property>`, a property to determine whether it is defined.
+`<property>`, a property to determine whether it's defined.
### Returns
The following query returns all digital twins who have a defined `Location` prop
## IS_NULL
-A type checking function for determining whether an property's value is `null`.
+A type checking function for determining whether a property's value is `null`.
### Syntax
A type checking function for determining whether an property's value is `null`.
### Arguments
-`<property>`, a property to check whether it is null.
+`<property>`, a property to check whether it's null.
### Returns
The following query returns twins who do not have a null value for Temperature.
A type checking function for determining whether a property has a number value.
-This function is often combined with other predicates if the program processing the query results requires a number value, and you want to filter out cases where the property is not a number.
+This function is often combined with other predicates if the program processing the query results requires a number value, and you want to filter out cases where the property isn't a number.
### Syntax
This function is often combined with other predicates if the program processing
### Arguments
-`<property>`, a property to check whether it is a number.
+`<property>`, a property to check whether it's a number.
### Returns
A Boolean value indicating if the type of the specified property is a number.
### Example
-The following query selects the digital twins that have a numeric `Capacity` property and its value is not equal to 0.
+The following query selects the digital twins that have a numeric `Capacity` property and its value isn't equal to 0.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" ID="IsNumberExample":::
The following query selects the digital twins that have a numeric `Capacity` pro
A type checking function for determining whether a property's value is of a JSON object type.
-This function is often combined with other predicates if the program processing the query results requires a JSON object, and you want to filter out cases where the value is not a JSON object.
+This function is often combined with other predicates if the program processing the query results requires a JSON object, and you want to filter out cases where the value isn't a JSON object.
### Syntax
This function is often combined with other predicates if the program processing
### Arguments
-`<property>`, a property to check whether it is of an object type.
+`<property>`, a property to check whether it's of an object type.
### Returns
A Boolean value indicating if the type of the specified property is a JSON objec
### Example
-The following query selects all of the digital twins where this is an object called `MapObject`, and it does not have a child property `TemperatureReading`.
+The following query selects all of the digital twins where there is an object called `MapObject` that doesn't have a child property `TemperatureReading`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" ID="IsObjectExample":::
Required:
* `<model-ID>`: The model ID to check for. Optional:
-* `<twin-collection>`: Specify a twin collection to search when there is more than one (like when a `JOIN` is used).
-* `exact`: Require an exact match. If this parameter is not set, the result set will include twins with models that inherit from the specified model.
+* `<twin-collection>`: Specify a twin collection to search when there's more than one (like when a `JOIN` is used).
+* `exact`: Require an exact match. If this parameter isn't set, the result set includes twins with models that inherit from the specified model.
### Returns
The following query returns twins from the DT collection that are exactly of the
A type checking function for determining whether a property's value is of a primitive type (string, Boolean, numeric, or `null`).
-This function is often combined with other predicates if the program processing the query results requires a primitive-typed value, and you want to filter out cases where the property is not primitive.
+This function is often combined with other predicates if the program processing the query results requires a primitive-typed value, and you want to filter out cases where the property isn't primitive.
### Syntax
This function is often combined with other predicates if the program processing
### Arguments
-`<property>`, a property to check whether it is of a primitive type.
+`<property>`, a property to check whether it's of a primitive type.
### Returns
The following query returns the `area` property of the Factory with the ID of 'A
A type checking function for determining whether a property has a string value.
-This function is often combined with other predicates if the program processing the query results requires a string value, and you want to filter out cases where the property is not a string.
+This function is often combined with other predicates if the program processing the query results requires a string value, and you want to filter out cases where the property isn't a string.
### Syntax
This function is often combined with other predicates if the program processing
### Arguments
-`<property>`, a property to check whether it is a string.
+`<property>`, a property to check whether it's a string.
### Returns
A Boolean value indicating if the type of the specified expression is a string.
### Example
-The following query selects the digital twins that have a string property `Status` property and its value is not equal to `Completed`.
+The following query selects the digital twins that have a string `Status` property whose value isn't equal to `Completed`.
:::code language="sql" source="~/digital-twins-docs-samples/queries/reference.sql" ID="IsStringExample":::
dms Dms Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/dms-overview.md
Previously updated : 01/05/2023 Last updated : 02/08/2023 # What is Azure Database Migration Service? Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime (online migrations).
+Azure Database Migration Service currently offers two options:
-## Migrate databases to Azure with familiar tools
+1. [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md)
+1. Database Migration Service - via the Azure portal, PowerShell, and Azure CLI.
-Azure Database Migration Service integrates some of the functionality of our existing tools and services. It provides customers with a comprehensive, highly available solution. The service uses the [Data Migration Assistant](/sql/dma/dma-overview) to generate assessment reports that provide recommendations to guide you through the required changes before a migration. It's up to you to perform any remediation required. Azure Database Migration Service performs all the required steps when ready to begin the migration process. Knowing that the process takes advantage of Microsoft's best practices, you can fire and forget your migration projects with peace of mind.
+**Azure SQL Migration extension for Azure Data Studio** is powered by the latest version of Database Migration Service and provides more features. Currently, it supports SQL Database modernization to Azure. For improved functionality and supportability, consider migrating to Azure SQL Database by using the Azure SQL migration extension for Azure Data Studio.
-> [!NOTE]
-> Using Azure Database Migration Service to perform an online migration requires creating an instance based on the Premium pricing tier.
+**Database Migration Service** via the Azure portal, PowerShell, and Azure CLI is an older version of the Azure Database Migration Service. It offers database modernization to Azure and supports scenarios like SQL Server, PostgreSQL, MySQL, and MongoDB.
-## Regional availability
-For up-to-date info about the regional availability of Azure Database Migration Service, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration).
+## Compare versions
-## Pricing
+In 2021, a newer version of the Azure Database Migration Service was released as an extension for Azure Data Studio, which improved the functionality, user experience and supportability of the migration service. Consider using the [Azure SQL migration extension for Azure Data Studio](./migration-using-azure-data-studio.md) whenever possible.
-For up-to-date info about Azure Database Migration Service pricing, see [Azure Database Migration Service pricing](https://azure.microsoft.com/pricing/details/database-migration/).
+The following table compares the functionality of the versions of the Database Migration Service:
+|Feature |DMS |Azure SQL extension for Azure Data Studio |Notes|
+|||||
+|Assessment | No | Yes | Assess compatibility of the source. |
+|SKU recommendation | No | Yes | SKU recommendations for the target based on the assessment of the source. |
+|Azure SQL Database - Offline migration | Yes | Yes | Migrate to Azure SQL Database offline. |
+|Azure SQL Managed Instance - Online migration | Yes |Yes | Migrate to Azure SQL Managed Instance online with minimal downtime. |
+|Azure SQL Managed Instance - Offline migration | Yes |Yes | Migrate to Azure SQL Managed Instance offline. |
+|SQL Server on Azure SQL VM - Online migration | No | Yes |Migrate to SQL Server on Azure VMs online with minimal downtime.|
+|SQL Server on Azure SQL VM - Offline migration | Yes |Yes | Migrate to SQL Server on Azure VMs offline. |
+|Migrate logins|Yes | Yes | Migrate logins from your source to your target.|
+|Migrate schemas| Yes | No | Migrate schemas from your source to your target. |
+|Azure portal support |Yes | Yes | Control your migration by using the Azure portal. |
+|Integration with Azure Data Studio | No | Yes | Migration support integrated with Azure Data Studio. |
+|Regional availability|Yes |Yes | More regions are available with the extension. |
+|Improved user experience| No | Yes | The extension is faster, more secure, and easier to troubleshoot. |
+|Automation| Yes | Yes |The extension supports PowerShell and Azure CLI. |
+|Private endpoints| No | Yes| Connect to your source and target using private endpoints. |
+|TDE support|No | Yes |Migrate databases encrypted with TDE. |
+## Migrate databases to Azure with familiar tools
+
+Azure Database Migration Service integrates some of the functionality of our existing tools and services. It provides customers with a comprehensive, highly available solution. The service uses the [Data Migration Assistant](/sql/dma/dma-overview) to generate assessment reports that provide recommendations to guide you through the required changes before a migration. It's up to you to perform any remediation required. Azure Database Migration Service performs all the required steps when ready to begin the migration process. Knowing that the process takes advantage of Microsoft's best practices, you can fire and forget your migration projects with peace of mind.
+
+## Regional availability
+
+For up-to-date info about the regional availability of Azure Database Migration Service, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/?products=database-migration).
## Next steps
dms Tutorial Sql Server Managed Instance Online https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-managed-instance-online.md
Title: "Tutorial: Migrate SQL Server online to SQL Managed Instance"
-description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service.
+description: Learn to perform an online migration from SQL Server to an Azure SQL Managed Instance by using Azure Database Migration Service
Previously updated : 01/12/2023 Last updated : 02/08/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance online using DMS > [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-online-ads.md).
+>
+> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview) with minimal downtime. For additional methods that may require some manual effort, see the article [SQL Server instance migration to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
dms Tutorial Sql Server To Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-azure-sql.md
Previously updated : 01/12/2023 Last updated : 02/08/2023 # Tutorial: Migrate SQL Server to Azure SQL Database using DMS > [!NOTE]
-> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md).
-
+> This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Database by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-azure-sql-database-offline-ads.md).
+>
+> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to [Azure SQL Database](/azure/sql-database/). In this tutorial, you migrate the [AdventureWorks2016](/sql/samples/adventureworks-install-configure#download-backup-files) database restored to an on-premises instance of SQL Server 2016 (or later) to a single database or pooled database in Azure SQL Database by using Azure Database Migration Service.
dms Tutorial Sql Server To Managed Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/tutorial-sql-server-to-managed-instance.md
Previously updated : 01/12/2023 Last updated : 02/08/2023 # Tutorial: Migrate SQL Server to an Azure SQL Managed Instance offline using DMS > [!NOTE] > This tutorial uses an older version of the Azure Database Migration Service. For improved functionality and supportability, consider migrating to Azure SQL Managed Instance by using the [Azure SQL migration extension for Azure Data Studio](tutorial-sql-server-managed-instance-offline-ads.md). -
+>
+> To compare features between versions, review [compare versions](dms-overview.md#compare-versions).
You can use Azure Database Migration Service to migrate the databases from a SQL Server instance to an [Azure SQL Managed Instance](/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview). For additional methods that may require some manual effort, see the article [SQL Server to Azure SQL Managed Instance](/azure/azure-sql/migration-guides/managed-instance/sql-server-to-managed-instance-guide).
dns Private Resolver Hybrid Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-resolver-hybrid-dns.md
In this article, the private zone **azure.contoso.com** and the resource record
## Create an Azure DNS Private Resolver
-The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provied:
+The following quickstarts are available to help you create a private resolver. These quickstarts walk you through creating a resource group, a virtual network, and Azure DNS Private Resolver. The steps to configure an inbound endpoint, outbound endpoint, and DNS forwarding ruleset are provided:
- [Create a private resolver - portal](dns-private-resolver-get-started-portal.md) - [Create a private resolver - PowerShell](dns-private-resolver-get-started-powershell.md)
The path for this query is: client's default DNS resolver (10.100.0.2) > on-prem
* Learn about [Azure DNS Private Resolver endpoints and rulesets](private-resolver-endpoints-rulesets.md). * Learn how to [Set up DNS failover using private resolvers](tutorial-dns-private-resolver-failover.md) * Learn about some of the other key [networking capabilities](../networking/fundamentals/networking-overview.md) of Azure.
-* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
+* [Learn module: Introduction to Azure DNS](/training/modules/intro-to-azure-dns).
event-grid Communication Services Email Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/communication-services-email-events.md
This section contains an example of what that data would look like for each even
"recipient": "receiver@azure.com", "messageId": "00000000-0000-0000-0000-000000000000", "status": "Delivered",
- "DeliveryStatusDetails": "No error.",
- "ReceivedTimestamp": "2020-09-18T00:22:20.2855749Z",
+ "deliveryAttemptTimeStamp": "2020-09-18T00:22:20.2855749Z",
}, "eventType": "Microsoft.Communication.EmailDeliveryReportReceived", "dataVersion": "1.0",
event-grid Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/storage-upload-process-images.md
Previously updated : 04/04/2022-- Last updated : 02/09/2023+ ms.devlang: csharp, javascript
event-hubs Authenticate Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-application.md
Title: Authenticate an application to access Azure Event Hubs resources description: This article provides information about authenticating an application with Azure Active Directory to access Azure Event Hubs resources Previously updated : 11/08/2022 Last updated : 02/08/2023
Once you've registered your application and granted it permissions to send/recei
For scenarios where acquiring tokens is supported, see the [Scenarios](https://aka.ms/msal-net-scenarios) section of the [Microsoft Authentication Library (MSAL) for .NET](https://github.com/AzureAD/microsoft-authentication-library-for-dotnet) GitHub repository. ## Samples-- [Azure.Messaging.EventHubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp)-
- This sample has been updated to use the latest **Azure.Messaging.EventHubs** library.
-- [Microsoft.Azure.EventHubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac).
+- [RBAC samples using the latest .NET Azure.Messaging.EventHubs package](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac)
+- [RBAC samples using the legacy .NET Microsoft.Azure.EventHubs package](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac).
+- [RBAC sample using the legacy Java com.microsoft.azure.eventhubs package](https://github.com/Azure/azure-event-hubs/tree/master/samples/Jav). You can migrate this sample to use the new package (`com.azure.messaging.eventhubs`). To learn more about using the new package in general, see the samples [here](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/eventhubs/azure-messaging-eventhubs/src/samples/java/com/azure/messaging/eventhubs).
- These samples use the old **Microsoft.Azure.EventHubs** library, but you can easily update it to using the latest **Azure.Messaging.EventHubs** library. To move the sample from using the old library to new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md).
- ## Next steps - To learn more about Azure RBAC, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)?
event-hubs Authenticate Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-managed-identity.md
Title: Authentication a managed identity with Azure Active Directory description: This article provides information about authenticating a managed identity with Azure Active Directory to access Azure Event Hubs resources Previously updated : 12/15/2022 Last updated : 02/08/2023
var ehClient = EventHubClient.CreateWithManagedIdentity(new Uri($"sb://{EventHub
You can use Apache Kafka applications to send messages to and receive messages from Azure Event Hubs using managed identity OAuth. See the following sample on GitHub: [Event Hubs for Kafka - send and receive messages using managed identity OAuth](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/managedidentity). ## Samples-- **Azure.Messaging.EventHubs** samples
- - [.NET](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp)
- - [Java](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/eventhubs/azure-messaging-eventhubs/src/samples/java/com/azure/messaging/eventhubs)
-- [Microsoft.Azure.EventHubs samples](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac).
-
- These samples use the old **Microsoft.Azure.EventHubs** library, but you can easily update it to using the latest **Azure.Messaging.EventHubs** library. To move the sample from using the old library to new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md).
- This sample has been updated to use the latest **Azure.Messaging.EventHubs** library.
-- [Event Hubs for Kafka - send and receive messages using managed identity OAuth](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/managedidentity)+
+- .NET - see the following samples.
+ - For a sample that uses the latest **Azure.Messaging.EventHubs** package, see [Publish events with a managed identity](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Azure.Messaging.EventHubs/ManagedIdentityWebApp)
+ - For a sample that uses the legacy **Microsoft.Azure.EventHubs** package, see [this .NET sample on GitHub](https://github.com/Azure/azure-event-hubs/tree/master/samples/DotNet/Microsoft.Azure.EventHubs/Rbac/ManagedIdentityWebApp)
+- Java - see the following samples.
+ - **Publish events with Azure identity** sample on [GitHub](https://github.com/Azure/azure-sdk-for-java/tree/master/sdk/eventhubs/azure-messaging-eventhubs/src/samples/java/com/azure/messaging/eventhubs).
+ - To learn how to use the Apache Kafka protocol to send events to and receive events from an event hub using a managed identity, see [Event Hubs for Kafka sample to send and receive messages using a managed identity](https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/tutorials/oauth/java/managedidentity).
++
+.
+ ## Next steps
event-hubs Authenticate Shared Access Signature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/authenticate-shared-access-signature.md
# Authenticate access to Event Hubs resources using shared access signatures (SAS) Shared access signature (SAS) gives you granular control over the type of access you grant to the clients that have the shared access signature. Here are some of the controls you can set in a SAS: -- The interval over which the SAS is valid, including the start time and expiry time.
+- The interval over which the SAS is valid, which includes the start time and expiry time.
- The permissions granted by the SAS. For example, a SAS for an Event Hubs namespace might grant the listen permission, but not the send permission. - Only clients that present valid credentials can send data to an event hub. - A client can't impersonate another client.
This article covers authenticating the access to Event Hubs resources using SAS.
## Configuring for SAS authentication
-You can configure the EventHubs shared access authorization rule on an Event Hubs namespace, or an entity (event hub instance or Kafka Topic in an event hub). Configuring a shared access authorization rule on a consumer group is currently not supported, but you can use rules configured on a namespace or entity to secure access to consumer group.
+You can configure a shared access authorization rule on an Event Hubs namespace or an entity (an event hub instance or a Kafka topic in an event hub). Configuring a shared access authorization rule on a consumer group is currently not supported, but you can use rules configured on a namespace or entity to secure access to consumer groups.
The following image shows how the authorization rules apply on sample entities.
In this example, the sample Event Hubs namespace (ExampleNamespace) has two enti
The manageRuleNS, sendRuleNS, and listenRuleNS authorization rules apply to both event hub instance eh1 and topic t1. The listenRule-eh and sendRule-eh authorization rules apply only to event hub instance eh1 and sendRuleT authorization rule applies only to topic topic1.
-When using sendRuleNS authorization rule, client applications can send to both eh1 and topic1. When sendRuleT authorization rule is used, it enforces granular access to topic1 only and hence client applications using this rule for access now cannot send to eh1, but only to topic1.
+When you use sendRuleNS authorization rule, client applications can send to both eh1 and topic1. When sendRuleT authorization rule is used, it enforces granular access to topic1 only and hence client applications using this rule for access now can't send to eh1, but only to topic1.
## Generate a Shared Access Signature token Any client that has access to the name of an authorization rule and one of its signing keys can generate a SAS token. The token is generated by crafting a string in the following format (a C# sketch follows the list below): - `se` – Token expiry instant. Integer reflecting seconds since epoch 00:00:00 UTC on 1 January 1970 (UNIX epoch) when the token expires- `skn` – Name of the authorization rule, that is the SAS key name.
+- `skn` – Name of the authorization rule, which is the SAS key name.
- `sr` – URI of the resource being accessed. - `sig` – Signature.
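Here's a minimal C# sketch of assembling such a token from those four parts. The resource URI, rule (policy) name, and key are placeholders for your own values; the HMAC-SHA256 signature is computed over the URL-encoded resource URI and the expiry, separated by a newline.

```csharp
using System;
using System.Net;
using System.Security.Cryptography;
using System.Text;

// Placeholder values - use your own resource URI, authorization rule name, and key.
string resourceUri = "https://<your-namespace>.servicebus.windows.net/<your-event-hub>";
string keyName = "<your-policy-name>";   // skn
string key = "<your-policy-key>";

// se: expiry expressed as seconds since the UNIX epoch (one hour from now in this sketch).
long expiry = DateTimeOffset.UtcNow.AddHours(1).ToUnixTimeSeconds();

// sig: HMAC-SHA256 over "<URL-encoded resource URI>\n<expiry>", signed with the rule's key.
string stringToSign = WebUtility.UrlEncode(resourceUri) + "\n" + expiry;
using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(key));
string signature = Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(stringToSign)));

// Assemble the token from sr, sig, se, and skn.
string sasToken =
    $"SharedAccessSignature sr={WebUtility.UrlEncode(resourceUri)}" +
    $"&sig={WebUtility.UrlEncode(signature)}&se={expiry}&skn={keyName}";

Console.WriteLine(sasToken);
```

A token produced this way can be passed anywhere a SAS token is accepted, such as the `AzureSASCredential` usage shown later in this article.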
To use a policy name and a key value to connect to an event hub, use the `EventH
const producer = new EventHubProducerClient("NAMESPACE NAME.servicebus.windows.net", eventHubName, new AzureNamedKeyCredential("POLICYNAME", "KEYVALUE")); ```
-You'll need to add a reference to `AzureNamedKeyCredential`.
+You need to add a reference to `AzureNamedKeyCredential`.
```javascript const { AzureNamedKeyCredential } = require("@azure/core-auth");
var token = createSharedAccessToken("https://NAMESPACENAME.servicebus.windows.ne
const producer = new EventHubProducerClient("NAMESPACENAME.servicebus.windows.net", eventHubName, new AzureSASCredential(token)); ```
-You'll need to add a reference to `AzureSASCredential`.
+You need to add a reference to `AzureSASCredential`.
```javascript const { AzureSASCredential } = require("@azure/core-auth");
For example, to define authorization rules scoped down to only sending/publishin
To authenticate back-end applications that consume the data generated by Event Hubs producers, Event Hubs token authentication requires its clients to have either the **manage** rights or the **listen** privileges assigned to the Event Hubs namespace, event hub instance, or topic. Data is consumed from Event Hubs using consumer groups. While a SAS policy gives you granular scope, this scope is defined only at the entity level and not at the consumer level. It means that the privileges defined at the namespace level, or at the event hub instance or topic level, are applied to the consumer groups of that entity. ## Disabling Local/SAS Key authentication
-For certain organizational security requirements, you may have to disable local/SAS key authentication completely and rely on the Azure Active Directory (Azure AD) based authentication which is the recommended way to connect with Azure Event Hubs. You can disable local/SAS key authentication at the Event Hubs namespace level using Azure portal or Azure Resource Manager template.
+For certain organizational security requirements, you may have to disable local/SAS key authentication completely and rely on the Azure Active Directory (Azure AD) based authentication, which is the recommended way to connect with Azure Event Hubs. You can disable local/SAS key authentication at the Event Hubs namespace level using Azure portal or Azure Resource Manager template.
### Disabling Local/SAS Key authentication via the portal You can disable local/SAS key authentication for a given Event Hubs namespace using the Azure portal.
You can disable local authentication for a given Event Hubs namespace by setting
] ```
+## Samples
+
+- See the .NET sample #6 in [this GitHub location](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventhub/Azure.Messaging.EventHubs/samples) to learn how to publish events to an event hub using shared access credentials or the default Azure credential identity.
+- See the .NET sample #5 in [this GitHub location](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples) to learn how to consume or process events using shared access credentials or the default Azure credential identity.
+ ## Next steps See the following articles:
event-hubs Azure Event Hubs Kafka Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/azure-event-hubs-kafka-overview.md
Title: Use Azure Event Hubs to stream data from Apache Kafka apps
-description: Learn how to use Azure Event Hubs to stream data from Apache Kafka applications without setting up a Kafka cluster on your own.
+ Title: Introduction to Apache Kafka on Azure Event Hubs
+description: Learn what Apache Kafka on Azure Event Hubs is and how to use it to stream data from Apache Kafka applications without setting up a Kafka cluster on your own.
Last updated 02/03/2023
-# Use Azure Event Hubs to stream data from Apache Kafka applications
+# What is Azure Event Hubs for Apache Kafka?
This article explains how you can use Azure Event Hubs to stream data from [Apache Kafka](https://kafka.apache.org) applications without setting up a Kafka cluster on your own.
expressroute Expressroute Monitoring Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-monitoring-metrics-alerts.md
description: Learn about Azure ExpressRoute monitoring, metrics, and alerts usin
Previously updated : 05/10/2022 Last updated : 02/08/2023 # ExpressRoute monitoring, metrics, and alerts
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| | | | | | | | | [Arp Availability](#arp) | Availability | Percent | Average | ARP Availability from MSEE towards all peers. | Peering Type, Peer | Yes | | [Bgp Availability](#bgp) | Availability | Percent | Average | BGP Availability from MSEE towards all peers. | Peering Type, Peer | Yes |
-| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Peering Type | No |
-| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Peering Type | No |
+| [BitsInPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Peering Type | Yes |
+| [BitsOutPerSecond](#circuitbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Peering Type | Yes |
| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Peering Type | Yes | | DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Peering Type | Yes |
-| GlobalReachBitsInPerSecond | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeredCircuitSKey | No |
-| GlobalReachBitsOutPerSecond | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | PeeredCircuitSKey | No |
+| GlobalReachBitsInPerSecond | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | PeeredCircuitSKey | Yes |
+| GlobalReachBitsOutPerSecond | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | PeeredCircuitSKey | Yes |
| [FastPathRoutesCount](#fastpath-routes-count-at-circuit-level) | Fastpath | Count | Maximum | Count of FastPath routes configured on the circuit | None | Yes | >[!NOTE]
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? | | | | | | | | |
-| [Bits received per second](#gwbits) | Performance | BitsPerSecond | Average | Total bits received on ExpressRoute gateway per second | roleInstance | No |
+| [Bits received per second](#gwbits) | Performance | BitsPerSecond | Average | Total bits received on ExpressRoute gateway per second | roleInstance | Yes |
| [CPU utilization](#cpu) | Performance | Count | Average | CPU Utilization of the ExpressRoute Gateway | roleInstance | Yes |
-| [Packets per second](#packets) | Performance | CountPerSecond | Average | Total Packets received on ExpressRoute Gateway per second | roleInstance | No |
+| [Packets per second](#packets) | Performance | CountPerSecond | Average | Total Packets received on ExpressRoute Gateway per second | roleInstance | Yes |
| [Count of routes advertised to peer](#advertisedroutes) | Availability | Count | Maximum | Count Of Routes Advertised To Peer by ExpressRouteGateway | roleInstance | Yes |
| [Count of routes learned from peer](#learnedroutes)| Availability | Count | Maximum | Count Of Routes Learned From Peer by ExpressRouteGateway | roleInstance | Yes |
-| [Frequency of routes changed](#frequency) | Availability | Count | Total | Frequency of Routes change in ExpressRoute Gateway | roleInstance | No |
-| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Number of VMs in the Virtual Network | No Dimensions | No |
+| [Frequency of routes changed](#frequency) | Availability | Count | Total | Frequency of Routes change in ExpressRoute Gateway | roleInstance | Yes |
+| [Number of VMs in virtual network](#vm) | Availability | Count | Maximum | Number of VMs in the Virtual Network | No Dimensions | Yes |
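Because these metrics are exposed through Azure Monitor, they can also be pulled from the command line. The following is a minimal sketch, not a definitive recipe: the resource ID is a placeholder, and the metric name should be taken from the `list-definitions` output rather than trusted from this example.

```bash
# Placeholder resource ID for an ExpressRoute virtual network gateway.
RES="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/virtualNetworkGateways/<er-gateway>"

# Discover the exact metric names the gateway exposes...
az monitor metrics list-definitions --resource "$RES" --query "[].name.value" -o tsv

# ...then query one of them (name illustrative; confirm it against the output above).
az monitor metrics list --resource "$RES" --metric "ExpressRouteGatewayCpuUtilization" --aggregation Average -o table
```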
### ExpressRoute Gateway connections

| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
| | | | | | | | |
-| [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second through ExpressRoute gateway | ConnectionName | No |
-| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | No |
+| [BitsInPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second through ExpressRoute gateway | ConnectionName | Yes |
+| [BitsOutPerSecond](#connectionbandwidth) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second through ExpressRoute gateway | ConnectionName | Yes |
| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | ConnectionName | Yes |
| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | ConnectionName | Yes |
Metrics explorer supports SUM, MAX, MIN, AVG and COUNT as [aggregation types](..
| Metric | Category | Unit | Aggregation Type | Description | Dimensions | Exportable via Diagnostic Settings? |
| | | | | | | | |
-| [BitsInPerSecond](#directin) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Link | No |
-| [BitsOutPerSecond](#directout) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Link | No |
-| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Link | No |
-| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Link | No |
-| [AdminState](#admin) | Physical Connectivity | Count | Average | Admin state of the port | Link | No |
-| [LineProtocol](#line) | Physical Connectivity | Count | Average | Line protocol status of the port | Link | No |
-| [RxLightLevel](#rxlight) | Physical Connectivity | Count | Average | Rx Light level in dBm | Link, Lane | No |
-| [TxLightLevel](#txlight) | Physical Connectivity | Count | Average | Tx light level in dBm | Link, Lane | No |
+| [BitsInPerSecond](#directin) | Traffic | BitsPerSecond | Average | Bits ingressing Azure per second | Link | Yes |
+| [BitsOutPerSecond](#directout) | Traffic | BitsPerSecond | Average | Bits egressing Azure per second | Link | Yes |
+| DroppedInBitsPerSecond | Traffic | BitsPerSecond | Average | Ingress bits of data dropped per second | Link | Yes |
+| DroppedOutBitsPerSecond | Traffic | BitPerSecond | Average | Egress bits of data dropped per second | Link | Yes |
+| [AdminState](#admin) | Physical Connectivity | Count | Average | Admin state of the port | Link | Yes |
+| [LineProtocol](#line) | Physical Connectivity | Count | Average | Line protocol status of the port | Link | Yes |
+| [RxLightLevel](#rxlight) | Physical Connectivity | Count | Average | Rx Light level in dBm | Link, Lane | Yes |
+| [TxLightLevel](#txlight) | Physical Connectivity | Count | Average | Tx light level in dBm | Link, Lane | Yes |
| [FastPathRoutesCount](#fastpath-routes-count-at-port-level) | FastPath | Count | Maximum | Count of FastPath routes configured on the port | None | Yes |

### ExpressRoute Traffic Collector
Set up your ExpressRoute connection.
* [Create and modify a circuit](expressroute-howto-circuit-arm.md) * [Create and modify peering configuration](expressroute-howto-routing-arm.md)
-* [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
+* [Link a VNet to an ExpressRoute circuit](expressroute-howto-linkvnet-arm.md)
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/validation-against-profiles.md
There are different ways for you to validate a resource:
Azure API for FHIR always returns an `OperationOutcome` as the validation result for the $validate operation. Azure API for FHIR does two-step validation. Once a resource is passed to the $validate endpoint, the first step is a basic validation to ensure the resource can be parsed. During resource parsing, individual errors need to be fixed before proceeding to the next step. Once the resource is successfully parsed, full validation is conducted as the second step.
> [!NOTE]
-> Any valuesets that are to be used for validation must be uploaded to the FHIR server. This includes any Valuesets which are part of the FHIR specification, > as well as any ValueSets defined in Implementation Guides. Only fully expanded Valuesets which contain a full list of all codes are supported. Any > ValueSet definitions which reference external sources are not supported
+> Any valuesets that are to be used for validation must be uploaded to the FHIR server. This includes any Valuesets which are part of the FHIR specification, as well as any ValueSets defined in Implementation Guides. Only fully expanded Valuesets which contain a full list of all codes are supported. Any ValueSet definitions which reference external sources are not supported.
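As a hedged illustration of that upload step (the service URL, bearer token, and file name are placeholders, not values from this article), a fully expanded ValueSet can be sent to Azure API for FHIR with a plain FHIR REST call:

```bash
# Placeholder service name, token, and file; PUT creates or updates the ValueSet
# whose id matches the one in the URL and resource body.
curl -X PUT "https://<your-fhir-server>.azurehealthcareapis.com/ValueSet/my-expanded-valueset" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/fhir+json" \
  --data @expanded-valueset.json
```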
## Validating an existing resource
healthcare-apis Validation Against Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/validation-against-profiles.md
# Validate FHIR resources against profiles in Azure Health Data Services
-`$validate` is an operation in Fast Healthcare Interoperability Resources (FHIR&#174;) that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This is a valuable operation to ensure that the data in the FHIR server has the expected attributes and values.
+In the [store profiles in the FHIR service](store-profiles-in-fhir.md) article, you walked through the basics of FHIR profiles and storing them. The FHIR service in Azure Health Data Services (hereby called the FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article will guide you through how to use `$validate` for validating resources against profiles.
-In the [store profiles in the FHIR service](store-profiles-in-fhir.md) article, you walked through the basics of FHIR profiles and storing them. The FHIR service in Azure Health Data Services (hereby called the FHIR service) allows validating resources against profiles to see if the resources conform to the profiles. This article will guide you through how to use `$validate` for validating resources against profiles. For more information about FHIR profiles outside of this article, visit
-[HL7.org](https://www.hl7.org/fhir/profiling.html).
-
-## Validating resources against the profiles
-
-FHIR resources can express their conformance to specific profiles. This allows the FHIR service to **validate** given resources against profiles. Validating a resource against a profile means checking if the resource conforms to the profile, including the specifications listed in `Resource.meta.profile` or in an Implementation Guide. There are two ways for you to validate your resource:
--- You can use `$validate` operation against a resource that is already in the FHIR service. -- You can include `$validate` when you create or update a resource. -
-In both cases, you can decide by way of your FHIR service configuration what to do when the resource doesn't conform to your desired profile.
-
-## Using $validate
+`$validate` is an operation in Fast Healthcare Interoperability Resources (FHIR&#174;) that allows you to ensure that a FHIR resource conforms to the base resource requirements or a specified profile. This operation ensures that the data in the FHIR service has the expected attributes and values. For information on the validate operation, visit the [HL7 FHIR Specification](https://www.hl7.org/fhir/resource-operation-validate.html).
+Per the specification, a mode can be specified with `$validate`, such as create and update:
+- `create`: The FHIR service checks that the profile content is unique from the existing resources and that it's acceptable to be created as a new resource.
+- `update`: Checks that the profile is an update against the nominated existing resource (that is, no changes are made to the immutable fields).
-The `$validate` operation checks whether the provided profile is valid, and whether the resource conforms to the specified profile. As mentioned in the [HL7 FHIR specifications](https://www.hl7.org/fhir/resource-operation-validate.html), you can also specify the mode for `$validate`, such as create and update:
+There are different ways for you to validate a resource:
+- Validate an existing resource with the validate operation.
+- Validate a new resource with the validate operation.
+- Validate on resource CREATE/UPDATE using a header.
-- `create`: The server checks that the profile content is unique from the existing resources and that it's acceptable to be created as a new resource.-- `update`: Checks that the profile is an update against the nominated existing resource (that is no changes are made to the immutable fields).
+The FHIR service always returns an `OperationOutcome` as the validation result for the $validate operation. The FHIR service does two-step validation. Once a resource is passed to the $validate endpoint, the first step is a basic validation to ensure the resource can be parsed. During resource parsing, individual errors need to be fixed before proceeding to the next step. Once the resource is successfully parsed, full validation is conducted as the second step.
-The server will always return an `OperationOutcome` as the validation results.
+> [!NOTE]
+> Any valuesets that are to be used for validation must be uploaded to the FHIR server. This includes any Valuesets which are part of the FHIR specification, as well as any ValueSets defined in Implementation Guides. Only fully expanded Valuesets which contain a full list of all codes are supported. Any ValueSet definitions which reference external sources are not supported.
## Validating an existing resource
For example:
This request first validates the resource. The new resource you're specifying in the request is created after validation. The server always returns an `OperationOutcome` as the result.
-## Validate on resource CREATE or resource UPDATE
+## Validate on resource CREATE/UPDATE using header
+
+You can choose when you'd like to validate your resource, such as on resource `CREATE` or `UPDATE`. By default, the FHIR service is configured to opt out of validation on resource `Create/Update`. This capability allows you to validate on `Create/Update` by using the `x-ms-profile-validation` header. Set `x-ms-profile-validation` to true to enable validation.
-You can choose when you'd like to validate your resource, such as on resource `CREATE` or `UPDATE`. By default, the FHIR service is configured to opt out of validation on resource `Create/Update`. To validate on `Create/Update`, you can use the `x-ms-profile-validation` header set to true: `x-ms-profile-validation: true`.
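For illustration only, here's a hedged sketch of a create call with the header set; the service URL, token, and payload file are placeholders rather than values from this article.

```bash
# Placeholder workspace, service, token, and payload; the x-ms-profile-validation header
# asks the FHIR service to validate the resource against its profiles during the create.
curl -X POST "https://<workspace>-<fhirservice>.fhir.azurehealthcareapis.com/Patient" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/fhir+json" \
  -H "x-ms-profile-validation: true" \
  --data @patient.json
```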
> [!NOTE] > In the open-source FHIR service, you can change the server configuration setting, under the CoreFeatures.
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-transparent-gateway.md
If you don't have a device ready, you should create one before continuing with t
All IoT Edge gateways need a device CA certificate installed on them. The IoT Edge security daemon uses the IoT Edge device CA certificate to sign a workload CA certificate, which in turn signs a server certificate for IoT Edge hub. The gateway presents its server certificate to the downstream device during the initiation of the connection. The downstream device checks to make sure that the server certificate is part of a certificate chain that rolls up to the root CA certificate. This process allows the downstream device to confirm that the gateway comes from a trusted source. For more information, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
-![Gateway certificate setup](./media/how-to-create-transparent-gateway/gateway-setup.png)
The root CA certificate and the device CA certificate (with its private key) need to be present on the IoT Edge gateway device and configured in the IoT Edge config file. Remember that in this case *root CA certificate* means the topmost certificate authority for this IoT Edge scenario. The gateway device CA certificate and the downstream device certificates need to roll up to the same root CA certificate.
If you don't have your own certificate authority and want to use demo certificat
Now, you need to copy the certificates to the Azure IoT Edge for Linux on Windows virtual machine.
+For more information on the following commands, see [PowerShell functions for IoT Edge](reference-iot-edge-for-linux-on-windows-functions.md).
+
1. Check that the certificate meets the [format requirements](how-to-manage-device-certificates.md#format-requirements).
1. Copy the certificates to the EFLOW virtual machine, to a directory where you have write access. For example, the `/home/iotedge-user` home directory.
iot-edge How To Deploy Modules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-cli.md
Here's a basic deployment manifest with one module as an example:
You deploy modules to your device by applying the deployment manifest that you configured with the module information.
-Change directories into the folder where your deployment manifest is saved. If you used one of the VS Code IoT Edge templates, use the `deployment.json` file in the **config** folder of your solution directory and not the `deployment.template.json` file.
+Change directories into the folder where your deployment manifest is saved. If you used one of the Visual Studio Code IoT Edge templates, use the `deployment.json` file in the **config** folder of your solution directory and not the `deployment.template.json` file.
Use the following command to apply the configuration to an IoT Edge device:
iot-edge How To Deploy Modules Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-deploy-modules-vscode.md
You deploy modules to your device by applying the deployment manifest that you c
![Select Edge Deployment Manifest](./media/how-to-deploy-modules-vscode/select-deployment-manifest.png)
-The results of your deployment are printed in the VS Code output. Successful deployments are applied within a few minutes if the target device is running and connected to the internet.
+The results of your deployment are printed in the Visual Studio Code output. Successful deployments are applied within a few minutes if the target device is running and connected to the internet.
## View modules on your device
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
[!INCLUDE [iot-edge-version-1.4](includes/iot-edge-version-1.4.md)]
-All IoT Edge devices use certificates to create secure connections between the runtime and any modules running on the device. IoT Edge devices functioning as gateways use these same certificates to connect to their downstream devices, too. For more information about the function of the different certificates on an IoT Edge device, see [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
+All IoT Edge devices use certificates to create secure connections between the runtime and any modules running on the device. IoT Edge devices functioning as gateways use these same certificates to connect to their downstream devices, too.
> [!NOTE]
-> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You do not need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. In many cases, it is actually an intermediate CA certificate.
+> The term *root CA* used throughout this article refers to the topmost authority's certificate in the certificate chain for your IoT solution. You do not need to use the certificate root of a syndicated certificate authority, or the root of your organization's certificate authority. In many cases, it's actually an intermediate CA certificate.
## Prerequisites
-* [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md).
+* You should be familiar with the concepts in [Understand how Azure IoT Edge uses certificates](iot-edge-certs.md), in particular how IoT Edge uses certificates.
* An IoT Edge device.
- If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of the quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md).
+
+ If you don't have an IoT Edge device set up, you can create one in an Azure virtual machine. Follow the steps in one of these quickstart articles to [Create a virtual Linux device](quickstart-linux.md) or [Create a virtual Windows device](quickstart.md).
* Ability to edit the IoT Edge configuration file `config.toml` following the [configuration template](https://github.com/Azure/iotedge/blob/main/edgelet/contrib/config/linux/template.toml).
- * If your `config.toml` isn't based on the template, open the [template](https://github.com/Azure/iotedge/blob/main/edgelet/contrib/config/linux/template.toml) and use the commented guidance to add configuration sections following the structure of the template.
- * If you have a new IoT Edge installation that hasn't been configured, copy the template to initialize the configuration. Don't use this command if you have an existing configuration. It overwrites the file.
+
+* If your `config.toml` isn't based on the template, open the [template](https://github.com/Azure/iotedge/blob/main/edgelet/contrib/config/linux/template.toml) and use the commented guidance to add configuration sections following the structure of the template.
+
+* If you have a new IoT Edge installation that hasn't been configured, copy the template to initialize the configuration. Don't use this command if you have an existing configuration. It overwrites the file.
```bash
sudo cp /etc/aziot/config.toml.edge.template /etc/aziot/config.toml
All IoT Edge devices use certificates to create secure connections between the r
> [!TIP] >
-> * A certificate can be encoded in a binary representation called DER, or a textual representation called PEM. The PEM format is a `--BEGIN CERTIFICATE--` header followed by the base64-encoded DER followed by a `--END CERTIFICATE--` footer.
+> * A certificate can be encoded in a binary representation called DER (Distinguished Encoding Rules), or a textual representation called PEM (Privacy Enhanced Mail). The PEM format has a `--BEGIN CERTIFICATE--` header followed by the base64-encoded DER followed by an `--END CERTIFICATE--` footer.
> * Similar to the certificate, the private key can be encoded in binary DER or textual representation PEM.
-> * Because PEM is delineated, it is also possible to construct a PEM that combines both the `CERTIFICATE` and `PRIVATE KEY` sequentially in the same file.
-> * Lastly, the certificate and private key can be encoded together in a binary representation called *PKCS#12*, that is encrypted with an optional password.
+> * Because PEM is delineated, it's also possible to construct a PEM that combines both the `CERTIFICATE` and `PRIVATE KEY` sequentially in the same file.
+> * Lastly, the certificate and private key can be encoded together in a binary representation called *PKCS#12*, that's encrypted with an optional password.
> > File extensions are arbitrary and you need to run the `file` command or view the file to verify the type. In general, files use the following extension conventions: >
sudo find /var/aziot/secrets -type f -name "*.*" -exec chmod 600 {} \;
sudo ls -Rla /var/aziot
```
-The output of list with correct ownership and permission is similar to the following:
+The output of the list with the correct ownership and permission is similar to the following output:
```Output
azureUser@vm:/var/aziot$ sudo ls -Rla /var/aziot
Using a self-signed certificate authority (CA) certificate as a root of trust wi
1. Get a publicly trusted root CA certificate from a PKI provider.
-1. Check the certificate meets [format requirements](#format-requirements).
+1. Check that the certificate meets the [format requirements](#format-requirements).
1. Copy the PEM file and give IoT Edge's certificate service access. For example, with `/var/aziot/certs` directory:
Using a self-signed certificate authority (CA) certificate as a root of trust wi
sudo chmod 644 /var/aziot/certs/root-ca.pem
```
-1. In the IoT Edge configuration file `config.toml`, find **Trust bundle cert** section. If the section is missing, you can copy it from the configuration template file.
+1. In the IoT Edge configuration file `config.toml`, find the **Trust bundle cert** section. If the section is missing, you can copy it from the configuration template file.
>[!TIP] >If the config file doesn't exist on your device yet, then use `/etc/aziot/config.toml.edge.template` as a template to create one.
-1. Set `trust_bundle_cert` key to the certificate file location.
+1. Set the `trust_bundle_cert` key to the certificate file location.
```toml
trust_bundle_cert = "file:///var/aziot/certs/root-ca.pem"
Installing the certificate to the trust bundle file makes it available to contai
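A change to `config.toml` typically takes effect only after the configuration is reapplied. Assuming the IoT Edge 1.4 `iotedge` CLI is installed, a minimal sketch:

```bash
# Reapply the IoT Edge configuration so the new trust bundle setting is picked up.
sudo iotedge config apply
```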
## Import certificate and private key files
-IoT Edge can use existing certificate and private key files to authenticate or attest to Azure, issue new module server certificates, and authenticate to EST servers. To install them:
+IoT Edge can use existing certificates and private key files to authenticate or attest to Azure, issue new module server certificates, and authenticate to EST servers. To install them:
1. Check the certificate and private key files meet the [format requirements](#format-requirements).
-1. Copy the PEM file to the IoT Edge device where IoT Edge modules can have access. For example, `/var/aziot/` directory.
+1. Copy the PEM file to the IoT Edge device where IoT Edge modules can have access. For example, the `/var/aziot/` directory.
```bash
# If the certificate and keys directories don't exist, create, set ownership, and set permissions
This approach requires you to manually update the files as certificates expire.
IoT Edge can interface with an [Enrollment over Secure Transport (EST) server](https://wikipedia.org/wiki/Enrollment_over_Secure_Transport) for automatic certificate issuance and renewal. Using EST is recommended for production as it replaces the need for manual certificate management, which can be risky and error-prone. It can be configured globally and overridden for each certificate type.
-In this scenario, the bootstrap certificate and private key are expected to be long-lived and potentially installed on the device during manufacturing. IoT Edge uses the bootstrap credentials to authenticate to the EST server for the initial request to issue an identity certificate for subsequent requests, as well as for authentication to DPS or IoT Hub.
+In this scenario, the bootstrap certificate and private key are expected to be long-lived and potentially installed on the device during manufacturing. IoT Edge uses the bootstrap credentials to authenticate to the EST server for the initial request to issue an identity certificate for subsequent requests and for authentication to DPS or IoT Hub.
1. Get access to an EST server. If you don't have an EST server, use one of the following options to start testing:
iot-edge How To Vs Code Develop Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-vs-code-develop-module.md
Title: Develop and debug modules Azure IoT Edge modules using VS Code
+ Title: Develop and debug Azure IoT Edge modules using Visual Studio Code
description: Use Visual Studio Code to develop, build, and debug a module for Azure IoT Edge using C#, Python, Node.js, Java, or C
zone_pivot_groups: iotedge-dev
This article shows you how to use Visual Studio Code to develop and debug IoT Edge modules in multiple languages and multiple architectures. On your development computer, you can use Visual Studio Code to attach and debug your module in a local or remote module container.
-You can choose either the **Azure IoT Edge Dev Tool** CLI or the **Azure IoT Edge tools for VS Code** extension as your IoT Edge development tool. Use the tool selector button at the beginning to choose your tool option for this article.
+You can choose either the **Azure IoT Edge Dev Tool** CLI or the **Azure IoT Edge tools for Visual Studio Code** extension as your IoT Edge development tool. Use the tool selector button at the beginning to choose your tool option for this article.
Visual Studio Code supports writing IoT Edge modules in the following programming languages:
To build and deploy your module image, you need Docker to build the module image
::: zone pivot="iotedge-dev-cli" -- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) in order to set up your local development environment to debug, run, and test your IoT Edge solution. If you haven't already done so, install [Python (3.6/3.7/3.8) and Pip3](https://www.python.org/) and then install the IoT Edge Dev Tool (iotedgedev) by running this command in your terminal.
+- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) in order to set up your local development environment to debug, run, and test your IoT Edge solution. If you haven't already, install [Python (3.6/3.7)](https://www.python.org/downloads/) and Pip3 and then install the IoT Edge Dev Tool (iotedgedev) with the following command in your terminal.
```cmd
pip3 install iotedgedev
```
-
+
> [!NOTE]
> If you have multiple Python versions installed, including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you are using `pip3` to install the *IoT Edge Dev Tool (iotedgedev)*.
>
> For more information about setting up your development machine, see [iotedgedev development setup](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/manual-dev-machine-setup.md).
+ To stay current on the testing environment for IoT Edge, see the [test-coverage](https://github.com/Azure/iotedgedev/blob/main/docs/test-coverage.md) list.
+ ::: zone-end Install prerequisites specific to the language you're developing in:
Install prerequisites specific to the language you're developing in:
# [C\# / Azure Functions](#tab/csharp+azfunctions) - Install [.NET Core SDK](https://dotnet.microsoft.com/download)-- Install [C# VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
+- Install [C# Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-dotnettools.csharp)
# [C](#tab/c) -- Install [C/C++ VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools)
+- Install [C/C++ Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.cpptools)
# [Java](#tab/java)
Install prerequisites specific to the language you're developing in:
# [Python](#tab/python) -- Install [Python](https://www.python.org/downloads/) and [Pip](https://pip.pypa.io/en/stable/installing/#installation) for installing Python packages (typically included with your Python installation).-- Install [Python VS Code extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
+- Install [Python](https://www.python.org/downloads/) and [Pip](https://pip.pypa.io/en/stable/installation/)
+- Install [Python Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-python.python)
+- Install the Python-based [Azure IoT Edge Dev Tool](https://pypi.org/project/iotedgedev/) to debug, run, and test your IoT Edge solution. You can alternatively install the Azure IoT Edge Dev Tool using the CLI:
+
+ ```cmd
+ pip3 install iotedgedev
+ ```
+
+ > [!NOTE]
+ >
+ > If you have multiple Python versions installed, including pre-installed Python 2.7 (for example, on Ubuntu or macOS), make sure you are using `pip3` to install the *IoT Edge Dev Tool (iotedgedev)*. For more information about setting up your development machine, see [iotedgedev development setup](https://github.com/Azure/iotedgedev/blob/main/docs/environment-setup/manual-dev-machine-setup.md).
There are four items within the solution:
### Set IoT Edge runtime version
-The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets. Currently, the latest stable version is version 1.4. If you're developing modules for devices running the 1.1 long-term support version or the earlier 1.0 version, update the IoT Edge runtime version in Visual Studio Code to match.
+The IoT Edge extension defaults to the latest stable version of the IoT Edge runtime when it creates your deployment assets.
::: zone pivot="iotedge-dev-ext"
When you debug modules using this method, your modules are running on top of the
### Build and deploy your module to an IoT Edge device
-In Visual Studio Code, open *deployment.debug.template.json* deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) is a JSON document that describes the modules to be configured on the targeted IoT Edge device. Before deployment, you need to update your Azure Container Registry credentials and your module images with the proper `createOptions` values. For more information about createOption values, see [How to configure container create options for IoT Edge modules](how-to-use-create-options.md).
+In Visual Studio Code, open the *deployment.debug.template.json* deployment manifest file. The [deployment manifest](module-deployment-monitoring.md#deployment-manifest) is a JSON document that describes the modules to be configured on the targeted IoT Edge device. Before deployment, you need to update your Azure Container Registry credentials and your module images with the proper `createOptions` values. For more information about createOption values, see [How to configure container create options for IoT Edge modules](how-to-use-create-options.md).
::: zone pivot="iotedge-dev-cli"
az iot edge set-modules --hub-name my-iot-hub --device-id my-device --content ./
``` > [!TIP]
-> You can find your IoT Hub connection string in the Azure portal under Azure IoT Hub > **Security settings** > **Shared access policies**.
+> You can find your IoT Hub connection string in the Azure portal in your IoT Hub > **Security settings** > **Shared access policies** > **iothubowner**.
> ::: zone-end
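As an alternative to the portal, the same connection string can usually be retrieved with the Azure CLI; this sketch assumes the `azure-iot` extension is installed and uses a placeholder hub name.

```bash
# Placeholder hub name; prints the iothubowner connection string.
az extension add --name azure-iot
az iot hub connection-string show --hub-name my-iot-hub --policy-name iothubowner
```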
Add your breakpoint to the file `main.py` in the callback method where you added
-To debug modules on a remote device, you can use Remote SSH debugging in VS Code.
+To debug modules on a remote device, you can use Remote SSH debugging in Visual Studio Code.
-To enable VS Code remote debugging, install the [Remote Development extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack). For more information about VS Code remote debugging, see [VS Code Remote Development](https://code.visualstudio.com/docs/remote/remote-overview).
+To enable Visual Studio Code remote debugging, install the [Remote Development extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.vscode-remote-extensionpack). For more information about Visual Studio Code remote debugging, see [Visual Studio Code Remote Development](https://code.visualstudio.com/docs/remote/remote-overview).
-For details on how to use Remote SSH debugging in VS Code, see [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh)
+For details on how to use Remote SSH debugging in Visual Studio Code, see [Remote Development using SSH](https://code.visualstudio.com/docs/remote/ssh).
In the Visual Studio Code Debug view, select the debug configuration file for your module. By default, the **.debug** Dockerfile, module's container `createOptions` settings, and `launch.json` file are configured to use *localhost*.
Select **Start Debugging** or select **F5**. Select the process to attach to. In
## Debug using Docker Remote SSH
-The Docker and Moby engines support SSH connections to containers allowing you to debug in VS Code connected to a remote device. You need to meet the following prerequisites before you can use this feature.
+The Docker and Moby engines support SSH connections to containers allowing you to debug in Visual Studio Code connected to a remote device. You need to meet the following prerequisites before you can use this feature.
### Configure Docker SSH tunneling

1. Follow the steps in [Docker SSH tunneling](https://code.visualstudio.com/docs/containers/ssh#_set-up-ssh-tunneling) to configure SSH tunneling on your development computer. SSH tunneling requires public/private key pair authentication and a Docker context defining the remote device endpoint.
1. Connecting to Docker requires root-level privileges. Follow the steps in [Manage docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall) to allow connection to the Docker daemon on the remote device. When you're finished debugging, you may want to remove your user from the Docker group.
-1. In Visual Studio Code, use the Command Palette (Ctrl+Shift+P) to issue the *Docker Context: Use* command to activate the Docker context pointing to the remote machine. This command causes both VS Code and Docker CLI to use the remote machine context.
+1. In Visual Studio Code, use the Command Palette (Ctrl+Shift+P) to issue the *Docker Context: Use* command to activate the Docker context pointing to the remote machine. This command causes both Visual Studio Code and Docker CLI to use the remote machine context.
> [!TIP] > All Docker commands use the current context. Remember to change context back to *default* when you are done debugging.
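The *Docker Context: Use* step above assumes a context for the remote device already exists. A minimal sketch of creating one over SSH, with a placeholder user and host name:

```bash
# Placeholder user/host; the context tunnels Docker CLI commands to the
# remote device's Docker daemon over SSH.
docker context create iotedge-device --docker "host=ssh://iotedge-user@edge-device.local"
docker context use iotedge-device

# Switch back when you're done debugging.
docker context use default
```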
The Docker and Moby engines support SSH connections to containers allowing you t
edgeAgent ```
-1. In the *.vscode* directory, add a new configuration to **launch.json** by opening the file in VS Code. Select **Add configuration** then choose the matching remote attach template for your module. For example, the following configuration is for .NET Core. Change the value for the *-H* parameter in *PipeArgs* to your device DNS name or IP address.
+1. In the *.vscode* directory, add a new configuration to **launch.json** by opening the file in Visual Studio Code. Select **Add configuration** then choose the matching remote attach template for your module. For example, the following configuration is for .NET Core. Change the value for the *-H* parameter in *PipeArgs* to your device DNS name or IP address.
```json
"configurations": [
The Docker and Moby engines support SSH connections to containers allowing you t
### Remotely debug your module
-1. In VS Code Debug view, select the debug configuration *Remote Debug IoT Edge Module (.NET Core)*.
+1. In Visual Studio Code Debug view, select the debug configuration *Remote Debug IoT Edge Module (.NET Core)*.
1. Select **Start Debugging** or select **F5**. Select the process to attach to.
1. In the Visual Studio Code Debug view, you'll see the variables in the left panel.
-1. In VS Code, set breakpoints in your custom module.
+1. In Visual Studio Code, set breakpoints in your custom module.
1. When a breakpoint is hit, you can inspect variables, step through code, and debug your module.
- :::image type="content" source="media/how-to-vs-code-develop-module/vs-code-breakpoint.png" alt-text="Screenshot of VS Code attached to a Docker container on a remote device paused at a breakpoint.":::
+ :::image type="content" source="media/how-to-vs-code-develop-module/vs-code-breakpoint.png" alt-text="Screenshot of Visual Studio Code attached to a Docker container on a remote device paused at a breakpoint.":::
> [!NOTE] > The preceding example shows how to debug IoT Edge modules on remote containers. It added a remote Docker context and changes to the Docker privileges on the remote device. After you finish debugging your modules, set your Docker context to *default* and remove privileges from your user account.
iot-edge Iot Edge Certs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/iot-edge-certs.md
These core scenarios are where IoT Edge uses certificates. Use the links to lear
To help understand IoT Edge certificate concepts, imagine a scenario where an IoT Edge device named *EdgeGateway* connects to an Azure IoT Hub named *ContosoIotHub*. In this example, all authentication is done with X.509 certificate authentication rather than symmetric keys. To establish trust in this scenario, we need to guarantee the IoT Hub and IoT Edge device are authentic: *"Is this device genuine and valid?"* and *"Is the identity of the IoT Hub correct?"*. The scenario can be illustrated as follows: <!-- mermaid stateDiagram-v2
The root certificate authority (CA) is the [Baltimore CyberTrust Root](https://w
Windows certificate store: Ubuntu certificate store: When a device checks for the *Baltimore CyberTrust Root* certificate, it's preinstalled in the OS. From *EdgeGateway* perspective, since the certificate chain presented by *ContosoIotHub* is signed by a root CA that the OS trusts, the certificate is considered trustworthy. The certificate is known as **IoT Hub server certificate**. To learn more about the IoT Hub server certificate, see [Transport Layer Security (TLS) support in IoT Hub](../iot-hub/iot-hub-tls-support.md).
If we view the thumbprint value for the *EdgeGateway* device in the Azure portal
:::image type="content" source="./media/iot-edge-certs/edge-id-thumbprint.png" alt-text="Screenshot from Azure portal of EdgeGateway device's thumbprint in ContosoIotHub.":::
-In summary, *ContosoIotHub* can trust *EdgeGateway* because *EdgeGateway* presents a valid **IoT Edge device identity certificate** whose thumbprint matches the one registered in IoT Hub.
+In summary, *ContosoIotHub* can trust *EdgeGateway* because *EdgeGateway* presents a valid **IoT Edge device identity certificate** whose thumbprint matches the one registered in IoT Hub.
+
+For more information about the certificate building process, see [Create and provision an IoT Edge device on Linux using X.509 certificates](how-to-provision-single-device-linux-x509.md).
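If you want to check that match yourself, one hedged way (with a placeholder file name) is to print the certificate's SHA-1 fingerprint locally and compare it with the thumbprint shown in the portal:

```bash
# Placeholder file name; strips the label and colons so the output matches
# the thumbprint format IoT Hub displays.
openssl x509 -in edge-device-identity.pem -noout -fingerprint -sha1 | sed 's/^.*=//; s/://g'
```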
> [!NOTE] > This example doesn't address Azure IoT Hub Device Provisioning Service (DPS), which has support for X.509 CA authentication with IoT Edge when provisioned with an enrollment group. Using DPS, you upload the CA certificate or an intermediate certificate, the certificate chain is verified, then the device is provisioned. To learn more, see [DPS X.509 certificate attestation](../iot-dps/concepts-x509-attestation.md).
You now have a good understanding of a simple interaction IoT Edge between and I
We add a regular IoT device named *TempSensor*, which connects to its parent IoT Edge device *EdgeGateway* that connects to IoT Hub *ContosoIotHub*. Similar to before, all authentication is done with X.509 certificate authentication. Our new scenario raises two new questions: *"Is the TempSensor device legitimate?"* and *"Is the identity of the EdgeGateway correct?"*. The scenario can be illustrated as follows: <!-- mermaid stateDiagram-v2
iot-edge Tutorial C Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-c-module.md
Use the following table to understand your options for developing and deploying
| C | Visual Studio Code | Visual Studio | | - | | - |
-| **Linux AMD64** | ![Use VS Code for C modules on Linux AMD64](./medi64](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM32** | ![Use VS Code for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM64** | ![Use VS Code for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) |
+| **Linux AMD64** | ![Use Visual Studio Code for C modules on Linux AMD64](./medi64](./media/tutorial-c-module/green-check.png) |
+| **Linux ARM32** | ![Use Visual Studio Code for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM32](./media/tutorial-c-module/green-check.png) |
+| **Linux ARM64** | ![Use Visual Studio Code for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | ![Use VS for C modules on Linux ARM64](./media/tutorial-c-module/green-check.png) |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
The following steps create an IoT Edge module project for C by using Visual Stud
Create a C solution template that you can customize with your own code.
-1. Select **View** > **Command Palette** to open the VS Code command palette.
+1. Select **View** > **Command Palette** to open the Visual Studio Code command palette.
2. In the command palette, type and run the command **Azure: Sign in** and follow the instructions to sign in your Azure account. If you've already signed in, you can skip this step.
Create a C solution template that you can customize with your own code.
| Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. | | Select module template | Choose **C Module**. | | Provide a module name | Name your module **CModule**. |
The environment file stores the credentials for your container registry and shar
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the .env file.
+1. In the Visual Studio Code explorer, open the .env file.
2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. 3. Save this file.
The default module code receives messages on an input queue and passes them alon
1. Save the main.c file.
-1. In the VS Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
1. Add the CModule module twin to the deployment manifest. Insert the following JSON content at the bottom of the `moduleContent` section, after the `$edgeHub` module twin:
The default module code receives messages on an input queue and passes them alon
In the previous section, you created an IoT Edge solution and added code to the CModule that will filter out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
-1. Open the VS Code terminal by selecting **View** > **Terminal**.
+1. Open the Visual Studio Code terminal by selecting **View** > **Terminal**.
2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
In the previous section, you created an IoT Edge solution and added code to the
You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
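For reference, a hedged sketch of the `--password-stdin` form with placeholder registry values:

```bash
# Placeholder registry name and credentials; the password is read from stdin
# instead of appearing in the shell command line or history.
echo "<registry-password>" | docker login myregistry.azurecr.io --username myregistry --password-stdin
```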
-3. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
+3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
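Roughly speaking (module name, tag, and registry are placeholders, and the exact dockerfile path depends on your target architecture), the second and third operations amount to:

```bash
# Placeholder names; build the module image for AMD64 and push it to the registry.
docker build -f ./modules/CModule/Dockerfile.amd64 -t myregistry.azurecr.io/cmodule:0.0.1-amd64 ./modules/CModule
docker push myregistry.azurecr.io/cmodule:0.0.1-amd64
```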
iot-edge Tutorial Csharp Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-csharp-module.md
Use the following table to understand your options for developing and deploying
| C# | Visual Studio Code | Visual Studio | | -- | | - |
-| **Linux AMD64** | ![C# modules for LinuxAMD64 in VS Code](./medi64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM32** | ![C# modules for LinuxARM32 in VS Code](./media/tutorial-c-module/green-check.png) | ![C# modules for LinuxARM32 in Visual Studio](./media/tutorial-c-module/green-check.png) |
-| **Linux ARM64** | ![C# modules for LinuxARM64 in VS Code](./media/tutorial-c-module/green-check.png) | ![C# modules for LinuxARM64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
+| **Linux AMD64** | ![C# modules for LinuxAMD64 in Visual Studio Code](./medi64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
+| **Linux ARM32** | ![C# modules for LinuxARM32 in Visual Studio Code](./media/tutorial-c-module/green-check.png) | ![C# modules for LinuxARM32 in Visual Studio](./media/tutorial-c-module/green-check.png) |
+| **Linux ARM64** | ![C# modules for LinuxARM64 in Visual Studio Code](./media/tutorial-c-module/green-check.png) | ![C# modules for LinuxARM64 in Visual Studio](./media/tutorial-c-module/green-check.png) |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment, [Develop an IoT Edge module using Linux containers](tutorial-develop-for-linux.md). After completing that tutorial, you already should have the following prerequisites:
The following steps create an IoT Edge module project for C# by using Visual Stu
Create a C# solution template that you can customize with your own code.
-1. In Visual Studio Code, select **View** > **Command Palette** to open the VS Code command palette.
+1. In Visual Studio Code, select **View** > **Command Palette** to open the Visual Studio Code command palette.
2. In the command palette, enter and run the command **Azure: Sign in** and follow the instructions to sign in to your Azure account. If you're already signed in, you can skip this step.
Create a C# solution template that you can customize with your own code.
| Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. | | Select module template | Choose **C# Module**. | | Provide a module name | Name your module **CSharpModule**. |
The environment file stores the credentials for your container registry and shar
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the **.env** file.
+1. In the Visual Studio Code explorer, open the **.env** file.
2. Update the fields with the **username** and **password** values from your Azure container registry. 3. Save this file.
Currently, Visual Studio Code can develop C# modules for Linux AMD64 and Linux A
### Update the module with custom code
-1. In the VS Code explorer, open **modules** > **CSharpModule** > **Program.cs**.
+1. In the Visual Studio Code explorer, open **modules** > **CSharpModule** > **Program.cs**.
1. At the top of the **CSharpModule** namespace, add three **using** statements for types that are used later:
Currently, Visual Studio Code can develop C# modules for Linux AMD64 and Linux A
1. Save the Program.cs file.
-1. In the VS Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+1. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
1. Since we changed the name of the endpoint that the module listens on, we also need to update the routes in the deployment manifest so that the edgeHub sends messages to the new endpoint.
Currently, Visual Studio Code can develop C# modules for Linux AMD64 and Linux A
In the previous section, you created an IoT Edge solution and added code to the CSharpModule. The new code filters out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
-1. Open the VS Code integrated terminal by selecting **View** > **Terminal**.
+1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
1. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
In the previous section, you created an IoT Edge solution and added code to the
You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-1. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
+1. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
iot-edge Tutorial Deploy Custom Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-custom-vision.md
Now you have the files for a container version of your image classifier on your
A solution is a logical way of developing and organizing multiple modules for a single IoT Edge deployment. A solution contains code for one or more modules as well as the deployment manifest that declares how to configure them on an IoT Edge device.
-1. In Visual Studio Code, select **View** > **Command Palette** to open the VS Code command palette.
+1. In Visual Studio Code, select **View** > **Command Palette** to open the Visual Studio Code command palette.
1. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. In the command palette, provide the following information to create your solution: | Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution, like **CustomVisionSolution**, or accept the default. | | Select module template | Choose **Python Module**. | | Provide a module name | Name your module **classifier**.<br><br>It's important that this module name be lowercase. IoT Edge is case-sensitive when referring to modules, and this solution uses a library that formats all requests in lowercase. |
The environment file stores the credentials for your container registry and shar
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the .env file.
+1. In the Visual Studio Code explorer, open the .env file.
2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. 3. Save this file.
In this section, you add a new module to the same CustomVisionSolution and provi
| Provide a module name | Name your module **cameraCapture** | | Provide Docker image repository for the module | Replace **localhost:5000** with the **Login server** value for your Azure container registry.<br><br>The final string looks like **\<registryname\>.azurecr.io/cameracapture**. |
- The VS Code window loads your new module in the solution workspace, and updates the deployment.template.json file. Now you should see two module folders: classifier and cameraCapture.
+ The Visual Studio Code window loads your new module in the solution workspace, and updates the deployment.template.json file. Now you should see two module folders: classifier and cameraCapture.
2. Open the **main.py** file in the **modules** / **cameraCapture** folder.
The IoT Edge extension for Visual Studio Code provides a template in each IoT Ed
With both modules created and the deployment manifest template configured, you're ready to build the container images and push them to your container registry.
-Once the images are in your registry, you can deploy the solution to an IoT Edge device. You can set modules on a device through the IoT Hub, but you can also access your IoT Hub and devices through Visual Studio Code. In this section, you set up access to your IoT Hub then use VS Code to deploy your solution to your IoT Edge device.
+Once the images are in your registry, you can deploy the solution to an IoT Edge device. You can set modules on a device through the IoT Hub, but you can also access your IoT Hub and devices through Visual Studio Code. In this section, you set up access to your IoT Hub then use Visual Studio Code to deploy your solution to your IoT Edge device.
First, build and push your solution to your container registry.
-1. Open the VS Code integrated terminal by selecting **View** > **Terminal**.
+1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
First, build and push your solution to your container registry.
You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-3. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge solution**.
+3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, which is built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
iot-edge Tutorial Deploy Function https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-deploy-function.md
You can use Azure Functions to deploy code that implements your business logic d
> [!div class="checklist"] > > * Use Visual Studio Code to create an Azure Function.
-> * Use VS Code and Docker to create a Docker image and publish it to a container registry.
+> * Use Visual Studio Code and Docker to create a Docker image and publish it to a container registry.
> * Deploy the module from the container registry to your IoT Edge device. > * View filtered data.
Create a C# Function solution template that you can customize with your own code
1. Open Visual Studio Code on your development machine.
-2. Open the VS Code command palette by selecting **View** > **Command Palette**.
+2. Open the Visual Studio Code command palette by selecting **View** > **Command Palette**.
3. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts in the command palette to create your solution. | Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution, like **FunctionSolution**, or accept the default. | | Select module template | Choose **Azure Functions - C#**. | | Provide a module name | Name your module **CSharpFunction**. |
The environment file stores the credentials for your container registry and shar
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the .env file.
+1. In the Visual Studio Code explorer, open the .env file.
2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. 3. Save this file.
Let's add some additional code so that the module processes the messages at the
In the previous section, you created an IoT Edge solution and modified the **CSharpFunction** to filter out messages with reported machine temperatures below the acceptable threshold. Now you need to build the solution as a container image and push it to your container registry.
-1. Open the VS Code integrated terminal by selecting **View** > **Terminal**.
+1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
In the previous section, you created an IoT Edge solution and modified the **CSh
You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-3. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
+3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, which is built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
Visual Studio Code outputs a success message when your container image is pushed
## Deploy and run the solution
-You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstarts. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Edge and IoT Hub for VS Code that was listed in the prerequisites. Install the extension now, if you didn't already.
+You can use the Azure portal to deploy your Function module to an IoT Edge device like you did in the quickstarts. You can also deploy and monitor modules from within Visual Studio Code. The following sections use the Azure IoT Edge and IoT Hub extensions for Visual Studio Code that are listed in the prerequisites. Install the extensions now if you didn't already.
1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
You can use the Azure portal to deploy your Function module to an IoT Edge devic
It may take a few moments for the new modules to show up. Your IoT Edge device has to retrieve its new deployment information from IoT Hub, start the new containers, and then report the status back to IoT Hub.
- ![View deployed modules in VS Code](./media/tutorial-deploy-function/view-modules.png)
+ ![View deployed modules in Visual Studio Code](./media/tutorial-deploy-function/view-modules.png)
## View the generated data
iot-edge Tutorial Develop For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-develop-for-linux.md
Use the Docker documentation to install on your development machine:
* Read [About Docker CE](https://docs.docker.com/install/) for installation information on several Linux platforms. * For the Windows Subsystem for Linux (WSL), install Docker Desktop for Windows.
-## Set up VS Code and tools
+## Set up Visual Studio Code and tools
Use the IoT extensions for Visual Studio Code to develop IoT Edge modules. These extensions provide project templates, automate the creation of the deployment manifest, and allow you to monitor and manage IoT Edge devices. In this section, you install Visual Studio Code and the IoT extension, then set up your Azure account to manage IoT Hub resources from within Visual Studio Code.
In the Visual Studio Code command palette, search for and select **Azure IoT Edg
| Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. | | Select module template | Choose **C# Module**. | | Provide a module name | Accept the default **SampleModule**. |
Visual Studio Code now has access to your container registry, so it's time to tu
![View both image versions in container registry](./media/tutorial-develop-for-linux/view-repository-versions.png)
-<!--Alternative steps: Use VS Code Docker tools to view ACR images with tags-->
+<!--Alternative steps: Use Visual Studio Code Docker tools to view ACR images with tags-->
### Troubleshoot
iot-edge Tutorial Java Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-java-module.md
Use the following table to understand your options for developing and deploying
| Java | Visual Studio Code | Visual Studio 2017/2019 | | - | | |
-| **Linux AMD64** | ![Use VS Code for Java modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM32** | ![Use VS Code for Java modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM64** | ![Use VS Code for Java modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
+| **Linux AMD64** | ![Use Visual Studio Code for Java modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM32** | ![Use Visual Studio Code for Java modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM64** | ![Use Visual Studio Code for Java modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules for Linux devices](tutorial-develop-for-linux.md). By completing either of those tutorials, you should have the following prerequisites in place:
The following steps create an IoT Edge module project that's based on the Azure
Create a Java solution template that you can customize with your own code.
-1. In Visual Studio Code, select **View** > **Command Palette** to open the VS Code command palette.
+1. In Visual Studio Code, select **View** > **Command Palette** to open the Visual Studio Code command palette.
2. In the command palette, enter and run the command **Azure IoT Edge: New IoT Edge solution**. Follow the prompts in the command palette to create your solution. | Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. | | Select module template | Choose **Java Module**. | | Provide a module name | Name your module **JavaModule**. |
Create a Java solution template that you can customize with your own code.
![Provide Docker image repository](./media/tutorial-java-module/repository.png)
-If it's your first time creating Java module, it might take several minutes to download the maven packages. When the solution is ready, the VS Code window loads your IoT Edge solution workspace. The solution workspace contains five top-level components:
+If it's your first time creating a Java module, it might take several minutes to download the Maven packages. When the solution is ready, the Visual Studio Code window loads your IoT Edge solution workspace. The solution workspace contains five top-level components:
* The **modules** folder contains the Java code for your module and the Docker files to build your module as a container image. * The **\.env** file stores your container registry credentials.
The environment file stores the credentials for your container registry and shar
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the .env file.
+1. In the Visual Studio Code explorer, open the .env file.
2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. 3. Save this file.
Currently, Visual Studio Code can develop Java modules for Linux AMD64 and Linux
### Update the module with custom code
-1. In the VS Code explorer, open **modules** > **JavaModule** > **src** > **main** > **java** > **com** > **edgemodule** > **App.java**.
+1. In the Visual Studio Code explorer, open **modules** > **JavaModule** > **src** > **main** > **java** > **com** > **edgemodule** > **App.java**.
2. Add the following code at the top of the file to import new referenced classes.
Currently, Visual Studio Code can develop Java modules for Linux AMD64 and Linux
7. Save the App.java file.
-8. In the VS Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+8. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
9. Add the **JavaModule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **moduleContent** section, after the **$edgeHub** module twin:
Currently, Visual Studio Code can develop Java modules for Linux AMD64 and Linux
In the previous section, you created an IoT Edge solution and added code to the **JavaModule** to filter out messages where the reported machine temperature is below the acceptable limit. Now, build the solution as a container image and push it to your container registry.
-1. Open the VS Code integrated terminal by selecting **View** > **Terminal**.
+1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
In the previous section, you created an IoT Edge solution and added code to the
You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-3. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
+3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, which is built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
iot-edge Tutorial Node Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-node-module.md
Use the following table to understand your options for developing and deploying
| Node.js | Visual Studio Code | Visual Studio 2022 | | - | | |
-| **Linux AMD64** | ![Use VS Code for Node.js modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM32** | ![Use VS Code for Node.js modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM64** | ![Use VS Code for Node.js modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
+| **Linux AMD64** | ![Use Visual Studio Code for Node.js modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM32** | ![Use Visual Studio Code for Node.js modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM64** | ![Use Visual Studio Code for Node.js modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
The following steps show you how to create an IoT Edge Node.js module using Visu
Use **npm** to create a Node.js solution template that you can build on top of.
-1. In Visual Studio Code, select **View** > **Integrated Terminal** to open the VS Code integrated terminal.
+1. In Visual Studio Code, select **View** > **Integrated Terminal** to open the Visual Studio Code integrated terminal.
2. In the integrated terminal, enter the following command to install **yeoman** and the generator for Node.js Azure IoT Edge module:
Use **npm** to create a Node.js solution template that you can build on top of.
npm install -g yo generator-azure-iot-edge-module ```
-3. Select **View** > **Command Palette** to open the VS Code command palette.
+3. Select **View** > **Command Palette** to open the Visual Studio Code command palette.
4. In the command palette, type and run the command **Azure: Sign in** and follow the instructions to sign in to your Azure account. If you've already signed in, you can skip this step.
Use **npm** to create a Node.js solution template that you can build on top of.
| Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. | | Select module template | Choose **Node.js Module**. | | Provide a module name | Name your module **NodeModule**. |
The environment file stores the credentials for your container repository and sh
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the **.env** file.
+1. In the Visual Studio Code explorer, open the **.env** file.
2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. 3. Save this file.
Currently, Visual Studio Code can develop Node.js modules for Linux AMD64 and Li
Each template comes with sample code included, which takes simulated sensor data from the **SimulatedTemperatureSensor** module and routes it to IoT Hub. In this section, add code to have NodeModule analyze the messages before sending them.
-1. In the VS Code explorer, open **modules** > **NodeModule** > **app.js**.
+1. In the Visual Studio Code explorer, open **modules** > **NodeModule** > **app.js**.
2. Add a temperature threshold variable below the required node modules. The temperature threshold sets the value that the measured temperature must exceed in order for the data to be sent to IoT Hub.
Each template comes with sample code included, which takes simulated sensor data
6. Save the app.js file.
-7. In the VS Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+7. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
8. Add the NodeModule module twin to the deployment manifest. Insert the following JSON content at the bottom of the `moduleContent` section, after the `$edgeHub` module twin:
Each template comes with sample code included, which takes simulated sensor data
In the previous section, you created an IoT Edge solution and added code to the NodeModule that will filter out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
-1. Open the VS Code integrated terminal by selecting **View** > **Terminal**.
+1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
In the previous section, you created an IoT Edge solution and added code to the
You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-3. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
+3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
iot-edge Tutorial Python Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-python-module.md
Use the following table to understand your options for developing and deploying
| Python | Visual Studio Code | Visual Studio 2017/2019 | | - | | |
-| **Linux AMD64** | ![Use VS Code for Python modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM32** | ![Use VS Code for Python modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
-| **Linux ARM64** | ![Use VS Code for Python modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
+| **Linux AMD64** | ![Use Visual Studio Code for Python modules on Linux AMD64](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM32** | ![Use Visual Studio Code for Python modules on Linux ARM32](./media/tutorial-c-module/green-check.png) | |
+| **Linux ARM64** | ![Use Visual Studio Code for Python modules on Linux ARM64](./media/tutorial-c-module/green-check.png) | |
Before beginning this tutorial, you should have gone through the previous tutorial to set up your development environment for Linux container development: [Develop IoT Edge modules using Linux containers](tutorial-develop-for-linux.md). By completing that tutorial, you should have the following prerequisites in place:
The following steps create an IoT Edge Python module by using Visual Studio Code
Create a Python solution template that you can customize with your own code.
-1. In Visual Studio Code, select **View** > **Command Palette** to open the VS Code command palette.
+1. In Visual Studio Code, select **View** > **Command Palette** to open the Visual Studio Code command palette.
2. In the command palette, enter and run the command **Azure: Sign in** and follow the instructions to sign in to your Azure account. If you're already signed in, you can skip this step.
Create a Python solution template that you can customize with your own code.
| Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution or accept the default **EdgeSolution**. | | Select module template | Choose **Python Module**. | | Provide a module name | Name your module **PythonModule**. |
The environment file stores the credentials for your container repository and sh
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the **.env** file.
+1. In the Visual Studio Code explorer, open the **.env** file.
2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. 3. Save the .env file.
Currently, Visual Studio Code can develop Python modules for Linux AMD64 and Lin
Each template includes sample code, which takes simulated sensor data from the **SimulatedTemperatureSensor** module and routes it to the IoT hub. In this section, add the code that expands the **PythonModule** to analyze the messages before sending them.
-1. In the VS Code explorer, open **modules** > **PythonModule** > **main.py**.
+1. In the Visual Studio Code explorer, open **modules** > **PythonModule** > **main.py**.
2. At the top of the **main.py** file, import the **json** library:
Each template includes sample code, which takes simulated sensor data from the *
7. Save the main.py file.
-8. In the VS Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
+8. In the Visual Studio Code explorer, open the **deployment.template.json** file in your IoT Edge solution workspace.
9. Add the **PythonModule** module twin to the deployment manifest. Insert the following JSON content at the bottom of the **moduleContent** section, after the **$edgeHub** module twin:
Each template includes sample code, which takes simulated sensor data from the *
In the previous section, you created an IoT Edge solution and added code to the PythonModule that will filter out messages where the reported machine temperature is within the acceptable limits. Now you need to build the solution as a container image and push it to your container registry.
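As a plain-Python sketch of that filtering decision (the payload shape, field names, and threshold value here are assumptions, and the real module also handles IoT Edge message routing and module twin updates), the core check looks roughly like this:

```python
import json

# Assumed threshold; the module keeps a similar value in a module-level variable
# that can be updated through the module twin's desired properties.
TEMPERATURE_THRESHOLD = 25


def should_forward(message_body: bytes) -> bool:
    """Return True when the simulated machine temperature exceeds the threshold."""
    data = json.loads(message_body)
    # Field names mirror the SimulatedTemperatureSensor payload; treat them as assumptions.
    machine_temperature = data["machine"]["temperature"]
    return machine_temperature > TEMPERATURE_THRESHOLD


# Quick local check with a fabricated payload.
sample = json.dumps({"machine": {"temperature": 31.5, "pressure": 2.4}}).encode("utf-8")
print("forward to IoT Hub:", should_forward(sample))
```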
-1. Open the VS Code integrated terminal by selecting **View** > **Terminal**.
+1. Open the Visual Studio Code integrated terminal by selecting **View** > **Terminal**.
2. Sign in to Docker by entering the following command in the terminal. Sign in with the username, password, and login server from your Azure container registry. You can retrieve these values from the **Access keys** section of your registry in the Azure portal.
In the previous section, you created an IoT Edge solution and added code to the
You may receive a security warning recommending the use of `--password-stdin`. While that best practice is recommended for production scenarios, it's outside the scope of this tutorial. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) reference.
-3. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
+3. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge Solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
iot-edge Tutorial Store Data Sql Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-store-data-sql-server.md
The following steps show you how to create an IoT Edge function using Visual Stu
1. Open Visual Studio Code.
-2. Open the VS Code command palette by selecting **View** > **Command palette**.
+2. Open the Visual Studio Code command palette by selecting **View** > **Command palette**.
3. In the command palette, type and run the command **Azure IoT Edge: New IoT Edge solution**, and then provide the following information to create your solution: | Field | Value | | -- | -- |
- | Select folder | Choose the location on your development machine for VS Code to create the solution files. |
+ | Select folder | Choose the location on your development machine for Visual Studio Code to create the solution files. |
| Provide a solution name | Enter a descriptive name for your solution, like **SqlSolution**, or accept the default. | | Select module template | Choose **Azure Functions - C#**. | | Provide a module name | Name your module **sqlFunction**. | | Provide Docker image repository for the module | An image repository includes the name of your container registry and the name of your container image. Your container image is prepopulated from the last step. Replace **localhost:5000** with the **Login server** value from your Azure container registry. You can retrieve the Login server from the Overview page of your container registry in the Azure portal. <br><br>The final string looks like \<registry name\>.azurecr.io/sqlfunction. |
- The VS Code window loads your IoT Edge solution workspace.
+ The Visual Studio Code window loads your IoT Edge solution workspace.
### Add your registry credentials
The environment file stores the credentials for your container registry and shar
The IoT Edge extension tries to pull your container registry credentials from Azure and populate them in the environment file. Check to see if your credentials are already included. If not, add them now:
-1. In the VS Code explorer, open the .env file.
+1. In the Visual Studio Code explorer, open the .env file.
2. Update the fields with the **username** and **password** values that you copied from your Azure container registry. 3. Save this file.
You need to select which architecture you're targeting with each solution, becau
### Update the module with custom code
-1. In the VS Code explorer, open **modules** > **sqlFunction** > **sqlFunction.csproj**.
+1. In the Visual Studio Code explorer, open **modules** > **sqlFunction** > **sqlFunction.csproj**.
2. Find the group of package references, and add a new one to include SqlClient.
In the previous sections, you created a solution with one module, and then added
You might see a security warning recommending the use of the --password-stdin parameter. While its use is outside the scope of this article, we recommend following this best practice. For more information, see the [docker login](https://docs.docker.com/engine/reference/commandline/login/#provide-a-password-using-stdin) command reference.
-1. In the VS Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge solution**.
+1. In the Visual Studio Code explorer, right-click the **deployment.template.json** file and select **Build and Push IoT Edge solution**.
The build and push command starts three operations. First, it creates a new folder in the solution called **config** that holds the full deployment manifest, which is built out of information in the deployment template and other solution files. Second, it runs `docker build` to build the container image based on the appropriate dockerfile for your target architecture. Then, it runs `docker push` to push the image repository to your container registry.
In the previous sections, you created a solution with one module, and then added
## Deploy the solution to a device
-You can set modules on a device through the IoT Hub, but you can also access your IoT Hub and devices through Visual Studio Code. In this section, you set up access to your IoT Hub then use VS Code to deploy your solution to your IoT Edge device.
+You can set modules on a device through the IoT Hub, but you can also access your IoT Hub and devices through Visual Studio Code. In this section, you set up access to your IoT Hub and then use Visual Studio Code to deploy your solution to your IoT Edge device.
1. In the Visual Studio Code explorer, under the **Azure IoT Hub** section, expand **Devices** to see your list of IoT devices.
iot-hub-device-update Device Update Agent Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-provisioning.md
# Device Update Agent Provisioning
-The Device Update Module agent can run alongside other system processes and [IoT Edge modules](../iot-edge/iot-edge-modules.md) that connect to your IoT Hub as part of the same logical device. This section describes how to provision the Device Update agent as a module identity.
+The Device Update Module agent can run alongside other system processes and [IoT Edge modules](../iot-edge/iot-edge-modules.md) that connect to your IoT Hub as part of the same logical device. This section describes how to provision the Device Update agent as a module identity.
## Changes to Device Update agent at GA release
If you are migrating from a device level agent to adding the agent as a Module i
The following over-the-air update types are currently supported with Device Update for IoT devices: * Linux devices (IoT Edge and Non-IoT Edge devices):
- * [Image )
+ * [Image )
* [Package update](device-update-ubuntu-agent.md) * [Proxy update for downstream devices](device-update-howto-proxy-updates.md)
-
+ * Constrained devices: * AzureRTOS Device Update agent samples: [Device Update for Azure IoT Hub tutorial for Azure-Real-Time-Operating-System](device-update-azure-real-time-operating-system.md)
-* Disconnected devices:
+* Disconnected devices:
* [Understand support for disconnected device update](connected-cache-disconnected-device-update.md)
If you're setting up the IoT device/IoT Edge device for [package based updates](
```shell curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list ```
-
+ 1. Copy the generated list to the sources.list.d directory. ```shell sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/ ```
-
+ 1. Install the Microsoft GPG public key. ```shell
If you're setting up the IoT device/IoT Edge device for [package based updates](
## How to provision the Device Update agent as a Module Identity
-This section describes how to provision the Device Update agent as a module identity on
+This section describes how to provision the Device Update agent as a module identity on
* IoT Edge enabled devices, or * Non-Edge IoT devices, or * Other IoT devices.
Follow these instructions to provision the Device Update agent on [IoT Edge enab
1. Install the Device Update package update agent.
- - For latest agent versions from packages.miscrosoft.com: Update package lists on your device and install the Device Update agent package and its dependencies using:
+ - For latest agent versions from packages.microsoft.com: Update package lists on your device and install the Device Update agent package and its dependencies using:
```shell sudo apt-get update
Follow these instructions to provision the Device Update agent on [IoT Edge enab
```shell sudo apt-get install -y ./"<PATH TO FILE>"/"<.DEB FILE NAME>" ```
- - If you are setting up a [MCC for a disconnected device scenario](connected-cache-disconnected-device-update.md), then install the Delivery Optmization Apt plugin:
+ - If you are setting up a [MCC for a disconnected device scenario](connected-cache-disconnected-device-update.md), then install the Delivery Optimization APT plugin:
```shell sudo apt-get install deliveryoptimization-plugin-apt
Follow these instructions to provision the Device Update agent on [IoT Edge enab
Follow these instructions to provision the Device Update agent on your IoT Linux devices.
-1. Install the IoT Identity Service and add the latest version to your IoT device by following instrucions in [Installing the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/installation.html#install-from-packagesmicrosoftcom).
+1. Install the IoT Identity Service and add the latest version to your IoT device by following instructions in [Installing the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/installation.html#install-from-packagesmicrosoftcom).
2. Configure the IoT Identity Service by following the instructions in [Configuring the Azure IoT Identity Service](https://azure.github.io/iot-identity-service/configuration.html).
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-python.md
Title: Quickstart – Azure Key Vault Python client library – manage certifica
description: Learn how to create, retrieve, and delete certificates from an Azure key vault using the Python client library Previously updated : 01/22/2022 Last updated : 02/03/2023 ms.devlang: python-+ # Quickstart: Azure Key Vault certificate client library for Python
-Get started with the Azure Key Vault certificate client library for Python. Follow the steps below to install the package and try out example code for basic tasks. By using Key Vault to store certificates, you avoid storing certificates in your code, which increases the security of your app.
+Get started with the Azure Key Vault certificate client library for Python. Follow these steps to install the package and try out example code for basic tasks. By using Key Vault to store certificates, you avoid storing certificates in your code, which increases the security of your app.
[API reference documentation](/python/api/overview/azure/keyvault-certificates-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-certificates) | [Package (Python Package Index)](https://pypi.org/project/azure-keyvault-certificates) ## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Python 2.7+ or 3.6+](/azure/developer/python/configure-local-development-environment)
+- [Python 3.7+](/azure/developer/python/configure-local-development-environment)
- [Azure CLI](/cli/azure/install-azure-cli)
-This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) in a Linux terminal window.
## Set up your local environment
-This quickstart is using Azure Identity library with Azure CLI to authenticate user to Azure Services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls, for more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme)
+This quickstart uses the Azure Identity library with Azure CLI or Azure PowerShell to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
### Sign in to Azure
+### [Azure CLI](#tab/azure-cli)
+ 1. Run the `login` command. ```azurecli-interactive
This quickstart is using Azure Identity library with Azure CLI to authenticate u
2. Sign in with your account credentials in the browser.
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Run the `Connect-AzAccount` command.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+ If PowerShell can open your default browser, it will do so and load an Azure sign-in page.
+
+ Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the
+ authorization code displayed in your terminal.
+
+2. Sign in with your account credentials in the browser.
+++ ### Install the packages 1. In a terminal or command prompt, create a suitable project folder, and then create and activate a Python virtual environment as described on [Use Python virtual environments](/azure/developer/python/configure-local-development-environment?tabs=cmd#use-python-virtual-environments)
This quickstart is using Azure Identity library with Azure CLI to authenticate u
Create an access policy for your key vault that grants certificate permission to your user account
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --certificate-permissions delete get list create ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -UserPrincipalName "user@domain.com" -PermissionsToCertificates delete,get,list,create
+```
+++ ## Create the sample code The Azure Key Vault certificate client library for Python allows you to manage certificates. The following code sample demonstrates how to create a client, set a certificate, retrieve a certificate, and delete a certificate.
Create a file named *kv_certificates.py* that contains this code.
```python import os
-from azure.keyvault.certificates import CertificateClient, CertificatePolicy,CertificateContentType, WellKnownIssuerNames
+from azure.keyvault.certificates import CertificateClient, CertificatePolicy
from azure.identity import DefaultAzureCredential keyVaultName = os.environ["KEY_VAULT_NAME"]
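# --- Hedged sketch: how the rest of this sample typically continues, based on the calls
# --- shown in the "Code details" section later in this quickstart. The certificate name
# --- and print statement are illustrative assumptions.
certificateName = "myCertificate"

KVUri = f"https://{keyVaultName}.vault.azure.net"

credential = DefaultAzureCredential()
client = CertificateClient(vault_url=KVUri, credential=credential)

# Create a certificate with the default policy and wait for the operation to finish.
policy = CertificatePolicy.get_default()
poller = client.begin_create_certificate(certificate_name=certificateName, policy=policy)
certificate = poller.result()

# Retrieve the certificate.
retrieved_certificate = client.get_certificate(certificateName)
print(f"Retrieved certificate '{retrieved_certificate.name}'")

# Delete the certificate and wait for the deletion to complete.
poller = client.begin_delete_certificate(certificateName)
deleted_certificate = poller.result()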
Make sure the code in the previous section is in a file named *kv_certificates.p
python kv_certificates.py ``` -- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` command](#grant-access-to-your-key-vault).-- Re-running the code with the same key name may produce the error, "(Conflict) Certificate \<name\> is currently in a deleted but recoverable state." Use a different key name.
+- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` or `Set-AzKeyVaultAccessPolicy` command](#grant-access-to-your-key-vault).
+- Rerunning the code with the same key name may produce the error, "(Conflict) Certificate \<name\> is currently in a deleted but recoverable state." Use a different key name.
## Code details ### Authenticate and create a client
-In this quickstart, logged in user is used to authenticate to key vault, which is preferred method for local development. For applications deployed to Azure, managed identity should be assigned to App Service or Virtual Machine, for more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) class provided by the [Azure Identity client library](/python/api/overview/azure/identity-readme) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
+
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
-In below example, the name of your key vault is expanded to the key vault URI, in the format `https://\<your-key-vault-name\>.vault.azure.net`. This example is using ['DefaultAzureCredential()'](/python/api/azure-identity/azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/python/api/overview/azure/identity-readme).
+In the example code, the name of your key vault is expanded to the key vault URI, in the format `https://\<your-key-vault-name>.vault.azure.net`.
```python credential = DefaultAzureCredential()
client = CertificateClient(vault_url=KVUri, credential=credential)
### Save a certificate
-Once you've obtained the client object for the key vault, you can create a certificate using the [begin_create_certificate](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificateclient?#begin-create-certificate-certificate-name--policy-kwargs-) method:
+Once you've obtained the client object for the key vault, you can create a certificate using the [begin_create_certificate](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificateclient#azure-keyvault-certificates-certificateclient-begin-create-certificate) method:
```python policy = CertificatePolicy.get_default()
poller = client.begin_create_certificate(certificate_name=certificateName, polic
certificate = poller.result() ```
-Here, the certificate requires a policy obtained with the [CertificatePolicy.get_default](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificatepolicy?#get-default--) method.
+Here, the certificate requires a policy obtained with the [CertificatePolicy.get_default](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificatepolicy#azure-keyvault-certificates-certificatepolicy-get-default) method.
Calling a `begin_create_certificate` method generates an asynchronous call to the Azure REST API for the key vault. The asynchronous call returns a poller object. To wait for the result of the operation, call the poller's `result` method.
-When handling the request, Azure authenticates the caller's identity (the service principal) using the credential object you provided to the client.
+When Azure handles the request, it authenticates the caller's identity (the service principal) using the credential object you provided to the client.
### Retrieve a certificate
-To read a certificate from Key Vault, use the [get_certificate](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificateclient?#get-certificate-certificate-name-kwargs-) method:
+To read a certificate from Key Vault, use the [get_certificate](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificateclient#azure-keyvault-certificates-certificateclient-get-certificate) method:
```python retrieved_certificate = client.get_certificate(certificateName) ```
-You can also verify that the certificate has been set with the Azure CLI command [az keyvault certificate show](/cli/azure/keyvault/certificate?#az-keyvault-certificate-show).
+You can also verify that the certificate has been set with the Azure CLI command [az keyvault certificate show](/cli/azure/keyvault/certificate?#az-keyvault-certificate-show) or the Azure PowerShell cmdlet [Get-AzKeyVaultCertificate](/powershell/module/az.keyvault/get-azkeyvaultcertificate).
### Delete a certificate
-To delete a certificate, use the [begin_delete_certificate](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificateclient?#begin-delete-certificate-certificate-name-kwargs-) method:
+To delete a certificate, use the [begin_delete_certificate](/python/api/azure-keyvault-certificates/azure.keyvault.certificates.certificateclient#azure-keyvault-certificates-certificateclient-begin-delete-certificate) method:
```python poller = client.begin_delete_certificate(certificateName)
deleted_certificate = poller.result()
The `begin_delete_certificate` method is asynchronous and returns a poller object. Calling the poller's `result` method waits for its completion.
-You can verify that the certificate is deleted with the Azure CLI command [az keyvault certificate show](/cli/azure/keyvault/certificate?#az-keyvault-certificate-show).
+You can verify that the certificate is deleted with the Azure CLI command [az keyvault certificate show](/cli/azure/keyvault/certificate#az-keyvault-certificate-show) or the Azure PowerShell cmdlet [Get-AzKeyVaultCertificate](/powershell/module/az.keyvault/get-azkeyvaultcertificate).
Once deleted, a certificate remains in a deleted but recoverable state for a time. If you run the code again, use a different certificate name.
If you want to also experiment with [secrets](../secrets/quick-create-python.md)
Otherwise, when you're finished with the resources created in this article, use the following command to delete the resource group and all its contained resources:
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli az group delete --resource-group myResourceGroup ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name myResourceGroup
+```
+++ ## Next steps - [Overview of Azure Key Vault](../general/overview.md)
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-python.md
Title: Quickstart – Azure Key Vault Python client library – manage keys
description: Learn how to create, retrieve, and delete keys from an Azure key vault using the Python client library Previously updated : 01/22/2022 Last updated : 02/03/2023 ms.devlang: python-+ # Quickstart: Azure Key Vault keys client library for Python
-Get started with the Azure Key Vault client library for Python. Follow the steps below to install the package and try out example code for basic tasks. By using Key Vault to store cryptographic keys, you avoid storing such keys in your code, which increases the security of your app.
+Get started with the Azure Key Vault client library for Python. Follow these steps to install the package and try out example code for basic tasks. By using Key Vault to store cryptographic keys, you avoid storing such keys in your code, which increases the security of your app.
[API reference documentation](/python/api/overview/azure/keyvault-keys-readme) | [Library source code](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/keyvault/azure-keyvault-keys) | [Package (Python Package Index)](https://pypi.org/project/azure-keyvault-keys/) ## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Python 2.7+ or 3.6+](/azure/developer/python/configure-local-development-environment)
+- [Python 3.7+](/azure/developer/python/configure-local-development-environment)
- [Azure CLI](/cli/azure/install-azure-cli)
-This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) in a Linux terminal window.
## Set up your local environment
-This quickstart is using Azure Identity library with Azure CLI to authenticate user to Azure Services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls, for more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
+This quickstart uses the Azure Identity library with Azure CLI or Azure PowerShell to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/python/api/overview/azure/identity-readme).
### Sign in to Azure
+### [Azure CLI](#tab/azure-cli)
+ 1. Run the `login` command. ```azurecli-interactive
This quickstart is using Azure Identity library with Azure CLI to authenticate u
2. Sign in with your account credentials in the browser.
+### [Azure PowerShell](#tab/azure-powershell)
+
+1. Run the `Connect-AzAccount` command.
+
+ ```azurepowershell
+ Connect-AzAccount
+ ```
+
+ If PowerShell can open your default browser, it will do so and load an Azure sign-in page.
+
+ Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the
+ authorization code displayed in your terminal.
+
+2. Sign in with your account credentials in the browser.
+++ ### Install the packages 1. In a terminal or command prompt, create a suitable project folder, and then create and activate a Python virtual environment as described on [Use Python virtual environments](/azure/developer/python/configure-local-development-environment?tabs=cmd#use-python-virtual-environments).
This quickstart is using Azure Identity library with Azure CLI to authenticate u
### Grant access to your key vault
-Create an access policy for your key vault that grants secret permission to your user account.
+Create an access policy for your key vault that grants key permission to your user account.
+
+### [Azure CLI](#tab/azure-cli)
```azurecli
-az keyvault set-policy --name <<your-unique-keyvault-name> --upn user@domain.com --secret-permissions delete get list set
+az keyvault set-policy --name <your-unique-keyvault-name> --upn user@domain.com --key-permissions get list create delete
+```
+
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Set-AzKeyVaultAccessPolicy -VaultName "<your-unique-keyvault-name>" -UserPrincipalName "user@domain.com" -PermissionsToKeys get,list,create,delete
``` ++ ## Create the sample code The Azure Key Vault key client library for Python allows you to manage cryptographic keys. The following code sample demonstrates how to create a client, set a key, retrieve a key, and delete a key.
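A minimal sketch of such a *kv_keys.py* sample, assuming the key vault name is supplied through the `KEY_VAULT_NAME` environment variable (the key name and print statements are illustrative), looks like this:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.keyvault.keys import KeyClient

keyVaultName = os.environ["KEY_VAULT_NAME"]
KVUri = f"https://{keyVaultName}.vault.azure.net"

credential = DefaultAzureCredential()
client = KeyClient(vault_url=KVUri, credential=credential)

keyName = "myKey"  # use a new name if you rerun the sample

# Create a 2048-bit RSA key.
rsa_key = client.create_rsa_key(keyName, size=2048)
print(f"Created key '{rsa_key.name}' of type '{rsa_key.key_type}'.")

# Retrieve the key.
retrieved_key = client.get_key(keyName)
print(f"Retrieved key '{retrieved_key.name}'.")

# Delete the key and wait for the deletion to complete.
poller = client.begin_delete_key(keyName)
deleted_key = poller.result()
print(f"Deleted key '{deleted_key.name}'.")
```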
Make sure the code in the previous section is in a file named *kv_keys.py*. Then
python kv_keys.py ``` -- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` command](#grant-access-to-your-key-vault).-- Re-running the code with the same key name may produce the error, "(Conflict) Key \<name\> is currently in a deleted but recoverable state." Use a different key name.
+- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` or `Set-AzKeyVaultAccessPolicy` command](#grant-access-to-your-key-vault).
+- Rerunning the code with the same key name may produce the error, "(Conflict) Key \<name\> is currently in a deleted but recoverable state." Use a different key name.
## Code details ### Authenticate and create a client
-In this quickstart, logged in user is used to authenticate to key vault, which is preferred method for local development. For applications deployed to Azure, managed identity should be assigned to App Service or Virtual Machine, for more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) class provided by the [Azure Identity client library](/python/api/overview/azure/identity-readme) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
-In below example, the name of your key vault is expanded to the key vault URI, in the format `https://\<your-key-vault-name\>.vault.azure.net`. This example is using ['DefaultAzureCredential()'](/python/api/azure-identity/azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/python/api/overview/azure/identity-readme).
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
+In the example code, the name of your key vault is expanded using the value of the `KVUri` variable, in the format: "https://\<your-key-vault-name>.vault.azure.net".
```python
credential = DefaultAzureCredential()
client = KeyClient(vault_url=KVUri, credential=credential)
```
## Save a key
-Once you've obtained the client object for the key vault, you can store a key using the [create_rsa_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient?#create-rsa-key-name-kwargs-) method:
+Once you've obtained the client object for the key vault, you can store a key using the [create_rsa_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient#azure-keyvault-keys-keyclient-create-rsa-key) method:
```python
rsa_key = client.create_rsa_key(keyName, size=2048)
```
-You can also use [create_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient?#create-key-name--key-type-kwargs-) or [create_ec_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient?#create-ec-key-name-kwargs-).
+You can also use [create_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient#azure-keyvault-keys-keyclient-create-key) or [create_ec_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient#azure-keyvault-keys-keyclient-create-ec-key).
Calling a `create` method generates a call to the Azure REST API for the key vault.
-When handling the request, Azure authenticates the caller's identity (the service principal) using the credential object you provided to the client.
+When Azure handles the request, it authenticates the caller's identity (the service principal) using the credential object you provided to the client.
## Retrieve a key
-To read a key from Key Vault, use the [get_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient?#get-key-name--version-none-kwargs-) method:
+To read a key from Key Vault, use the [get_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient#azure-keyvault-keys-keyclient-get-key) method:
```python
retrieved_key = client.get_key(keyName)
```
-You can also verify that the key has been set with the Azure CLI command [az keyvault key show](/cli/azure/keyvault/key?#az-keyvault-key-show).
+You can also verify that the key has been set with the Azure CLI command [az keyvault key show](/cli/azure/keyvault/key?#az-keyvault-key-show) or the Azure PowerShell cmdlet [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey).
### Delete a key
-To delete a key, use the [begin_delete_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient?#begin-delete-key-name-kwargs-) method:
+To delete a key, use the [begin_delete_key](/python/api/azure-keyvault-keys/azure.keyvault.keys.keyclient#azure-keyvault-keys-keyclient-begin-delete-key) method:
```python
poller = client.begin_delete_key(keyName)
deleted_key = poller.result()
```
The `begin_delete_key` method is asynchronous and returns a poller object. Calling the poller's `result` method waits for its completion.
-You can verify that the key is deleted with the Azure CLI command [az keyvault key show](/cli/azure/keyvault/key?#az-keyvault-key-show).
+You can verify that the key is deleted with the Azure CLI command [az keyvault key show](/cli/azure/keyvault/key?#az-keyvault-key-show) or the Azure PowerShell cmdlet [Get-AzKeyVaultKey](/powershell/module/az.keyvault/get-azkeyvaultkey).
Once deleted, a key remains in a deleted but recoverable state for a time. If you run the code again, use a different key name.
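If you'd rather reuse the same key name right away instead of waiting out the recovery window, a minimal sketch of purging the deleted key is shown below. This assumes your access policy also grants the purge key permission and that the vault doesn't have purge protection enabled; it isn't part of the original quickstart flow:

```python
# Permanently remove the soft-deleted key so its name can be reused.
# Requires the 'purge' key permission; fails if purge protection is enabled on the vault.
client.purge_deleted_key(keyName)
```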
If you want to also experiment with [certificates](../certificates/quick-create-
Otherwise, when you're finished with the resources created in this article, use the following command to delete the resource group and all its contained resources:
+### [Azure CLI](#tab/azure-cli)
+ ```azurecli az group delete --resource-group myResourceGroup ```
+### [Azure PowerShell](#tab/azure-powershell)
+
+```azurepowershell
+Remove-AzResourceGroup -Name myResourceGroup
+```
---

## Next steps

- [Overview of Azure Key Vault](../general/overview.md)
key-vault Quick Create Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-python.md
Title: Quickstart – Azure Key Vault Python client library – manage secrets
description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the Python client library
Previously updated : 01/22/2022
Last updated : 02/03/2023
ms.devlang: python

# Quickstart: Azure Key Vault secret client library for Python
Get started with the Azure Key Vault secret client library for Python. Follow th
## Prerequisites

- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- [Python 2.7+ or 3.6+](/azure/developer/python/configure-local-development-environment).
+- [Python 3.7+](/azure/developer/python/configure-local-development-environment).
- [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps). This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps) in a Linux terminal window.
This quickstart is using Azure Identity library with Azure CLI or Azure PowerShe
Connect-AzAccount
```
- If the PowerShell can open your default browser, it will do so and load an Azure sign-in page.
+ If PowerShell can open your default browser, it will do so and load an Azure sign-in page.
Otherwise, open a browser page at [https://aka.ms/devicelogin](https://aka.ms/devicelogin) and enter the authorization code displayed in your terminal.
This quickstart is using Azure Identity library with Azure CLI or Azure PowerShe
### Install the packages
-1. In a terminal or command prompt, create a suitable project folder, and then create and activate a Python virtual environment as described on [Use Python virtual environments](/azure/developer/python/configure-local-development-environment?tabs=cmd#use-python-virtual-environments).
+1. In a terminal or command prompt, create a suitable project folder, and then create and activate a Python virtual environment as described in [Use Python virtual environments](/azure/developer/python/configure-local-development-environment#configure-python-virtual-environment).
1. Install the Azure Active Directory identity library:
```terminal
python kv_secrets.py
```

- If you encounter permissions errors, make sure you ran the [`az keyvault set-policy` or `Set-AzKeyVaultAccessPolicy` command](#grant-access-to-your-key-vault).
-- Re-running the code with the same secret name may produce the error, "(Conflict) Secret \<name\> is currently in a deleted but recoverable state." Use a different secret name.
+- Rerunning the code with the same secret name may produce the error, "(Conflict) Secret \<name\> is currently in a deleted but recoverable state." Use a different secret name.
## Code details

### Authenticate and create a client
-In this quickstart, the logged in user is used to authenticate to key vault, which is the preferred method for local development. For applications deployed to Azure, a managed identity should be assigned to App Service or Virtual Machine, for more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/python/api/azure-identity/azure.identity.defaultazurecredential) class provided by the [Azure Identity client library](/python/api/overview/azure/identity-readme) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
-In this example, the name of your key vault is expanded using the value of the "KVUri" variable, in the format: "https://\<your-key-vault-name\>.vault.azure.net". This example is using ['DefaultAzureCredential()'](/python/api/azure-identity/azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/python/api/overview/azure/identity-readme).
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
+
+In the example code, the name of your key vault is expanded using the value of the `KVUri` variable, in the format: "https://\<your-key-vault-name>.vault.azure.net".
```python
credential = DefaultAzureCredential()
client = SecretClient(vault_url=KVUri, credential=credential)
```
### Save a secret
-Once you've obtained the client object for the key vault, you can store a secret using the [set_secret](/python/api/azure-keyvault-secrets/azure.keyvault.secrets.secretclient?#set-secret-name--value-kwargs-) method:
+Once you've obtained the client object for the key vault, you can store a secret using the [set_secret](/python/api/azure-keyvault-secrets/azure.keyvault.secrets.secretclient?#azure-keyvault-secrets-secretclient-set-secret) method:
```python
client.set_secret(secretName, secretValue)
```
Calling `set_secret` generates a call to the Azure REST API for the key vault.
-When handling the request, Azure authenticates the caller's identity (the service principal) using the credential object you provided to the client.
+When Azure handles the request, it authenticates the caller's identity (the service principal) using the credential object you provided to the client.
### Retrieve a secret
-To read a secret from Key Vault, use the [get_secret](/python/api/azure-keyvault-secrets/azure.keyvault.secrets.secretclient?#get-secret-name--version-none-kwargs-) method:
+To read a secret from Key Vault, use the [get_secret](/python/api/azure-keyvault-secrets/azure.keyvault.secrets.secretclient?#azure-keyvault-secrets-secretclient-get-secret) method:
```python
retrieved_secret = client.get_secret(secretName)
```
The secret value is contained in `retrieved_secret.value`.
-You can also retrieve a secret with the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show) or the Azure PowerShell command [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
+You can also retrieve a secret with the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show) or the Azure PowerShell cmdlet [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
### Delete a secret
-To delete a secret, use the [begin_delete_secret](/python/api/azure-keyvault-secrets/azure.keyvault.secrets.secretclient?#begin-delete-secret-name-kwargs-) method:
+To delete a secret, use the [begin_delete_secret](/python/api/azure-keyvault-secrets/azure.keyvault.secrets.secretclient?#azure-keyvault-secrets-secretclient-begin-delete-secret) method:
```python
poller = client.begin_delete_secret(secretName)
deleted_secret = poller.result()
```
The `begin_delete_secret` method is asynchronous and returns a poller object. Calling the poller's `result` method waits for its completion.
-You can verify that the secret had been removed with the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show) or the Azure PowerShell command [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
+You can verify that the secret has been removed with the Azure CLI command [az keyvault secret show](/cli/azure/keyvault/secret?#az-keyvault-secret-show) or the Azure PowerShell cmdlet [Get-AzKeyVaultSecret](/powershell/module/az.keyvault/get-azkeyvaultsecret).
Once deleted, a secret remains in a deleted but recoverable state for a time. If you run the code again, use a different secret name.
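Putting the snippets above together, a minimal sketch of *kv_secrets.py* could look like the following. The `KEY_VAULT_NAME` environment variable and the secret name and value are placeholder assumptions:

```python
import os
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Build the vault URI from an environment variable (assumed name).
keyVaultName = os.environ["KEY_VAULT_NAME"]
KVUri = f"https://{keyVaultName}.vault.azure.net"

credential = DefaultAzureCredential()
client = SecretClient(vault_url=KVUri, credential=credential)

secretName = "mySecret"        # placeholder secret name
secretValue = "mySecretValue"  # placeholder secret value

# Save a secret
client.set_secret(secretName, secretValue)

# Retrieve the secret
retrieved_secret = client.get_secret(secretName)
print(f"Your secret is '{retrieved_secret.value}'.")

# Delete the secret and wait for the operation to finish
poller = client.begin_delete_secret(secretName)
deleted_secret = poller.result()
print(f"Deleted secret '{deleted_secret.name}'.")
```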
load-balancer Load Balancer Tcp Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-tcp-reset.md
The setting works for inbound connections only. To avoid losing the connection,
TCP keep-alive works for scenarios where battery life isn't a constraint. It isn't recommended for mobile applications. Using a TCP keep-alive in a mobile application can drain the device battery faster.
+## Order of precedence
+
+It's important to consider how the idle timeout values set for different IPs can interact.
+
+### Inbound
+
+- If an inbound load balancer rule has an idle timeout value that's different from the idle timeout of the frontend IP it references, the load balancer rule's idle timeout takes precedence.
+- If an inbound NAT rule has an idle timeout value that's different from the idle timeout of the frontend IP it references, the idle timeout configured on the inbound NAT rule takes precedence.
+
+### Outbound
+
+- If an outbound rule has an idle timeout value other than 4 minutes (the value that the public IP outbound idle timeout is locked at), the outbound rule's idle timeout takes precedence.
+- Because a NAT gateway always takes precedence over load balancer outbound rules (and over public IP addresses assigned directly to VMs), the idle timeout value assigned to the NAT gateway is used. Along the same lines, the locked 4-minute public IP outbound idle timeout of any IPs assigned to the NAT gateway isn't considered.
## Limitations
load-balancer Troubleshoot Rhc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/troubleshoot-rhc.md
Title: Troubleshoot Azure Load Balancer resource health, frontend, and backend availability issues
description: Use the available metrics to diagnose your degraded or unavailable Azure Standard Load Balancer.
Last updated 08/14/2020
The next place we need to look is our health probe status metric to determine wh
Let's say we check our health probe status and find out that all instances are showing as unhealthy. This finding explains why our data path is unavailable as traffic has nowhere to go. We should then go through the following checklist to rule out common configuration errors:

* Check the CPU utilization for your resources to determine if they are under high load.
  * You can check this by viewing the resource's Percentage CPU metric via the Metrics page. Learn how to [Troubleshoot high-CPU issues for Azure virtual machines](/troubleshoot/azure/virtual-machines/troubleshoot-high-cpu-issues-azure-windows-vm).
-* If using an HTTP or HTTPS probe check if the application is healthy and responsive
- * Validate application is functional by directly accessing the applications through the private IP address or instance-level public IP address associated with your backend instance
-* Review the Network Security Groups applied to our backend resources. Ensure that there are no rules of a higher priority than AllowAzureLoadBalancerInBound that will block the health probe
- * You can do this by visiting the Networking blade of your backend VMs or Virtual Machine Scale Sets
- * If you find this NSG issue is the case, move the existing Allow rule or create a new high priority rule to allow AzureLoadBalancer traffic
-* Check your OS. Ensure your VMs are listening on the probe port and review their OS firewall rules to ensure they aren't blocking the probe traffic originating from IP address `168.63.129.16`
- * You can check listening ports by running `netstat -a` from a Windows command prompt or `netstat -l` from a Linux terminal
-* Don't place a firewall NVA VM in the backend pool of the load balancer, use [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) to route traffic to backend instances through the firewall
-* Ensure you're using the right protocol, if using HTTP to probe a port listening for a non-HTTP application the probe will fail
+* If you're using an HTTP or HTTPS probe, check if the application is healthy and responsive.
+ * Validate that the application is functional by directly accessing it through the private IP address or instance-level public IP address associated with your backend instance.
+* Review the Network Security Groups applied to our backend resources. Ensure that there are no rules of a higher priority than AllowAzureLoadBalancerInBound that will block the health probe.
+ * You can do this by visiting the Networking blade of your backend VMs or Virtual Machine Scale Sets.
+ * If this NSG issue is the cause, move the existing Allow rule or create a new high-priority rule to allow AzureLoadBalancer traffic.
+* Check your OS. Ensure your VMs are listening on the probe port and review their OS firewall rules to ensure they aren't blocking the probe traffic originating from IP address `168.63.129.16`.
+ * You can check listening ports by running `netstat -a` from a Windows command prompt or `netstat -l` from a Linux terminal.
+* Don't place a firewall NVA VM in the backend pool of the load balancer. Use [user-defined routes](../virtual-network/virtual-networks-udr-overview.md#user-defined) to route traffic to backend instances through the firewall.
+* Ensure you're using the right protocol. For example, a probe using HTTP to probe a port listening for a non-HTTP application fails.
If you've gone through this checklist and are still finding health probe failures, there may be rare platform issues impacting the probe service for your instances. In this case, Azure has your back and an automated alert is sent to our team to rapidly resolve all platform issues.
logic-apps Call From Power Automate Power Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/call-from-power-automate-power-apps.md
Here are errors that might happen when you export your logic app as a custom con
To connect to the logic app that you exported with your Power Automate flow:
-1. Sign in to [Power Automate](https://flow.microsoft.com).
+1. Sign in to [Power Automate](https://make.powerautomate.com).
1. From the **Power Automate** home page menu, select **My flows**.
To connect to the logic app that you exported with your Power Automate flow:
## Delete logic app connector from Power Automate
-1. Sign in to [Power Automate](https://flow.microsoft.com).
+1. Sign in to [Power Automate](https://make.powerautomate.com).
1. On the **Power Automate** home page, select **Data** &gt; **Custom connectors** in the menu.
logic-apps Create Single Tenant Workflows Visual Studio Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-visual-studio-code.md
For more information, review the [Azurite documentation](https://github.com/Azur
* [C# for Visual Studio Code extension](https://marketplace.visualstudio.com/items?itemName=ms-vscode.csharp), which enables F5 functionality to run your logic app.
+ * [.NET SDK 6.x.x](https://dotnet.microsoft.com/download/dotnet/6.0), which includes the .NET Runtime 6.x.x, a prerequisite for the Azure Logic Apps (Standard) runtime.
+ * [Azure Functions Core Tools - 4.x version](https://github.com/Azure/azure-functions-core-tools/releases/tag/4.0.4865) by using the Microsoft Installer (MSI) version, which is `func-cli-X.X.XXXX-x*.msi`. These tools include a version of the same runtime that powers the Azure Functions runtime, which the Azure Logic Apps (Standard) extension uses in Visual Studio Code.
   * If you have an installation that's earlier than these versions, uninstall that version first, or make sure that the PATH environment variable points at the version that you download and install.
- * Azure Functions v3 support ends in late 2022. Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Throughout November 2022, existing Standard workflows in the Azure portal are automatically migrating to Azure Functions v4. Unless you deployed your Standard logic apps as NuGet-based projects or pinned your logic apps to a specific bundle version, this upgrade is designed to require no action from you nor have
- a runtime impact. However, if the exceptions apply to you, or for more information about Azure Functions v3 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
+ * Azure Functions v3 support in Azure Logic Apps ends on March 31, 2023. Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Since January 31, 2023, existing Standard workflows in the Azure portal were automatically migrated to Azure Functions v4.
+
+ Unless you deployed your Standard logic apps as NuGet-based projects, pinned your logic apps to a specific bundle version, or Microsoft determined that you had to take action before the automatic migration, this upgrade is designed to require no action from you nor have a runtime impact. However, if the exceptions apply to you, or for more information about Azure Functions v3 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
* [Azure Logic Apps (Standard) extension for Visual Studio Code](https://go.microsoft.com/fwlink/p/?linkid=2143167).
logic-apps Export From Microsoft Flow Logic App Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/export-from-microsoft-flow-logic-app-template.md
Last updated 01/23/2023
[!INCLUDE [logic-apps-sku-consumption](../../includes/logic-apps-sku-consumption.md)]
-To extend and expand your flow's capabilities, you can migrate that flow from [Power Automate](https://flow.microsoft.com) to a Consumption logic app workflow that runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md). You can export your flow as an Azure Resource Manager template for a logic app, deploy that logic app template to an Azure resource group, and then open that logic app in the workflow designer.
+To extend and expand your flow's capabilities, you can migrate that flow from [Power Automate](https://make.powerautomate.com) to a Consumption logic app workflow that runs in [multi-tenant Azure Logic Apps](logic-apps-overview.md). You can export your flow as an Azure Resource Manager template for a logic app, deploy that logic app template to an Azure resource group, and then open that logic app in the workflow designer.
> [!IMPORTANT]
> Export to Azure Logic Apps is unavailable for Power Automate flows created after August 2020. In October 2020, Power Automate
Not all Power Automate connectors are available in Azure Logic Apps. You can mig
## Export your flow
-1. Sign in to [Power Automate](https://flow.microsoft.com), and select **My flows**. Find and select your flow. On the toolbar, select the ellipses (**...**) button > **Export** > **Logic Apps template (.json)**.
+1. Sign in to [Power Automate](https://make.powerautomate.com), and select **My flows**. Find and select your flow. On the toolbar, select the ellipses (**...**) button > **Export** > **Logic Apps template (.json)**.
![Export flow from Power Automate](./media/export-from-microsoft-flow-logic-app-template/export-flow.png)
machine-learning Apache Spark Azure Ml Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/apache-spark-azure-ml-concepts.md
Title: "Apache Spark in Azure Machine Learning (preview)"
-description: This article explains difference options for accessing Apache Spark in Azure Machine Learning.
+description: This article explains the options for accessing Apache Spark in Azure Machine Learning.
Last updated 01/30/2023
-#Customer intent: As a Full Stack ML Pro, I want to use Apache Spark in Azure Machine Learning.
+#Customer intent: As a full-stack machine learning pro, I want to use Apache Spark in Azure Machine Learning.
# Apache Spark in Azure Machine Learning (preview)
-The Azure Machine Learning integration with Azure Synapse Analytics (preview) provides easy access to distributed computing, using the Apache Spark framework. This integration offers these Apache Spark computing experiences:
+
+Azure Machine Learning integration with Azure Synapse Analytics (preview) provides easy access to distributed computing through the Apache Spark framework. This integration offers these Apache Spark computing experiences:
- Managed (Automatic) Spark compute
- Attached Synapse Spark pool

## Managed (Automatic) Spark compute
-Azure Machine Learning Managed (Automatic) Spark compute is the easiest way to execute distributed computing tasks in the Azure Machine Learning environment, using the Apache Spark framework. Azure Machine Learning users can use a fully managed, serverless, on-demand Apache Spark compute cluster. Those users can avoid the need to create an Azure Synapse Workspace and an Azure Synapse Spark pool. Users can define the resources, including
-- instance type
-- Apache Spark runtime version
+Azure Machine Learning Managed (Automatic) Spark compute is the easiest way to accomplish distributed computing tasks in the Azure Machine Learning environment by using the Apache Spark framework. Azure Machine Learning users can use a fully managed, serverless, on-demand Apache Spark compute cluster. Those users can avoid the need to create an Azure Synapse workspace and a Synapse Spark pool.
-to access the Managed (Automatic) Spark compute in Azure Machine Learning Notebooks, for
+Users can define resources, including instance type and Apache Spark runtime version. They can then use those resources to access Managed (Automatic) Spark compute in Azure Machine Learning notebooks for:
-- [interactive Spark code development](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
+- [Interactive Spark code development](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
- [Spark batch job submissions](./how-to-submit-spark-jobs.md)
-- [running machine learning pipelines with a Spark component](./how-to-submit-spark-jobs.md#spark-component-in-a-pipeline-job)
+- [Running machine learning pipelines with a Spark component](./how-to-submit-spark-jobs.md#spark-component-in-a-pipeline-job)
-### Some points to consider
-Managed (Automatic) Spark compute works well for most user scenarios that require quick access to distributed computing using Apache Spark. To make an informed decision, however, users should consider the advantages and disadvantages of this approach.
+### Points to consider
-### Advantages
+Managed (Automatic) Spark compute works well for most user scenarios that require quick access to distributed computing through Apache Spark. But to make an informed decision, users should consider the advantages and disadvantages of this approach.
+
+Advantages:
-- No dependencies on other Azure resources to be created for Apache Spark
-- No permissions required in the subscription to create Synapse-related resources
-- No need for SQL pool quota
-### Disadvantages
-
-- Persistent Hive metastore is missing. Therefore, Managed (Automatic) Spark compute only supports in-memory Spark SQL
-- No available tables or databases
-- Missing Purview integration
-- Linked Services not available
-- Fewer Data sources/connectors
-- Missing pool-level configuration
-- Missing pool-level library management
-- Partial support for `mssparkutils`
+- There are no dependencies on other Azure resources to be created for Apache Spark.
+- No permissions are required in the subscription to create Azure Synapse-related resources.
+- There's no need for SQL pool quotas.
+
+Disadvantages:
+
+- A persistent Hive metastore is missing. Managed (Automatic) Spark compute supports only in-memory Spark SQL.
+- No tables or databases are available.
+- Azure Purview integration is missing.
+- Linked services aren't available.
+- There are fewer data sources and connectors.
+- Pool-level configuration is missing.
+- Pool-level library management is missing.
+- There's only partial support for `mssparkutils`.
### Network configuration
-As of January 2023, the Managed (Automatic) Spark compute doesn't support managed VNet or private endpoint creation to Azure Synapse.
-### Inactivity periods and tear down mechanism
-A Managed (Automatic) Spark compute (**cold start**) resource might need three to five minutes to start the Spark session, when first launched. The automated Managed (Automatic) Spark compute provisioning, backed by Azure Synapse, causes this delay. Once the Managed (Automatic) Spark compute is provisioned, and an Apache Spark session starts, subsequent code executions (**warm start**) won't experience this delay. The Spark session configuration offers an option that defines a session timeout (in minutes). The Spark session will terminate after an inactivity period that exceeds the user-defined timeout. If another Spark session doesn't start in the following 10 minutes, resources provisioned for the Managed (Automatic) Spark compute will be torn down. Once the Managed (Automatic) Spark compute resource tear-down happens, submission of the next job will require a *cold start*. The next visualization shows some session inactivity period and cluster teardown scenarios.
+As of January 2023, creating a Managed (Automatic) Spark compute inside a virtual network and creating a private endpoint to Azure Synapse are not supported.
+
+### Inactivity periods and tear-down mechanism
+
+A Managed (Automatic) Spark compute (*cold start*) resource might need three to five minutes to start the Spark session when it's first launched. The automated Managed (Automatic) Spark compute provisioning, backed by Azure Synapse, causes this delay. After the Managed (Automatic) Spark compute is provisioned and an Apache Spark session starts, subsequent code executions (*warm start*) won't experience this delay.
+The Spark session configuration offers an option that defines a session timeout (in minutes). The Spark session will end after an inactivity period that exceeds the user-defined timeout. If another Spark session doesn't start in the following 10 minutes, resources provisioned for the Managed (Automatic) Spark compute will be torn down.
+
+After the Managed (Automatic) Spark compute resource tear-down happens, submission of the next job will require a *cold start*. The next visualization shows some session inactivity period and cluster teardown scenarios.
+ ## Attached Synapse Spark pool
-A Synapse Spark pool created in an Azure Synapse workspace becomes available in the Azure Machine Learning workspace with the Attached Synapse Spark pool. This option may be suitable for the users who want to reuse an existing Azure Synapse Spark pool. Attachment of an Azure Synapse Spark pool to the Azure Machine Learning workspace requires [other steps](./how-to-manage-synapse-spark-pool.md), before the Azure Synapse Spark pool can be used in the Azure Machine Learning for
-- [interactive Spark code development](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
-- [Spark batch job submission](./how-to-submit-spark-jobs.md), or
-- [running machine learning pipelines with a Spark component](./how-to-submit-spark-jobs.md#spark-component-in-a-pipeline-job)
+A Spark pool created in an Azure Synapse workspace becomes available in the Azure Machine Learning workspace with the attached Synapse Spark pool. This option might be suitable for users who want to reuse an existing Synapse Spark pool.
+
+Attachment of a Synapse Spark pool to an Azure Machine Learning workspace requires [other steps](./how-to-manage-synapse-spark-pool.md) before you can use the pool in Azure Machine Learning for:
+
+- [Interactive Spark code development](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
+- [Spark batch job submission](./how-to-submit-spark-jobs.md)
+- [Running machine learning pipelines with a Spark component](./how-to-submit-spark-jobs.md#spark-component-in-a-pipeline-job)
-While an attached Synapse Spark pool provides access to native Synapse features, the user is responsible for provisioning, attaching, configuring, and managing the Synapse Spark pool.
+An attached Synapse Spark pool provides access to native Azure Synapse features. The user is responsible for provisioning, attaching, configuring, and managing the Synapse Spark pool.
-The Spark session configuration for an attached Synapse Spark pool also offers an option to define a session timeout (in minutes). The session timeout behavior resembles the description seen in [the previous section](#inactivity-periods-and-tear-down-mechanism), except the associated resources are never torn down after the session timeout.
+The Spark session configuration for an attached Synapse Spark pool also offers an option to define a session timeout (in minutes). The session timeout behavior resembles the description in [the previous section](#inactivity-periods-and-tear-down-mechanism), except that the associated resources are never torn down after the session timeout.
## Defining Spark cluster size
-You can define three parameter values
-- number of executors
-- executor cores
-- executor memory
+You can define Spark cluster size by using three parameter values in Azure Machine Learning Spark jobs:
-in Azure Machine Learning Spark jobs. You should consider an Azure Machine Learning Apache Spark executor as an equivalent of Azure Spark worker nodes. An example will explain these parameters. Let's say that you have defined number of executors as 6 (equivalent to six worker nodes), executor cores as 4, and executor memory as 28 GB. Your Spark job will then have access to a cluster with 24 cores and 168-GB memory.
+- Number of executors
+- Executor cores
+- Executor memory
+
+You should consider an Azure Machine Learning Apache Spark executor as an equivalent of Azure Spark worker nodes. An example can explain these parameters. Let's say that you defined the number of executors as 6 (equivalent to six worker nodes), executor cores as 4, and executor memory as 28 GB. Your Spark job will then have access to a cluster with 24 cores and 168 GB of memory.
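As an illustration only, here's a hedged sketch of how those three values might be expressed when defining a standalone Spark job with the `azure-ai-ml` Python SDK. The entry script, code folder, instance type, and runtime version are placeholder assumptions, and the exact parameter surface can vary by SDK version:

```python
from azure.ai.ml import spark

# 6 executors x 4 cores each = 24 cores; 6 x 28 GB = 168 GB of executor memory.
spark_job = spark(
    display_name="cluster-size-example",
    code="./src",                          # placeholder folder with the entry script
    entry={"file": "my_spark_script.py"},  # placeholder entry file
    driver_cores=1,
    driver_memory="2g",
    executor_cores=4,
    executor_memory="28g",
    executor_instances=6,
    resources={
        "instance_type": "Standard_E8S_V3",  # placeholder instance type
        "runtime_version": "3.2.0",          # placeholder runtime version
    },
)
```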
## Ensuring resource access for Spark jobs
-To access data and other resources, a Spark job can either use either user identity passthrough, or a managed identity. This table summarizes the different mechanisms Spark jobs use to access resources.
+
+To access data and other resources, a Spark job can use either a user identity passthrough or a managed identity. This table summarizes the mechanisms that Spark jobs use to access resources.
|Spark pool|Supported identities|Default identity|
| - | -- | - |
|Managed (Automatic) Spark compute|User identity and managed identity|User identity|
|Attached Synapse Spark pool|User identity and managed identity|Managed identity - compute identity of the attached Synapse Spark pool|
-[This page](./how-to-submit-spark-jobs.md#ensuring-resource-access-for-spark-jobs) describes Spark job resource access. In a Notebooks session, both the Managed (Automatic) Spark compute and the attached Synapse Spark pool use user identity passthrough for data access during [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md).
+[This article](./how-to-submit-spark-jobs.md#ensuring-resource-access-for-spark-jobs) describes resource access for Spark jobs. In a notebook session, both the Managed (Automatic) Spark compute and the attached Synapse Spark pool use user identity passthrough for data access during [interactive data wrangling](./interactive-data-wrangling-with-apache-spark-azure-ml.md).
> [!NOTE]
-> - To ensure successful Spark job execution, assign **Contributor** and **Storage Blob Data Contributor** roles, on the Azure storage account used for data input and output, to the identity used for the Spark job.
-> - If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace, and that workspace has an associated managed virtual network associated, [configure a managed private endpoint to storage account](../synapse-analytics/security/connect-to-a-secure-storage-account.md), to ensure data access.
+> To ensure successful Spark job execution, assign **Contributor** and **Storage Blob Data Contributor** roles (on the Azure storage account that's used for data input and output) to the identity that's used for submitting the Spark job.
+>
+> If an [attached Synapse Spark pool](./how-to-manage-synapse-spark-pool.md) points to a Synapse Spark pool in an Azure Synapse workspace, and that workspace has an associated managed virtual network, [configure a managed private endpoint to a storage account](../synapse-analytics/security/connect-to-a-secure-storage-account.md). This configuration will help ensure data access.
-This [quickstart guide](./quickstart-spark-jobs.md) describes how to start using Managed (Automatic) Spark compute to submit your Spark jobs in Azure Machine Learning.
+[This quickstart](./quickstart-spark-jobs.md) describes how to start using Managed (Automatic) Spark compute to submit your Spark jobs in Azure Machine Learning.
## Next steps

-- [Quickstart: Apache Spark jobs in Azure Machine Learning (preview)](./quickstart-spark-jobs.md)
+ - [Attach and manage a Synapse Spark pool in Azure Machine Learning (preview)](./how-to-manage-synapse-spark-pool.md)
-- [Interactive Data Wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
+- [Interactive data wrangling with Apache Spark in Azure Machine Learning (preview)](./interactive-data-wrangling-with-apache-spark-azure-ml.md)
- [Submit Spark jobs in Azure Machine Learning (preview)](./how-to-submit-spark-jobs.md)
-- [Code samples for Spark jobs using Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)
-- [Code samples for Spark jobs using Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
+- [Code samples for Spark jobs using the Azure Machine Learning CLI](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/spark)
+- [Code samples for Spark jobs using the Azure Machine Learning Python SDK](https://github.com/Azure/azureml-examples/tree/main/sdk/python/jobs/spark)
machine-learning How To Datastore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md
Last updated 02/08/2023
-# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute, to train my machine learning models.
+# Customer intent: As an experienced Python developer, I need to make my data in Azure storage available to my remote compute resource, to train my machine learning models.
# Create datastores
[!INCLUDE [dev v2](../../includes/machine-learning-dev-v2.md)]
-In this article, learn how to connect to Azure data storage services, with Azure Machine Learning datastores.
+In this article, learn how to connect to Azure data storage services with Azure Machine Learning datastores.
## Prerequisites
In this article, learn how to connect to Azure data storage services, with Azure
- An Azure Machine Learning workspace.

> [!NOTE]
-> Azure Machine Learning datastores do **not** create the underlying storage accounts. Instead, they link an **existing** storage account for Azure Machine Learning use. Azure Machine Learning datastores are not required for this. If you have access to the underlying data, you can use storage URIs directly.
+> Azure Machine Learning datastores do **not** create the underlying storage account resources. Instead, they link an **existing** storage account for Azure Machine Learning use. Azure Machine Learning datastores are not required for this. If you have access to the underlying data, you can use storage URIs directly.
## Create an Azure Blob datastore
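As a brief, hedged illustration (not part of the original excerpt), a credential-less Blob datastore could be registered with the `azure-ai-ml` Python SDK roughly like this. The subscription, resource group, workspace, account, and container names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AzureBlobDatastore

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Credential-less datastore: data access relies on the identity of the caller or compute.
blob_datastore = AzureBlobDatastore(
    name="example_blob_datastore",
    description="Datastore pointing to an existing blob container.",
    account_name="<storage-account-name>",
    container_name="<container-name>",
)

ml_client.create_or_update(blob_datastore)
```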
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
version = registered_model.version
__endpoint.yaml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml":::
+ <!-- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/create-endpoint.yaml"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
version = registered_model.version
# [Azure CLI](#tab/cli)
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint":::
+ <!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_endpoint"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
version = registered_model.version
__sklearn-deployment.yaml__
- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml":::
+ <!-- :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sklearn-deployment.yaml"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
version = registered_model.version
# [Azure CLI](#tab/cli)
- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment":::
+ <!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="create_sklearn_deployment"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
Once your deployment completes, your deployment is ready to serve request. One o
**sample-request-sklearn.json**
+<!-- :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sample-request-sklearn.json"::: -->
> [!NOTE]
> Notice how the key `input_data` has been used in this example instead of `inputs` as used in MLflow serving. This is because Azure Machine Learning requires a different input format to be able to automatically generate the swagger contracts for the endpoints. See [Differences between models deployed in Azure Machine Learning and MLflow built-in server](how-to-deploy-mlflow-models.md#differences-between-models-deployed-in-azure-machine-learning-and-mlflow-built-in-server) for details about expected input format.
To submit a request to the endpoint, you can do as follows:
# [Azure CLI](#tab/cli)
+<!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="test_sklearn_deployment"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
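The SDK snippet for this tab isn't included in this excerpt; as a hedged sketch, invoking the endpoint might look like the following, assuming the `ml_client` object and endpoint name from the earlier steps:

```python
# Send the sample request file to the endpoint (names assumed from earlier steps).
response = ml_client.online_endpoints.invoke(
    endpoint_name=endpoint_name,
    request_file="sample-request-sklearn.json",
)
print(response)
```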
Use the following steps to deploy an MLflow model with a custom scoring script.
**sample-request-sklearn.json**
- :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sample-request-sklearn.json":::
+ <!-- :::code language="json" source="~/azureml-examples-main/cli/endpoints/online/mlflow/sample-request-sklearn.json"::: -->
To submit a request to the endpoint, you can do as follows:
Once you're done with the endpoint, you can delete the associated resources:
# [Azure CLI](#tab/cli)
+<!-- :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-mlflow.sh" ID="delete_endpoint"::: -->
# [Python (Azure ML SDK)](#tab/sdk)
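For the SDK tab, a hedged sketch of the cleanup call, assuming the same `ml_client` and endpoint name used earlier:

```python
# Deleting the endpoint also removes its deployments; this starts a long-running operation.
ml_client.online_endpoints.begin_delete(name=endpoint_name).result()
```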
machine-learning How To Designer Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-python.md
- Previously updated : 10/21/2021
+ Last updated : 02/08/2023

# Run Python code in Azure Machine Learning designer
-In this article, you learn how to use the [Execute Python Script](algorithm-module-reference/execute-python-script.md) component to add custom logic to Azure Machine Learning designer. In the following how-to, you use the Pandas library to do simple feature engineering.
+In this article, you'll learn how to use the [Execute Python Script](algorithm-module-reference/execute-python-script.md) component to add custom logic to the Azure Machine Learning designer. In this how-to, you use the Pandas library to do simple feature engineering.
-You can use the in-built code editor to quickly add simple Python logic. If you want to add more complex code or upload additional Python libraries, you should use the zip file method.
+You can use the built-in code editor to quickly add simple Python logic. You should use the zip file method to add more complex code, or to upload additional Python libraries.
-The default execution environment uses the Anacondas distribution of Python. For a complete list of pre-installed packages, see the [Execute Python Script component reference](algorithm-module-reference/execute-python-script.md) page.
+The default execution environment uses the Anaconda distribution of Python. See the [Execute Python Script component reference](algorithm-module-reference/execute-python-script.md) page for a complete list of pre-installed packages.
![Execute Python input map](media/how-to-designer-python/execute-python-map.png)
The default execution environment uses the Anacondas distribution of Python. For
### Connect input datasets
-This article uses the sample dataset, **Automobile price data (Raw)**.
+This article uses the **Automobile price data (Raw)** sample dataset.
1. Drag and drop your dataset to the pipeline canvas.
1. Connect the output port of the dataset to the top-left input port of the **Execute Python Script** component. The designer exposes the input as a parameter to the entry point script.
-
+ The right input port is reserved for zipped Python libraries.

   ![Connect datasets](media/how-to-designer-python/connect-dataset.png)
-
-1. Take note of which input port you use. The designer assigns the left input port to the variable `dataset1` and the middle input port to `dataset2`.
+1. Carefully note the specific input port you use. The designer assigns the left input port to the variable `dataset1`, and the middle input port to `dataset2`.
-Input components are optional since you can generate or import data directly in the **Execute Python Script** component.
+Input components are optional, since you can generate or import data directly in the **Execute Python Script** component.
### Write your Python code
-The designer provides an initial entry point script for you to edit and enter your own Python code.
+The designer provides an initial entry point script for you to edit and enter your own Python code.
-In this example, you use Pandas to combine two columns found in the automobile dataset, **Price** and **Horsepower**, to create a new column, **Dollars per horsepower**. This column represents how much you pay for each horsepower, which could be a useful feature to decide if a car is a good deal for the money.
+In this example, you use Pandas to combine two of the automobile dataset columns - **Price** and **Horsepower** - to create a new column, **Dollars per horsepower**. This column represents how much you pay for each unit of horsepower, which could be a useful feature for deciding whether a specific car is a good deal for its price.
1. Select the **Execute Python Script** component.
1. In the pane that appears to the right of the canvas, select the **Python script** text box.
-1. Copy and paste the following code into the text box.
+1. Copy and paste the following code into the text box:
```python
import pandas as pd

def azureml_main(dataframe1 = None, dataframe2 = None):
    # Combine price and horsepower into a new feature column.
    dataframe1['Dollar/HP'] = dataframe1.price / dataframe1.horsepower
    return dataframe1
```
- Your pipeline should look the following image:
+ Your pipeline should look like this image:
![Execute Python pipeline](media/how-to-designer-python/execute-python-pipeline.png)
- The entry point script must contain the function `azureml_main`. There are two function parameters that map to the two input ports for the **Execute Python Script** component.
+ The entry point script must contain the function `azureml_main`. The function has two function parameters that map to the two input ports for the **Execute Python Script** component.
- The return value must be a Pandas Dataframe. You can return up to two dataframes as component outputs.
+ The return value must be a Pandas Dataframe. You can return at most two dataframes as component outputs.
1. Submit the pipeline.
-Now, you have a dataset with the new feature **Dollars/HP**, which could be useful in training a car recommender. This is an example of feature extraction and dimensionality reduction.
+Now you have a dataset with a new **Dollars/HP** feature, which could help to train a car recommender. This example shows feature extraction and dimensionality reduction.
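As a side note on the two-output capability mentioned above, here's a hedged sketch of an entry point that returns two dataframes, one per output port of the component; the second output here is just illustrative summary statistics:

```python
import pandas as pd

def azureml_main(dataframe1 = None, dataframe2 = None):
    # First output: the transformed data; second output: illustrative summary statistics.
    dataframe1['Dollar/HP'] = dataframe1.price / dataframe1.horsepower
    summary = dataframe1.describe()
    return dataframe1, summary
```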
## Next steps
-Learn how to [import your own data](v1/how-to-designer-import-data.md) in Azure Machine Learning designer.
+Learn how to [import your own data](v1/how-to-designer-import-data.md) in Azure Machine Learning designer.
machine-learning How To Designer Transform Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-designer-transform-data.md
description: Learn how to import and transform data in Azure Machine Learning de
Previously updated : 10/21/2021
Last updated : 02/08/2023

# Transform data in Azure Machine Learning designer
+In this article, you'll learn how to transform and save datasets in the Azure Machine Learning designer, to prepare your own data for machine learning.
-In this article, you learn how to transform and save datasets in Azure Machine Learning designer so that you can prepare your own data for machine learning.
+You'll use the sample [Adult Census Income Binary Classification](./samples-designer.md) dataset to prepare two datasets: one dataset that includes adult census information from only the United States, and another dataset that includes census information from non-US adults.
-You will use the sample [Adult Census Income Binary Classification](./samples-designer.md) dataset to prepare two datasets: one dataset that includes adult census information from only the United States and another dataset that includes census information from non-US adults.
-
-In this article, you learn how to:
+In this article, you'll learn how to:
1. Transform a dataset to prepare it for training.
1. Export the resulting datasets to a datastore.
-1. View results.
+1. View the results.
-This how-to is a prerequisite for the [how to retrain designer models](how-to-retrain-designer.md) article. In that article, you will learn how to use the transformed datasets to train multiple models with pipeline parameters.
+This how-to is a prerequisite for the [how to retrain designer models](how-to-retrain-designer.md) article. In that article, you'll learn how to use the transformed datasets to train multiple models, with pipeline parameters.
[!INCLUDE [machine-learning-missing-ui](../../includes/machine-learning-missing-ui.md)]

## Transform a dataset
-In this section, you learn how to import the sample dataset and split the data into US and non-US datasets. For more information on how to import your own data into the designer, see [how to import data](v1/how-to-designer-import-data.md).
+In this section, you'll learn how to import the sample dataset, and split the data into US and non-US datasets. See [how to import data](v1/how-to-designer-import-data.md) for more information about how to import your own data into the designer.
### Import data
-Use the following steps to import the sample dataset.
+Use these steps to import the sample dataset:
-1. Sign in to <a href="https://ml.azure.com?tabs=jre" target="_blank">ml.azure.com</a>, and select the workspace you want to work with.
+1. Sign in to <a href="https://ml.azure.com?tabs=jre" target="_blank">ml.azure.com</a>, and select the workspace you want to use.
1. Go to the designer. Select **Easy-to-use prebuilt components** to create a new pipeline.
1. Select a default compute target to run the pipeline.
-1. To the left of the pipeline canvas is a palette of datasets and components. Select **Datasets**. Then view the **Samples** section.
+1. To the left of the pipeline canvas, you'll see a palette of datasets and components. Select **Datasets**. Then view the **Samples** section.
1. Drag and drop the **Adult Census Income Binary classification** dataset onto the canvas.
Use the following steps to import the sample dataset.
### Split the data
-In this section, you use the [Split Data component](algorithm-module-reference/split-data.md) to identify and split rows that contain "United-States" in the "native-country" column.
+In this section, you'll use the [Split Data component](algorithm-module-reference/split-data.md) to identify and split rows that contain "United-States" in the "native-country" column.
-1. In the component palette to the left of the canvas, expand the **Data Transformation** section and find the **Split Data** component.
+1. To the left of the canvas, in the component palette, expand the **Data Transformation** section, and find the **Split Data** component.
-1. Drag the **Split Data** component onto the canvas, and drop the component below the dataset component.
+1. Drag the **Split Data** component onto the canvas, and drop that component below the dataset component.
1. Connect the dataset component to the **Split Data** component.
1. Select the **Split Data** component.
-1. In the component details pane to the right of the canvas, set **Splitting mode** to **Regular Expression**.
+1. To the right of the canvas in the component details pane, set **Splitting mode** to **Regular Expression**.
1. Enter the **Regular Expression**: `\"native-country" United-States`.
- The **Regular expression** mode tests a single column for a value. For more information on the Split Data component, see the related [algorithm component reference page](algorithm-module-reference/split-data.md).
+ The **Regular expression** mode tests a single column for a value. See the related [algorithm component reference page](algorithm-module-reference/split-data.md) for more information on the Split Data component.
Your pipeline should look like this:

## Save the datasets
-Now that your pipeline is set up to split the data, you need to specify where to persist the datasets. For this example, use the **Export Data** component to save your dataset to a datastore. For more information on datastores, see [Connect to Azure storage services](how-to-access-data.md)
+Now that you set up your pipeline to split the data, you must specify where to persist the datasets. For this example, use the **Export Data** component to save your dataset to a datastore. See [Connect to Azure storage services](how-to-access-data.md) for more information about datastores.
-1. In the component palette to the left of the canvas, expand the **Data Input and Output** section and find the **Export Data** component.
+1. To the left of the canvas in the component palette, expand the **Data Input and Output** section, and find the **Export Data** component.
1. Drag and drop two **Export Data** components below the **Split Data** component.
Now that your pipeline is set up to split the data, you need to specify where to
![Screenshot showing how to connect the Export Data components](media/how-to-designer-transform-data/export-data-pipeline.png).
-1. Select the **Export Data** component that is connected to the *left*-most port of the **Split Data** component.
+1. Select the **Export Data** component connected to the *left*-most port of the **Split Data** component.
- The order of the output ports matter for the **Split Data** component. The first output port contains the rows where the regular expression is true. In this case, the first port contains rows for US-based income, and the second port contains rows for non-US based income.
+ For the **Split Data** component, the output port order matters. The first output port contains the rows where the regular expression is true. In this case, the first port contains rows for US-based income, and the second port contains rows for non-US based income.
1. In the component details pane to the right of the canvas, set the following options:

   **Datastore type**: Azure Blob Storage
- **Datastore**: Select an existing datastore or select "New datastore" to create one now.
+ **Datastore**: Select an existing datastore, or select "New datastore" to create one now.
**Path**: `/data/us-income`

**File format**: csv

> [!NOTE]
- > This article assumes that you have access to a datastore registered to the current Azure Machine Learning workspace. For instructions on how to setup a datastore, see [Connect to Azure storage services](v1/how-to-connect-data-ui.md#create-datastores).
+ > This article assumes that you have access to a datastore registered to the current Azure Machine Learning workspace. See [Connect to Azure storage services](v1/how-to-connect-data-ui.md#create-datastores) for datastore setup instructions.
- If you don't have a datastore, you can create one now. For example purposes, this article will save the datasets to the default blob storage account associated with the workspace. It will save the datasets into the `azureml` container in a new folder called `data`.
+ You can create a datastore if you don't have one now. For example purposes, this article will save the datasets to the default blob storage account associated with the workspace. It will save the datasets into the `azureml` container, in a new folder named `data`.
1. Select the **Export Data** component connected to the *right*-most port of the **Split Data** component.
-1. In the component details pane to the right of the canvas, set the following options:
+1. To the right of the canvas in the component details pane, set the following options:
**Datastore type**: Azure Blob Storage
Now that your pipeline is set up to split the data, you need to specify where to
**File format**: csv
-1. Confirm the **Export Data** component connected to the left port of the **Split Data** has the **Path** `/data/us-income`.
+1. Verify that the **Export Data** component connected to the left port of the **Split Data** has the **Path** `/data/us-income`.
-1. Confirm the **Export Data** component connected to the right port has the **Path** `/data/non-us-income`.
+1. Verify that the **Export Data** component connected to the right port has the **Path** `/data/non-us-income`.
Your pipeline and settings should look like this:
Now that your pipeline is set up to split the data, you need to specify where to
### Submit the job
-Now that your pipeline is setup to split and export the data, submit a pipeline job.
+Now that you've set up your pipeline to split and export the data, submit a pipeline job.
-1. At the top of the canvas, select **Submit**.
+1. Select **Submit** at the top of the canvas.
-1. In the **Set up pipeline job** dialog, select **Create new** to create an experiment.
+1. In the **Set up pipeline job** dialog, select **Create new** to create an experiment.
- Experiments logically group together related pipeline jobs. If you run this pipeline in the future, you should use the same experiment for logging and tracking purposes.
+ Experiments logically group related pipeline jobs together. If you run this pipeline in the future, you should use the same experiment for logging and tracking purposes.
-1. Provide a descriptive experiment name like "split-census-data".
+1. Provide a descriptive experiment name, for example "split-census-data".
1. Select **Submit**.

## View results
-After the pipeline finishes running, you can view your results by navigating to your blob storage in the Azure portal. You can also view the intermediary results of the **Split Data** component to confirm that your data has been split correctly.
+After the pipeline finishes running, you can go to your blob storage in the Azure portal to view your results. You can also view the intermediary results of the **Split Data** component to confirm that your data has been split correctly.
1. Select the **Split Data** component.
-1. In the component details pane to the right of the canvas, select **Outputs + logs**.
+1. In the component details pane to the right of the canvas, select **Outputs + logs**.
-1. Select the visualize icon ![visualize icon](media/how-to-designer-transform-data/visualize-icon.png) next to **Results dataset1**.
+1. Select the visualize icon ![visualize icon](media/how-to-designer-transform-data/visualize-icon.png) next to **Results dataset1**.
-1. Verify that the "native-country" column only contains the value "United-States".
+1. Verify that the "native-country" column contains only the value "United-States".
-1. Select the visualize icon ![visualize icon](media/how-to-designer-transform-data/visualize-icon.png) next to **Results dataset2**.
+1. Select the visualize icon ![visualize icon](media/how-to-designer-transform-data/visualize-icon.png) next to **Results dataset2**.
1. Verify that the "native-country" column does not contain the value "United-States". ## Clean up resources
-Skip this section if you want to continue on with part 2 of this how to, [Retrain models with Azure Machine Learning designer](how-to-retrain-designer.md).
+If you want to continue with part two of this how-to, [Retrain models with Azure Machine Learning designer](how-to-retrain-designer.md), skip this section.
[!INCLUDE [aml-ui-cleanup](../../includes/aml-ui-cleanup.md)]

## Next steps
-In this article, you learned how to transform a dataset and save it to a registered datastore.
+In this article, you learned how to transform a dataset and save it to a registered datastore.
-Continue to the next part of this how-to series with [Retrain models with Azure Machine Learning designer](how-to-retrain-designer.md) to use your transformed datasets and pipeline parameters to train machine learning models.
+Continue to the next part of this how-to series, [Retrain models with Azure Machine Learning designer](how-to-retrain-designer.md), to use your transformed datasets and pipeline parameters to train machine learning models.
machine-learning How To Use Pipeline Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-pipeline-ui.md
Status and definitions:
||--|-|-|
| Not started | Job is submitted from client side and accepted in Azure ML services. Time spent in this stage is mainly in Azure ML service scheduling and preprocessing. | If there's no backend service issue, this time should be very short.| Open support case via Azure portal. |
|Preparing | In this status, job is pending for some preparation on job dependencies, for example, environment image building.| If you're using curated or registered custom environment, this time should be very short. | Check image building log. |
-|Inqueue | Job is pending for compute resource allocation. Time spent in this stage is mainly depending on the status of your compute cluster or job yield policy for scope job.| If you're using a cluster with enough compute resource, this time should be short. | Check with workspace admin whether to increase the max nodes of the target compute or change the job to another less busy compute. |
+|Inqueue | Job is pending compute resource allocation. Time spent in this stage mainly depends on the status of your compute cluster.| If you're using a cluster with enough compute resources, this time should be short. | Check with the workspace admin whether to increase the max nodes of the target compute, or change the job to another, less busy compute. |
|Running | Job is executing on remote compute. Time spent in this stage is mainly in two parts: <br> Runtime preparation: image pulling, Docker starting, and data preparation (mount or download). <br> User script execution. | This status is expected to be the most time-consuming one. | 1. Go to the source code and check if there's any user error. <br> 2. View the monitoring tab of compute metrics (CPU, memory, networking, etc.) to identify the bottleneck. <br> 3. Try online debugging with [interactive endpoints](how-to-interactive-jobs.md) if the job is running, or local debugging of your code. |
-| Finalizing | Job is in post processing after execution complete. Time spent in this stage is mainly for some post processes like: output uploading, metric/logs uploading and resources clean up.| It will be short for command job. However, might be very long for PRS/MPI job because for a distributed job, the finalizing status is from the first node starting finalizing to the last node done finalizing. | Change your step run output mode from upload to mount if you find unexpected long finalizing time, or open support case via Azure portal. |
--
-Along with the profiling, you can also use the *Output + logs* (on the details page), the Common Runtime enabled monitoring metric for PRS/MPI jobs.
+| Finalizing | Job is in post processing after execution completes. Time spent in this stage is mainly for post processes like output uploading, metric/log uploading, and resource cleanup.| This stage is short for a command job, but it might be very long for a PRS/MPI job because, for a distributed job, the finalizing status lasts from the first node starting to finalize until the last node finishes finalizing. | Change your step job output mode from upload to mount if you find an unexpectedly long finalizing time, or open a support case via the Azure portal. |
### Different view of Gantt chart
marketplace Azure Container Technical Assets Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-container-technical-assets-kubernetes.md
In addition to your solution domain, your engineering team should have knowledge
- The application must be deployable to Linux environment.
-- If running the CNAB packaging tool manually, you will need docker installed on your local machine.
+- If running the CNAB packaging tool manually, you'll need docker installed on your local machine.
## Limitations

- Container Marketplace supports only Linux platform-based AMD64 images.
- Managed AKS only.
-- Single containers are not supported.
-- Linked Azure Resource Manager templates are not supported.
+- Single containers aren't supported.
+- Linked Azure Resource Manager templates aren't supported.
> [!IMPORTANT]
> The Kubernetes application-based offer experience is in preview. Preview features are available on a self-service, opt-in basis. Previews are provided "as is" and "as available," and they're excluded from the service-level agreements and limited warranty. Previews are partially covered by customer support on a best-effort basis. As such, these features aren't meant for production use.
Ensure the Helm chart adheres to the following rules:
### Make updates based on your billing model
-After reviewing the billing models available, select one appropriate for your use case and complete the following steps:
+After reviewing the [available billing models][available billing models], select one appropriate for your use case and complete the following steps:
-Complete the following steps to add identifier in the *Per core* billing model:
+Complete the following steps to add an identifier for the *Per core*, *Per pod*, and *Per node* billing models:
-- Add a billing identifier label and cpu cores request to your `deployment.yaml` file.
+- Add the billing identifier label `azure-extensions-usage-release-identifier` to the Pod spec in your [workload][workload] YAML files (see the sketch at the end of this section for an illustration of the placement).
+  - If the workload is specified as Deployment, ReplicaSet, StatefulSet, or DaemonSet specs, add this label under **.spec.template.metadata.labels**.
+  - If the workload is specified directly as Pod specs, add this label under **.metadata.labels**.
- :::image type="content" source="./media/azure-container/billing-identifier-label.png" alt-text="A screenshot of a properly formatted billing identifier label in a deployment.yaml file. The content resembles the sample depoyment.yaml file linked in this article":::
- :::image type="content" source="./media/azure-container/resources.png" alt-text="A screenshot of CPU resource requests in a deployment.yaml file. The content resembles the sample depoyment.yaml file linked in this article.":::
-- Add a billing identifier value for `global.azure.billingidentifier` in `values.yaml`.
+ :::image type="content" source="./media/azure-container/billing-depoyment.png" alt-text="A screenshot of a properly formatted billing identifier label in a deployment.yaml file. The content resembles the sample depoyment.yaml file linked in this article.":::
- :::image type="content" source="./media/azure-container/billing-identifier-value.png" alt-text="A screenshot of a properly formatted values.yaml file, showing the global > Azure > billingIdentifier field.":::
-Complete the following steps to add a billing identifier label in the *Per pod* and *Per node* billing model:
-- Add a billing identifier label `azure-extensions-usage-release-identifier` to your `deployment.yaml` file (Under **Template** > **Metadata** > **Labels**>).
+ :::image type="content" source="./media/azure-container/billing-statefulsets.png" alt-text="A screenshot of a properly formatted billing identifier label in a statefulsets.yaml file. The content resembles the sample statefulsets.yaml file linked in this article.":::
-Note that at deployment time, the cluster extensions feature will replace the billing identifier value with the extension type name you provide while setting up plan details.
++
+ :::image type="content" source="./media/azure-container/billing-daemonsets.png" alt-text="A screenshot of CPU resource requests in a daemonsets.yaml file. The content resembles the sample daemonsets.yaml file linked in this article.":::
+++
+ :::image type="content" source="./media/azure-container/billing-pods.png" alt-text="A screenshot of CPU resource requests in a pods.yaml file. The content resembles the sample pods.yaml file linked in this article.":::
+
+- For the *perCore* billing model, specify a [CPU Request][CPU Request] by including the `resources:requests` field in the container resource manifest. This step is required only for the *perCore* billing model.
+
+ :::image type="content" source="./media/azure-container/percorebilling.png" alt-text="A screenshot of CPU resource requests in a pods.yaml file. The content resembles the sample per core billing model file linked in this article.":::
+
+Note that at deployment time, the cluster extensions feature will replace the billing identifier value with the extension instance name.
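The following Python sketch only illustrates where the billing identifier label from the preceding steps lands in the two workload shapes; it isn't part of the packaging tooling. The manifest contents and the `my-offer` placeholder value are assumptions (and the value is overwritten by the cluster extensions feature at deployment time, as noted above). It assumes PyYAML is installed.

```python
# Illustration of label placement only; not part of the CNAB/packaging tooling.
import yaml  # PyYAML

BILLING_LABEL = "azure-extensions-usage-release-identifier"

# Deployment/ReplicaSet/StatefulSet/DaemonSet: label under .spec.template.metadata.labels
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "sample-app"},
    "spec": {
        "template": {
            "metadata": {"labels": {BILLING_LABEL: "my-offer"}},  # placeholder value
            "spec": {"containers": [{"name": "app", "image": "example.azurecr.io/app:1.0"}]},
        }
    },
}

# Bare Pod spec: label under .metadata.labels
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "sample-pod", "labels": {BILLING_LABEL: "my-offer"}},
    "spec": {"containers": [{"name": "app", "image": "example.azurecr.io/app:1.0"}]},
}

print(yaml.safe_dump_all([deployment, pod], sort_keys=False))
```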
For examples configured to deploy the [Azure Voting App][azure-voting-app], see the following:
For an example of how to integrate `container-package-app` into an Azure Pipelin
- [Create your Kubernetes offer](azure-container-offer-setup.md) <!-- LINKS -->-
+[available billing models]: azure-container-technical-assets-kubernetes.md#available-billing-models
+[workload]:https://kubernetes.io/docs/concepts/workloads/
+[CPU Request]:https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#specify-a-cpu-request-and-a-cpu-limit
[cnab]: https://cnab.io/ [cluster-extensions]: ../aks/cluster-extensions.md [azure-voting-app]: https://github.com/Azure-Samples/kubernetes-offer-samples/tree/main/samples/k8s-offer-azure-vote/azure-vote
marketplace Commercial Marketplace Lead Management Instructions Azure Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-azure-table.md
If your customer relationship management (CRM) system isn't explicitly supported
## (Optional) Use Power Automate to get lead notifications
-You can use [Power Automate](/power-automate/) to automate notifications every time a lead is added to your Azure Storage table. If you don't have an account, you can [sign up for a free account](https://flow.microsoft.com/).
+You can use [Power Automate](/power-automate/) to automate notifications every time a lead is added to your Azure Storage table. If you don't have an account, you can [sign up for a free account](https://make.powerautomate.com/).
### Lead notification example
marketplace Commercial Marketplace Lead Management Instructions Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/partner-center-portal/commercial-marketplace-lead-management-instructions-https.md
This article explains how to create a new flow in Power Automate to generate the
## Create a flow by using Power Automate
-1. Open the [Power Automate](https://flow.microsoft.com/) webpage. Select **Sign in**. If you don't already have an account, select **Sign up free** to create one.
+1. Open the [Power Automate](https://make.powerautomate.com/) webpage. Select **Sign in**. If you don't already have an account, select **Sign up free** to create one.
1. Sign in, select **My flows**, and switch the Environment from **Microsoft (default)** to your Dataverse (CRM) Environment.
You can test your configuration with [Postman](https://app.getpostman.com/app/do
![Paste the HTTP POST URL](./media/commercial-marketplace-lead-management-instructions-https/paste-http-post-url.png)
-1. Go back to [Power Automate](https://flow.microsoft.com/). Find the flow you created to send leads by going to **My Flows** from the Power Automate menu bar. Select the ellipsis next to the flow name to see more options, and select **Edit**.
+1. Go back to [Power Automate](https://make.powerautomate.com/). Find the flow you created to send leads by going to **My Flows** from the Power Automate menu bar. Select the ellipsis next to the flow name to see more options, and select **Edit**.
1. Select **Test** in the upper-right corner, select **I'll perform the trigger action**, and then select **Test**. You'll see an indication at the top of the screen that the test has started.
migrate Tutorial Migrate Physical Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-migrate-physical-virtual-machines.md
The first step of migration is to set up the replication appliance. To set up th
![Finalize registration](./media/tutorial-migrate-physical-virtual-machines/finalize-registration.png)
-It may take some time after finalizing registration until discovered machines appear in the Migration and modernization tool. As VMs are discovered, the **Discovered servers** count rises.
+The Mobility service agent needs to be installed on the servers so that the replication appliance can discover them. Discovered machines appear in Azure Migrate: Server Migration. As VMs are discovered, the **Discovered servers** count rises.
![Discovered servers](./media/tutorial-migrate-physical-virtual-machines/discovered-servers.png)
+> [!NOTE]
+> We recommend that you perform discovery and assessment prior to the migration using the Azure Migrate: Discovery and assessment tool, a separate lightweight Azure Migrate appliance. You can deploy the appliance as a physical server to continuously discover servers and performance metadata. For detailed steps, see [Discover physical servers](tutorial-discover-physical.md).
+ ## Install the Mobility service
After you've verified that the test migration works as expected, you can migrate
## Next steps
-Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
+Investigate the [cloud migration journey](/azure/architecture/cloud-adoption/getting-started/migrate) in the Azure Cloud Adoption Framework.
mysql Tutorial Power Automate With Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-power-automate-with-mysql.md
This quickstart shows how to create an automated workflow using Power Automate
## Prerequisites
-* An account on [flow.microsoft.com](https://flow.microsoft.com).
+* An account on [make.powerautomate.com](https://make.powerautomate.com).
* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free).
For this tutorial, we'll use **instant cloud flow* that can be triggered manuall
## Specify an event to start the flow

Follow the steps to create an instant cloud flow with a manual trigger.
-1. In [Power Automate](https://flow.microsoft.com), select **Create** from the navigation bar on the left.
+1. In [Power Automate](https://make.powerautomate.com), select **Create** from the navigation bar on the left.
2. Under **Start from blank**, select **Instant cloud flow**.
3. Give your flow a name in the **Flow name** field and select **Manually trigger a flow**.
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
Sign in to the [Azure portal - Azure Orbital Preview](https://aka.ms/orbital/por
| **Region** | Select **West US 2**. |
| **Minimum viable contact duration** | Enter **PT1M**. |
| **Minimum elevation** | Enter **5.0**. |
- | **Auto track configuration** | Select **Disabled**. |
+ | **Auto track configuration** | Select **X-band**. |
| **Event Hubs Namespace** | Select an Azure Event Hubs namespace to which you'll send telemetry data for your contacts. You must select a subscription before you can select an Event Hubs namespace. |
| **Event Hubs Instance** | Select an Event Hubs instance that belongs to the previously selected namespace. This field appears only if you select an Event Hubs namespace first. |
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
The following modifications will trigger a packet core reinstall, during which y
- Detaching a data network from the packet core instance.
- Changing the packet core instance's custom location.
-If you're making any of these changes, we recommend modifying your packet core instance during a maintenance window to minimize the impact on your service. The packet core reinstall will take approximately 45 minutes, but this time may vary between systems. You should allow up to two hours for the process to complete.
+If you're making any of these changes, we recommend modifying your packet core instance during a maintenance window to minimize the impact on your service. You should allow up to two hours for the reinstall process to complete.
If you're making a change that doesn't trigger a reinstall, you can skip the next step and move to [Select the packet core instance to modify](#select-the-packet-core-instance-to-modify).
private-5g-core Reinstall Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reinstall-packet-core.md
Each Azure Private 5G Core site contains a packet core instance, which is a cloud-native implementation of the 3GPP standards-defined 5G Next Generation Core (5G NGC or 5GC).
-If you're experiencing issues with your deployment, reinstalling the packet core may help return it to a good state. In this how-to guide, you'll learn how to reinstall a packet core instance using the Azure portal.
+In this how-to guide, you'll learn how to reinstall a packet core instance using the Azure portal. If you're experiencing issues with your deployment, reinstalling the packet core may help return it to a good state.
+
+Reinstalling the packet core deletes the packet core instance and redeploys it with the existing site configuration. Site-dependent resources such as the **Packet Core Control Plane**, **Packet Core Data Plane**, and **Attached Data Network** aren't affected.
## Prerequisites

- Ensure you can sign in to the Azure portal using an account with access to the active subscription you used to create your private mobile network. This account must have the built-in Contributor or Owner role at the subscription scope.
-- If your packet core instance is still handling requests from your UEs, we recommend performing the reinstall during a maintenance window to minimize the impact on your service. The packet core reinstall will take approximately 45 minutes, but this time may vary between systems. You should allow up to two hours for the process to complete.
+- If your packet core instance is still handling requests from your UEs, we recommend performing the reinstall during a maintenance window to minimize the impact on your service. You should allow up to two hours for the reinstall process to complete.
- If you use Azure Active Directory (Azure AD) to authenticate access to your local monitoring tools, ensure your local machine has core kubectl access to the Azure Arc-enabled Kubernetes cluster. This requires a core kubeconfig file, which you can obtain by following [Set up kubectl access](commission-cluster.md#set-up-kubectl-access).

## View the packet core instance's installation status
-Before reinstalling, follow this step to check the packet core instance's installation status.
+Follow this step to check the packet core instance's installation status and to ensure no other processes are running before you attempt a reinstall.
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Search for and select the **Mobile Network** resource representing the private mobile network.
Before reinstalling, follow this step to check the packet core instance's instal
:::image type="content" source="media/packet-core-field.png" alt-text="Screenshot of the Azure portal showing the Packet Core field.":::
-1. Under the **Essentials** heading, check the current packet core state under the **Packet core installation state** field. If the status under this field indicates the packet core instance is already running a reinstall process, wait for it to finish before attempting another reinstall.
+1. Under the **Essentials** heading, check the current packet core state under the **Packet core installation state** field. If the status under this field indicates the packet core instance is already running a process, such as an upgrade or another reinstall, wait for it to finish before attempting the reinstall.
## Back up deployment information
To reinstall your packet core instance:
:::image type="content" source="media/reinstall-packet-core/reinstall-packet-core-confirmation.png" alt-text="Screenshot of the Azure portal showing the Reinstall packet core screen."::: 1. Select **Reinstall**.
-1. Azure will now uninstall the packet core instance and redeploy it with the same configuration. This process will take approximately 45 minutes. You can check the progress of the reinstall by selecting the notifications icon and then **More events in the activity log**.
+1. Azure will now uninstall the packet core instance and redeploy it with the same configuration. You can check the status of the reinstall by selecting **Refresh** and looking at the **Packet core installation state** field. Once the process is complete, you'll receive a notification with information on whether the reinstall was successful.
+
+ If the packet core reinstall failed, you can find more details about the reason for the failure by selecting the notifications icon and then **More events in the activity log**.
:::image type="content" source="media/reinstall-packet-core/reinstall-packet-core-status.png" alt-text="Screenshot of the Azure portal showing the reinstall packet core status in the Notifications screen.":::
purview Concept Policies Data Owner Action Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-data-owner-action-update.md
+
+ Title: Role definitions SQL Performance Monitor and SQL Security Auditor supported only from DevOps policies experience
+description: This guide discusses actions that have been retired from Data owner policies experience and are now supported only from the DevOps policies experience.
+++++ Last updated : 02/09/2023++
+# Role definitions SQL Performance Monitor and SQL Security Auditor supported only from DevOps policies experience
+
+This guide discusses actions that have been retired from the Data owner policies experience and are now supported only from the DevOps policies experience.
+
+## Important considerations
+The following two actions that were previously available in the Microsoft Purview Data owner policies experience are now supported only from the Microsoft Purview DevOps policies, which is a more focused experience.
+- SQL Performance Monitor
+- SQL Security Auditor
+
+If you currently have Data owner policies with these actions, we encourage you to give the DevOps policies experience a try. Creating new policies or editing existing policies that involve these two actions is now unsupported from the Data owner experience.
+
+## Next steps
+Check these concept guides:
+* [DevOps policies](concept-policies-devops.md)
purview Concept Policies Purview Account Delete https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/concept-policies-purview-account-delete.md
+
+ Title: Impact of deleting Microsoft Purview account on access policies
+description: This guide discusses the consequences of deleting a Microsoft Purview account on published access policies
+++++ Last updated : 02/09/2023++
+# Impact of deleting Microsoft Purview account on access policies
+
+## Important considerations
+Deleting a Microsoft Purview account that has active (that is, published) policies removes those policies. Any access to a data source or a dataset that was previously provisioned from Microsoft Purview gets revoked. This can lead to outages, where users or groups in your organization can't access critical data. Review the decision to delete the Microsoft Purview account with the people in the Policy Author role at the root collection level before proceeding. To find out who holds that role in the Microsoft Purview account, review the section on managing role assignments in this [guide](./how-to-create-and-manage-collections.md#add-roles-and-restrict-access-through-collections).
+
+Before deleting the Microsoft Purview account, it's advisable that you provision access for the users in your organization who need access to datasets by using an alternate mechanism or a different Microsoft Purview account. Then delete or unpublish any active policies in an orderly manner:
+* [Deleting DevOps policies](how-to-policies-devops-authoring-generic.md#delete-a-devops-policy) - You need to delete DevOps policies for them to be unpublished.
+* [Unpublishing Data Owner policies](how-to-policies-data-owner-authoring-generic.md#unpublish-a-policy).
+* [Deleting Self-service access policies](how-to-delete-self-service-data-access-policy.md) - You need to delete Self-service access policies for them to be unpublished.
+
+## Next steps
+Check these concept guides:
+* [DevOps policies](concept-policies-devops.md)
+* [Data owner access policies](concept-policies-data-owner.md)
+* [Self-service access policies](concept-self-service-data-access-policy.md)
purview How To Data Share Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-data-share-faq.md
You can access shared data from storage clients, Azure Synapse Analytics Spark a
## Does the recipient of the share need to be a user's email address or can I share data with an application?
-Through the UI, you can only share data with recipient's Azure sign in email.
+Through the UI, you can share data with a recipient's Azure sign-in email, or by using a service principal's object ID and tenant ID.
Through the API and SDK, you can also send an invitation to the object ID of a user principal or service principal. You can also optionally specify the tenant ID into which you want the share to be received.
purview How To Receive Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/how-to-receive-share.md
If you can't access shared data, it's likely due to the following reasons:
* After asset mapping is successful, it may take some time for the data to appear in the target data store. Try again in a few minutes. Likewise, after you delete asset mapping, it may take a few minutes for the data to disappear in the target data store.
* You're accessing shared data using a storage API version prior to February 2020. Only storage API version February 2020 and later are supported for accessing shared data. Ensure you're using the latest version of the storage SDK, PowerShell, CLI and Azure Storage Explorer.
* You're accessing shared data using an analytics tool which uses a storage API version prior to February 2020. You can access shared data from Azure Synapse Analytics Spark and Databricks. You won't be able to access shared data using Azure Data Factory, Power BI or AzCopy.
-* YouΓÇÖre accessing shared data using ACLs. ACL is not supported for accessing shared data. You can use RBAC instead.
## Next steps
purview Scan Data Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/scan-data-sources.md
In the steps below we'll be using [Azure Blob Storage](register-scan-azure-blob-
Depending on the amount of data in your data source, a scan can take some time to run, so here's how you can check on progress and see results when the scan is complete.
-1. Navigate to the _data source_ in the _Collection_ and select **View Details** to check the status of the scan
+1. You can view your scan from the collection or from the source itself.
+
+1. To view from the collection, navigate to your _Collection_ in the data map, and select the **Scans** button.
+
+ :::image type="content" source="media/scan-data-sources/select-scans.png" alt-text="Screenshot of the collection page with the scans button highlighted.":::
+
+1. Select your scan name to see details.
+
+ :::image type="content" source="media/scan-data-sources/select-scan-name.png" alt-text="Screenshot of the scans in the collection list with the most recent scan name highlighted.":::
+
+1. Or, you can navigate directly to the _data source_ in the _Collection_ and select **View Details** to check the status of the scan.
:::image type="content" source="media/scan-data-sources/register-blob-view-scan.png" alt-text="Screenshot of the data map with a source's view details button highlighted.":::
Depending on the amount of data in your data source, a scan can take some time t
After a scan is complete, it can be managed or run again.
-1. Select the **Scan name** to manage the scan
+1. Select the **Scan name** from either the collections list or the source page to manage the scan.
:::image type="content" source="media/scan-data-sources/register-blob-manage-scan.png" alt-text="Screenshot of a source details page with the scan name link highlighted.":::
reliability Availability Zones Service Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-service-support.md
Azure services that support availability zones, including zonal and zone-redunda
Three types of Azure services support availability zones: *zonal*, *zone-redundant*, and *always-available* services. You can combine all three of these approaches to architecture when you design your reliability strategy.

-- **Zonal services**: A resource can be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. Resiliency is self-architected by replicating applications and data to one or more zones within the region. Resources can be pinned to a specific zone. For example, virtual machines, managed disks, or standard IP addresses can be pinned to a specific zone, which allows for increased resiliency by having one or more instances of resources spread across zones.
-- **Zone-redundant services**: Resources are replicated or distributed across zones automatically. For example, zone-redundant services replicate the data across three zones so that a failure in one zone doesn't affect the high availability of the data.
+- **Zonal services**: A resource can be deployed to a specific, self-selected availability zone to achieve more stringent latency or performance requirements. Resiliency is self-architected by replicating applications and data to one or more zones within the region. Resources are aligned to a selected zone. For example, virtual machines, managed disks, or standard IP addresses can be aligned to the same zone, which allows for increased resiliency by having multiple instances of resources deployed to different zones.
+
+- **Zone-redundant services**: Resources are replicated or distributed across zones automatically. For example, zone-redundant services replicate the data across multiple zones so that a failure in one zone doesn't affect the high availability of the data.
- **Always-available services**: Always available across all Azure geographies and are resilient to zone-wide outages and region-wide outages. For a complete list of always-available services, also called non-regional services, in Azure, see [Products available by region](https://azure.microsoft.com/global-infrastructure/services/).

For more information on older-generation virtual machines, see [Previous generations of virtual machine sizes](../virtual-machines/sizes-previous-gen.md).
remote-rendering Commercial Ready https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/remote-rendering/tutorials/unity/commercial-ready/commercial-ready.md
This approach could be taken one step further by persisting an association betwe
For more information:
-* [Microsoft Power Automate Template for OneDrive to Azure Storage Replication](https://flow.microsoft.com/galleries/public/templates/2f90b5d3-029b-4e2e-ad37-1c0fe6d187fe/when-a-file-is-uploaded-to-onedrive-copy-it-to-azure-storage-container/)
+* [Microsoft Power Automate Template for OneDrive to Azure Storage Replication](https://make.powerautomate.com/galleries/public/templates/2f90b5d3-029b-4e2e-ad37-1c0fe6d187fe/when-a-file-is-uploaded-to-onedrive-copy-it-to-azure-storage-container/)
* [OneDrive File Storage API Overview](/graph/onedrive-concept-overview) ### Direct CAD access
search Index Sql Relational Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-sql-relational-data.md
Previously updated : 11/12/2021 Last updated : 02/08/2023 # How to model relational SQL data for import and indexing in Azure Cognitive Search
This rowset is now ready for import into Azure Cognitive Search.
> [!NOTE]
> This approach assumes that embedded JSON is under the [maximum column size limits of SQL Server](/sql/sql-server/maximum-capacity-specifications-for-sql-server).
- ## Use a complex collection for the "many" side of a one-to-many relationship
+## Use a complex collection for the "many" side of a one-to-many relationship
On the Azure Cognitive Search side, create an index schema that models the one-to-many relationship using nested JSON. The result set you created in the previous section generally corresponds to the index schema provided below (we cut some fields for brevity).
search Monitor Azure Cognitive Search Data Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/monitor-azure-cognitive-search-data-reference.md
Previously updated : 07/06/2022 Last updated : 02/08/2023
search Search Blob Metadata Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-blob-metadata-properties.md
Previously updated : 01/15/2022 Last updated : 02/08/2023 # Content metadata properties used in Azure Cognitive Search
search Search Blob Storage Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-blob-storage-integration.md
Previously updated : 01/14/2022 Last updated : 02/07/2023 # Search over Azure Blob Storage content
Inputs are your blobs, in a single container, in Azure Blob Storage. Blobs can b
Output is always an Azure Cognitive Search index, used for fast text search, retrieval, and exploration in client applications. In between is the indexing pipeline architecture itself. The pipeline is based on the *indexer* feature, discussed further on in this article.
-Once the index is created and populated, it exists independently of your blob container, but you can re-run indexing operations to refresh your index based on changed documents. Timestamp information on individual blobs is used for change detection. You can opt for either scheduled execution or on-demand indexing as the refresh mechanism.
+Once the index is created and populated, it exists independently of your blob container, but you can rerun indexing operations to refresh your index based on changed documents. Timestamp information on individual blobs is used for change detection. You can opt for either scheduled execution or on-demand indexing as the refresh mechanism.
## Resources used in a blob-search solution
Textual content of a document is extracted into a string field named "content".
## Use a Blob indexer for content extraction
-An *indexer* is a data-source-aware subservice in Cognitive Search, equipped with internal logic for sampling data, reading metadata data, retrieving data, and serializing data from native formats into JSON documents for subsequent import.
+An *indexer* is a data-source-aware subservice in Cognitive Search, equipped with internal logic for sampling data, reading and retrieving data and metadata, and serializing data from native formats into JSON documents for subsequent import.
Blobs in Azure Storage are indexed using the [blob indexer](search-howto-indexing-azure-blob-storage.md). You can invoke this indexer by using the **Azure search** command in Azure Storage, the **Import data** wizard, a REST API, or the .NET SDK. In code, you use this indexer by setting the type, and by providing connection information that includes an Azure Storage account along with a blob container. You can subset your blobs by creating a virtual directory, which you can then pass as a parameter, or by filtering on a file type extension.
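As a rough sketch of what "setting the type and providing connection information" looks like against the REST API, the following Python snippet creates a blob data source and an indexer that pulls from it. All names, the connection string, the container, the virtual directory, and the schedule are placeholders; treat the blob indexer documentation linked above as the authoritative reference for the request shapes.

```python
# Hedged sketch: create an "azureblob" data source and an indexer that pulls from it.
# Placeholders: service name, admin key, connection string, container, and index name.
import requests

endpoint = "https://<your-service>.search.windows.net"
headers = {"Content-Type": "application/json", "api-key": "<admin-api-key>"}
params = {"api-version": "2020-06-30"}

data_source = {
    "name": "blob-datasource",
    "type": "azureblob",
    "credentials": {"connectionString": "<storage-connection-string>"},
    # Optional "query" limits indexing to a virtual directory.
    "container": {"name": "my-container", "query": "my-virtual-directory"},
}
requests.post(f"{endpoint}/datasources", params=params, headers=headers, json=data_source).raise_for_status()

indexer = {
    "name": "blob-indexer",
    "dataSourceName": "blob-datasource",
    "targetIndexName": "my-index",  # assumes the target index already exists
    "schedule": {"interval": "PT2H"},  # optional: scheduled execution instead of on-demand
}
requests.post(f"{endpoint}/indexers", params=params, headers=headers, json=indexer).raise_for_status()
```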
search Search How To Alias https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-alias.md
POST /indexes/my-alias/docs/search?api-version=2021-04-30-preview
}
```
-If you expect that you may need to make updates to your index definition for your production indexes, you should use an alias rather than the index name for requests in your client-side application. Scenarios that require you to create a new index are outlined under these [rebuild conditions](search-howto-reindex.md#rebuild-conditions).
+If you expect to make updates to a production index, specify an alias rather than the index name in your client-side application. Scenarios that require an index rebuild are outlined in [Drop and rebuild an index](search-howto-reindex.md).
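As a hedged sketch of that pattern, the following Python snippet sends the same kind of query shown above, but addresses the alias instead of an index name. The service name, key, and selected fields are placeholders.

```python
# Query through the alias ("my-alias") exactly as you would query an index.
import requests

endpoint = "https://<your-service>.search.windows.net"
headers = {"Content-Type": "application/json", "api-key": "<query-api-key>"}
params = {"api-version": "2021-04-30-preview"}

body = {"search": "*", "select": "HotelName,Description"}
response = requests.post(f"{endpoint}/indexes/my-alias/docs/search", params=params, headers=headers, json=body)
response.raise_for_status()
print(response.json()["value"])
```

If the underlying index is later swapped behind the alias, this request doesn't change; only the alias mapping does.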
> [!NOTE]
> You can only use an alias with [document operations](/rest/api/searchservice/document-operations) or to get and update an index definition. Aliases can't be used to delete an index, can't be used with the Analyze Text API, and can't be used as the `targetIndexName` on an indexer.
search Search Howto Powerapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-powerapps.md
Last updated 12/07/2021
# Tutorial: Query a Cognitive Search index from Power Apps
-Leverage the rapid application development environment of Power Apps to create a custom app for your searchable content in Azure Cognitive Search.
+Use the rapid application development environment of Power Apps to create a custom app for your searchable content in Azure Cognitive Search.
In this tutorial, you learn how to:
If you don't have an Azure subscription, open a [free account](https://azure.mic
## 1 - Create a custom connector
-A connector in Power Apps is a data source connection. In this step, you'll create a custom connector to connect to a search index in the cloud.
+A connector in Power Apps is a data source connection. In this step, create a custom connector to connect to a search index in the cloud.
1. [Sign in](https://make.powerapps.com) to Power Apps.
-1. On the left, expand **Dataverse** > **Custom Connectors**.
-
+1. On the left, expand **... More**. Find, pin, and then select **Custom Connectors**.
+ :::image type="content" source="./media/search-howto-powerapps/1-2-custom-connector.png" alt-text="Custom connector menu" border="true"::: 1. Select **+ New custom connector**, and then select **Create from blank**. :::image type="content" source="./media/search-howto-powerapps/1-3-create-blank.png" alt-text="Create from blank menu" border="true":::
-1. Give your custom connector a name (for example, *AzureSearchQuery*), and then click **Continue**.
+1. Give your custom connector a name (for example, *AzureSearchQuery*), and then select **Continue**.
1. Enter information in the General Page:

   * Icon background color (for instance, #007ee5)
   * Description (for instance, "A connector to Azure Cognitive Search")
- * In the Host, you will need to enter your search service URL (such as `<yourservicename>.search.windows.net`)
- * For Base URL, simply enter "/"
+ * In the Host, enter your search service URL (such as `<yourservicename>.search.windows.net`)
+ * For Base URL, enter "/"
:::image type="content" source="./media/search-howto-powerapps/1-5-general-info.png" alt-text="General information dialogue" border="true":::
-1. In the Security Page, set *API Key* as the **Authentication Type**, set both the parameter label and parameter name to *api-key*. For **Parameter location**, select *Header* as shown below.
+1. In the Security Page, set *API Key* as the **Authentication Type**, set both the parameter label and parameter name to *api-key*. For **Parameter location**, select *Header* as shown in the following screenshot.
:::image type="content" source="./media/search-howto-powerapps/1-6-authentication-type.png" alt-text="Authentication type option" border="true":::
-1. In the Definitions Page, select **+ New Action** to create an action that will query the index. Enter the value "Query" for the summary and the name of the operation ID. Enter a description like *"Queries the search index"*.
+1. In the Definitions Page, select **+ New Action** to create an action that queries the index. Enter the value "Query" for the summary and the name of the operation ID. Enter a description like *"Queries the search index"*.
:::image type="content" source="./media/search-howto-powerapps/1-7-new-action.png" alt-text="New action options" border="true":::
A connector in Power Apps is a data source connection. In this step, you'll crea
* Select the verb `GET`
- * For the URL enter a sample query for your search index (`search=*` returns all documents, `$select=` lets you choose fields). The API version is required. Fully specified, a URL might look like this: `https://mydemo.search.windows.net/indexes/hotels-sample-index/docs?search=*&$select=HotelName,Description,Address/City&api-version=2020-06-30`
+ * For the URL, enter a sample query for your search index (`search=*` returns all documents, `$select=` lets you choose fields). The API version is required. Fully specified, a URL might look like this: `https://mydemo.search.windows.net/indexes/hotels-sample-index/docs?search=*&$select=HotelName,Description,Address/City&api-version=2020-06-30`
* For Headers, type `Content-Type`. You'll set the value to `application/json` in a later step.
A connector in Power Apps is a data source connection. In this step, you'll crea
:::image type="content" source="./media/search-howto-powerapps/1-8-1-import-from-sample.png" alt-text="Import from sample" border="true":::
-1. Click **Import** to auto-fill the Request. Complete setting the parameter metadata by clicking the **...** symbol next to each of the parameters. Click **Back** to return to the Request page after each parameter update.
+1. Select **Import** to auto-fill the Request. Complete setting the parameter metadata by clicking the **...** symbol next to each of the parameters. Select **Back** to return to the Request page after each parameter update.
:::image type="content" source="./media/search-howto-powerapps/1-8-2-import-from-sample.png" alt-text="Import from sample dialogue" border="true":::
A connector in Power Apps is a data source connection. In this step, you'll crea
- {name: Content-Type, in: header, required: false, type: string} ```
-1. Switch back to the wizard and return to the **3. Request** step. Scroll down to the Response section. Click **"Add default response"**. This is critical because it will help Power Apps understand the schema of the response.
+1. Switch back to the wizard and return to the **3. Request** step. Scroll down to the Response section. Select **"Add default response"**. This step is critical because it helps Power Apps understand the schema of the response.
-1. Paste a sample response. An easy way to capture a sample response is through Search Explorer in the Azure portal. In Search Explorer, you should enter the same query as you did for the request, but add **$top=2** to constrain results to just two documents: : `search=*&$select=HotelName,Description,Address/City&$top=2`.
+1. Paste a sample response. An easy way to capture a sample response is through Search Explorer in the Azure portal. In Search Explorer, you should enter the same query as you did for the request, but add **$top=2** to constrain results to just two documents: `search=*&$select=HotelName,Description,Address/City&$top=2`.
Power Apps only needs a few results to detect the schema. You can copy the following response into the wizard now, assuming you're using the hotels-sample-index.
A connector in Power Apps is a data source connection. In this step, you'll crea
> [!TIP] > There is a character limit to the JSON response you can enter, so you may want to simplify the JSON before pasting it. The schema and format of the response is more important than the values themselves. For example, the Description field could be simplified to just the first sentence.
-1. Click **Import** to add the default response.
+1. Select **Import** to add the default response.
-1. Click **Create connector** on the top right to save the definition.
+1. Select **Create connector** on the top right to save the definition.
-1. Click **Close** to close the connector.
+1. Select **Close** to close the connector.
## 2 - Test the connection
-When the connector is first created, you need to reopen it from the Custom Connectors list in order to test it. Later, if you make additional updates, you can test from within the wizard.
+When the connector is first created, you need to reopen it from the Custom Connectors list in order to test it. Later, if you make more updates, you can test from within the wizard.
-You will need a [query API key](search-security-api-keys.md#find-existing-keys) for this task. Each time a connection is created, whether for a test run or inclusion in an app, the connector needs the query API key used for connecting to Azure Cognitive Search.
+You'll need a [query API key](search-security-api-keys.md#find-existing-keys) for this task. Each time a connection is created, whether for a test run or inclusion in an app, the connector needs the query API key used for connecting to Azure Cognitive Search.
-1. On the far left, click **Custom Connectors**.
+1. On the far left, select **Custom Connectors**.
1. Find your connector in the list (in this tutorial, it's "AzureSearchQuery").
You will need a [query API key](search-security-api-keys.md#find-existing-keys)
1. Select **Edit** on the top right.
-1. Select **4. Test** to open the test page.
+1. Select **5. Test** to open the test page.
-1. In Test Operation, click **+ New Connection**.
+1. In Test Operation, select **+ New Connection**.
1. Enter a query API key. This is an Azure Cognitive Search query for read-only access to an index. You can [find the key](search-security-api-keys.md#find-existing-keys) in the Azure portal.
-1. In Operations, click the **Test operation** button. If you are successful you should see a 200 status, and in the body of the response you should see JSON that describes the search results.
+1. In Operations, select the **Test operation** button. If you're successful you should see a 200 status, and in the body of the response you should see JSON that describes the search results.
:::image type="content" source="./media/search-howto-powerapps/1-11-2-test-connector.png" alt-text="JSON response" border="true":::
+If the test fails, recheck the inputs. In particular, revisit the sample response and make sure it was created properly. The connector definition should show the expected items for the response.
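One way to rule out the search service itself is to issue the same request outside Power Apps. The following Python sketch assumes the hotels-sample-index, a query API key, and a placeholder service name; if this call succeeds but the connector test fails, the problem is in the connector definition rather than the service.

```python
# Reissue the connector's GET request directly, using the query API key in the api-key header.
import requests

service = "<your-service>"   # placeholder service name
api_key = "<query-api-key>"  # query key, not an admin key

url = (
    f"https://{service}.search.windows.net/indexes/hotels-sample-index/docs"
    "?search=*&$select=HotelName,Description,Address/City&$top=2"
    "&api-version=2020-06-30"
)
response = requests.get(url, headers={"api-key": api_key, "Content-Type": "application/json"})
print(response.status_code)  # expect 200, matching the connector test
print(response.json())
```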
+
## 3 - Visualize results

In this step, create a Power App with a search box, a search button, and a display area for the results. The Power App will connect to the recently created custom connector to get the data from Azure Search.
In this step, create a Power App with a search box, a search button, and a displ
:::image type="content" source="./media/search-howto-powerapps/2-1-create-canvas.png" alt-text="Create canvas app" border="true":::
-1. Select the type of application. For this tutorial, create a **Blank App** with the **Phone Layout**. The **Power Apps Studio** will appear.
+1. Select the type of application. For this tutorial, create a **Blank App** with the **Phone Layout**. Give the app a name, such as "Hotel Finder". Select **Create**. The **Power Apps Studio** appears.
-1. Once in the studio, select the **Data Sources** tab, and click on the new Connector you have just created. In our case, it is called *AzureSearchQuery*. Click **Add a connection**.
+1. In the studio, select the **Data Sources** tab, select **+ Add data**, and then find the new Connector you have just created. In this tutorial, it's called *AzureSearchQuery*. Select **Add a connection**.
Enter the query API key.
In this step, create a Power App with a search box, a search button, and a displ
:::image type="content" source="./media/search-howto-powerapps/2-6-search-button-event.png" alt-text="Button OnSelect" border="true":::
- This action will cause the button to update a new collection called *azResult* with the result of the search query, using the text in the *txtQuery* text box as the query term.
+ This action causes the button to update a new collection called *azResult* with the result of the search query, using the text in the *txtQuery* text box as the query term.
> [!NOTE]
> Try this if you get a formula syntax error "The function 'ClearCollect' has some invalid functions":
When you're working in your own subscription, it's a good idea at the end of a p
You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
+If you're using a free service, remember that you're limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
## Next steps
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md
Previously updated : 01/10/2022 Last updated : 02/07/2023 # Drop and rebuild an index in Azure Cognitive Search
-This article explains how to drop and rebuild an Azure Cognitive Search index, the circumstances under which rebuilds are required, and recommendations for mitigating the impact of rebuilds on ongoing query requests. If you frequently have to rebuild your search index, we recommend using [index aliases](search-how-to-alias.md) to make it easier to swap which index your application is pointing to.
+This article explains how to drop and rebuild an Azure Cognitive Search index. It describes the circumstances under which rebuilds are required, and provides recommendations for mitigating the impact of rebuilds on ongoing query requests. If you have to rebuild frequently, we recommend using [index aliases](search-how-to-alias.md) to make it easier to swap which index your application is pointing to.
-A search index is a collection of physical folders and field-based inverted indexes of your content, distributed in shards across the number of partitions allocated to your search index. In Azure Cognitive Search, you cannot drop and recreate individual fields. If you want to fully rebuild a field, all field storage must be deleted, recreated based on an existing or revised index schema, and then repopulated with data pushed to the index or pulled from external sources.
+During active development, it's common to drop and rebuild indexes when you're iterating over index design. Most developers work with a small representative sample of their data to facilitate this process.
-It's common to drop and rebuild indexes during development when you are iterating over index design. Most developers work with a small representative sample of their data to facilitate this process.
+## Modifications requiring a rebuild
-## Rebuild conditions
+The following table lists the modifications that require an index rebuild.
-The following table enumerates the conditions under which a rebuild is required.
-
-| Condition | Description |
+| Action | Description |
|--|-|
+| Delete a field | To physically remove all traces of a field, you have to rebuild the index. When an immediate rebuild isn't practical, you can modify application code to disable access to the "deleted" field or use the [$select query parameter](search-query-odata-select.md) to choose which fields are represented in the result set. Physically, the field definition and contents remain in the index until the next rebuild, when you apply a schema that omits the field in question. |
| Change a field definition | Revising a field name, data type, or specific [index attributes](/rest/api/searchservice/create-index) (searchable, filterable, sortable, facetable) requires a full rebuild. |
| Assign an analyzer to a field | [Analyzers](search-analyzers.md) are defined in an index and then assigned to fields. You can add a new analyzer definition to an index at any time, but you can only *assign* an analyzer when the field is created. This is true for both the **analyzer** and **indexAnalyzer** properties. The **searchAnalyzer** property is an exception (you can assign this property to an existing field). |
-| Update or delete an analyzer definition in an index | You cannot delete or change an existing analyzer configuration (analyzer, tokenizer, token filter, or char filter) in the index unless you rebuild the entire index. |
+| Update or delete an analyzer definition in an index | You can't delete or change an existing analyzer configuration (analyzer, tokenizer, token filter, or char filter) in the index unless you rebuild the entire index. |
| Add a field to a suggester | If a field already exists and you want to add it to a [Suggesters](index-add-suggesters.md) construct, you must rebuild the index. |
-| Delete a field | To physically remove all traces of a field, you have to rebuild the index. When an immediate rebuild is not practical, you can modify application code to disable access to the "deleted" field or use the [$select query parameter](search-query-odata-select.md) to choose which fields are represented in the result set. Physically, the field definition and contents remain in the index until the next rebuild, when you apply a schema that omits the field in question. |
-| Switch tiers | If you require more capacity, there is no in-place upgrade in the Azure portal. A new service must be created, and indexes must be built from scratch on the new service. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples). This app will back up your index to a series of JSON files, and then recreate the index in a search service you specify.|
+| Switch tiers | In-place upgrades aren't supported. If you require more capacity, you must create a new service and rebuild your indexes from scratch. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure Cognitive Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-samples). This app will back up your index to a series of JSON files, and then recreate the index in a search service you specify.|
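As the "Delete a field" row notes, you can hide a logically deleted field from callers until the next rebuild by returning only the fields you still want. A minimal sketch using the `select` parameter of the azure-search-documents Python SDK, with hypothetical endpoint, index, and field names, might look like this:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Hypothetical endpoint, index, and key.
client = SearchClient(
    endpoint="https://my-search-service.search.windows.net",
    index_name="hotels-sample-index",
    credential=AzureKeyCredential("<query-api-key>"),
)

# Return only the fields callers should still see; the logically "deleted"
# field is simply not selected, even though it still exists physically
# until the next rebuild.
results = client.search(search_text="*", select=["HotelId", "HotelName", "Rating"])
for result in results:
    print(result)
```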
-## Update conditions
+## Modifications with no rebuild requirement
-Many other modifications can be made without impacting existing physical structures. Specifically, the following changes do *not* require an index rebuild. For these changes, you can [update an index definition](/rest/api/searchservice/update-index) with your changes.
+Many other modifications can be made without impacting existing physical structures. Specifically, the following changes don't require an index rebuild. For these changes, you can [update an existing index definition](/rest/api/searchservice/update-index) with your changes.
+ Add a new field + Set the **retrievable** attribute on an existing field
During development, the index schema changes frequently. You can plan for it by
For applications already in production, we recommend creating a new index that runs side by side an existing index to avoid query downtime. Your application code provides redirection to the new index.
-1. Determine whether a rebuild is required. If you are just adding fields, or changing some part of the index that is unrelated to fields, you might be able to simply [update the definition](/rest/api/searchservice/update-index) without deleting, recreating, and fully reloading it.
+1. Determine whether a rebuild is required. If you're just adding fields, or changing some part of the index that is unrelated to fields, you might be able to simply [update the definition](/rest/api/searchservice/update-index) without deleting, recreating, and fully reloading it.
1. [Get an index definition](/rest/api/searchservice/get-index) in case you need it for future reference.
-1. [Drop the existing index](/rest/api/searchservice/delete-index), assuming you are not running new and old indexes side by side.
+1. [Drop the existing index](/rest/api/searchservice/delete-index), assuming you aren't running new and old indexes side by side.
Any queries targeting that index are immediately dropped. Remember that deleting an index is irreversible, destroying physical storage for the fields collection and other constructs. Pause to think about the implications before dropping it.
For applications already in production, we recommend creating a new index that r
1. [Load the index with documents](/rest/api/searchservice/addupdate-or-delete-documents) from an external source.
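A minimal Python sketch of the steps above, using the azure-search-documents SDK with a hypothetical endpoint, admin key, index name, and field list, might look like this:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
)

# Hypothetical endpoint, admin key, and index name.
endpoint = "https://my-search-service.search.windows.net"
credential = AzureKeyCredential("<admin-api-key>")
index_name = "hotels-sample-index"

index_client = SearchIndexClient(endpoint=endpoint, credential=credential)

# 1. Save the current definition for future reference.
old_definition = index_client.get_index(index_name)

# 2. Drop the existing index (irreversible; all field storage is deleted).
index_client.delete_index(index_name)

# 3. Recreate the index from a revised schema (fields shown here are examples).
revised_index = SearchIndex(
    name=index_name,
    fields=[
        SimpleField(name="HotelId", type=SearchFieldDataType.String, key=True),
        SearchableField(name="HotelName"),
        SimpleField(name="Rating", type=SearchFieldDataType.Int32, filterable=True, sortable=True),
    ],
)
index_client.create_index(revised_index)

# 4. Reload documents from your external source.
search_client = SearchClient(endpoint=endpoint, index_name=index_name, credential=credential)
search_client.upload_documents(documents=[
    {"HotelId": "1", "HotelName": "Stay-Kay City Hotel", "Rating": 4},  # sample document
])
```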
-When you create the index, physical storage is allocated for each field in the index schema, with an inverted index created for each searchable field. Fields that are not searchable can be used in filters or expressions, but do not have inverted indexes and are not full-text or fuzzy searchable. On an index rebuild, these inverted indexes are deleted and recreated based on the index schema you provide.
+When you create the index, physical storage is allocated for each field in the index schema, with an inverted index created for each searchable field. Fields that aren't searchable can be used in filters or expressions, but don't have inverted indexes and aren't full-text or fuzzy searchable. On an index rebuild, these inverted indexes are deleted and recreated based on the index schema you provide.
When you load the index, each field's inverted index is populated with all of the unique, tokenized words from each document, with a map to corresponding document IDs. For example, when indexing a hotels data set, an inverted index created for a City field might contain terms for Seattle, Portland, and so forth. Documents that include Seattle or Portland in the City field would have their document ID listed alongside the term. On any [Add, Update or Delete](/rest/api/searchservice/addupdate-or-delete-documents) operation, the terms and document ID list are updated accordingly.

## Balancing workloads
-Indexing does not run in the background, but the search service will balance any indexing jobs against ongoing queries. During indexing, you can [monitor query requests](search-monitor-queries.md) in the portal to ensure queries are completing in a timely manner.
+Indexing doesn't run in the background, but the search service will balance any indexing jobs against ongoing queries. During indexing, you can [monitor query requests](search-monitor-queries.md) in the portal to ensure queries are completing in a timely manner.
If indexing workloads introduce unacceptable levels of query latency, conduct [performance analysis](search-performance-analysis.md) and review these [performance tips](search-performance-tips.md) for potential mitigation.
search Search Query Odata Collection Operators https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-odata-collection-operators.md
Previously updated : 09/16/2021
-translation.priority.mt:
- - "de-de"
- - "es-es"
- - "fr-fr"
- - "it-it"
- - "ja-jp"
- - "ko-kr"
- - "pt-br"
- - "ru-ru"
- - "zh-cn"
- - "zh-tw"
Last updated : 02/07/2023+ # OData collection operators in Azure Cognitive Search - `any` and `all`
-When writing an [OData filter expression](query-odata-filter-orderby-syntax.md) to use with Azure Cognitive Search, it is often useful to filter on collection fields. You can achieve this using the `any` and `all` operators.
+When writing an [OData filter expression](query-odata-filter-orderby-syntax.md) to use with Azure Cognitive Search, it's often useful to filter on collection fields. You can achieve this using the `any` and `all` operators.
## Syntax
Match documents where the `rooms` field is empty:
```text
not rooms/any()
```
-Match documents where for all rooms, the `rooms/amenities` field contains "tv" and `rooms/baseRate` is less than 100:
+Match documents where (for all rooms) the `rooms/amenities` field contains "tv", and `rooms/baseRate` is less than 100:
```text
rooms/all(room: room/amenities/any(a: a eq 'tv') and room/baseRate lt 100.0)
```
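The same expression can be passed as the `filter` parameter of a query. A minimal sketch using the azure-search-documents Python SDK, assuming a hypothetical hotels-style index with a `rooms` complex collection, might look like this:

```python
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

# Hypothetical endpoint, index, and key; the 'rooms' collection mirrors
# the hotels example used above.
client = SearchClient(
    endpoint="https://my-search-service.search.windows.net",
    index_name="hotels-sample-index",
    credential=AzureKeyCredential("<query-api-key>"),
)

# Same 'all' + nested 'any' expression, passed as the OData $filter.
odata_filter = "rooms/all(room: room/amenities/any(a: a eq 'tv') and room/baseRate lt 100.0)"
results = client.search(search_text="*", filter=odata_filter)
for doc in results:
    print(doc["HotelName"])  # hypothetical field name
```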
search Service Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-configure-firewall.md
Previously updated : 01/31/2022 Last updated : 02/08/2023 # Configure an IP firewall for Azure Cognitive Search
You can set IP rules in the Azure portal, as described in this article, or use t
1. Set **Public Network Access** to **Selected Networks**. If your connectivity is set to **Disabled**, you can only access your search service via a [private endpoint](service-create-private-endpoint.md).
- :::image type="content" source="media/service-configure-firewall/azure-portal-firewall.png" alt-text="Screenshot showing how to configure the IP firewall in the Azure portal" border="true":::
+ :::image type="content" source="media/service-configure-firewall/azure-portal-firewall.png" alt-text="Screenshot showing how to configure the IP firewall in the Azure portal." border="true":::
The Azure portal provides the ability to specify IP addresses and IP address ranges in the CIDR format. An example of CIDR notation is 8.8.8.0/24, which represents the IPs that range from 8.8.8.0 to 8.8.8.255.
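If you want to double-check what a CIDR range covers before adding it as an IP rule, Python's standard `ipaddress` module can expand the example above:

```python
import ipaddress

# The CIDR range used as the example above.
network = ipaddress.ip_network("8.8.8.0/24")

print(network.num_addresses)         # 256 addresses
print(network[0], "-", network[-1])  # 8.8.8.0 - 8.8.8.255
print(ipaddress.ip_address("8.8.8.42") in network)  # True
```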
sentinel Connect Logstash Data Connection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md
The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to y
The Microsoft Sentinel output plugin is available in the Logstash collection.

-- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the **[microsoft-logstash-output-azure-loganalytics](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-logstash-output-plugin)** plugin.
+- Follow the instructions in the Logstash [Working with plugins](https://www.elastic.co/guide/en/logstash/current/working-with-plugins.html) document to install the **[microsoft-sentinel-logstash-output-plugin](https://github.com/Azure/Azure-Sentinel/tree/master/DataConnectors/microsoft-sentinel-logstash-output-plugin)** plugin.
- If your Logstash system does not have Internet access, follow the instructions in the Logstash [Offline Plugin Management](https://www.elastic.co/guide/en/logstash/current/offline-plugins.html) document to prepare and use an offline plugin pack. (This will require you to build another Logstash system with Internet access.)

### Create a sample file
After you retrieve the required values:
#### Optional configuration
-|Field |How to retrieve |Default value |
+|Field |Description |Default value |
||||
|`key_names` |An array of strings. Provide this field if you want to send a subset of the columns to Log Analytics. |None (field is empty) |
|`plugin_flush_interval` |Defines the maximal time difference (in seconds) between sending two messages to Log Analytics. |`5` |
|`retransmission_time` |Sets the amount of time in seconds for retransmitting messages once sending failed. |`10` |
|`compress_data` |When this field is `True`, the event data is compressed before using the API. Recommended for high throughput pipelines. |`False` |
+|`proxy` |Specify which proxy URL to use for all API calls. |None (field is empty) |
#### Example: Output plugin configuration section
output {
  dcr_stream_name => "<enter your stream name here> "
  create_sample_file => false
  sample_file_path => "c:\\temp"
+ proxy => "http://proxy.example.com"
  }
}
```
If you are not seeing any data in this log file, generate and send some events l
In this article, you learned how to use Logstash to connect external data sources to Microsoft Sentinel. To learn more about Microsoft Sentinel, see the following articles:

- Learn how to [get visibility into your data and potential threats](get-visibility.md).
-- Get started detecting threats with Microsoft Sentinel, using [built-in](detect-threats-built-in.md) or [custom](detect-threats-custom.md) rules.
+- Get started detecting threats with Microsoft Sentinel, using [built-in](detect-threats-built-in.md) or [custom](detect-threats-custom.md) rules.
service-bus-messaging Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/private-link-service.md
If you already have an existing namespace, you can create a private endpoint by
[!INCLUDE [service-bus-trusted-services](./includes/service-bus-trusted-services.md)]
+To allow trusted services to access your namespace, switch to the **Public Access** tab on the **Networking** page, and select **Yes** for **Allow trusted Microsoft services to bypass this firewall?**.
++
## Add a private endpoint using PowerShell

The following example shows you how to use Azure PowerShell to create a private endpoint connection to a Service Bus namespace.
service-bus-messaging Service Bus Migrate Standard Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-migrate-standard-premium.md
After the migration is committed, the connection string that pointed to the stan
The sender and receiver applications will disconnect from the standard Namespace and reconnect to the premium namespace automatically.
+If you're using the ARM ID for configuration rather than a connection string (for example, as a destination for an Event Grid subscription), you need to update the ARM ID to be that of the premium namespace.
+
### What do I do after the standard to premium migration is complete?

The standard to premium migration ensures that the entity metadata such as topics, subscriptions, and filters are copied from the standard namespace to the premium namespace. The message data that was committed to the standard namespace isn't copied from the standard namespace to the premium namespace.
static-web-apps Apis Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/apis-functions.md
The following table contrasts the differences between using managed and existing
| Feature | Managed Functions | Bring your own Functions |
||||
| Access to Azure Functions [triggers](../azure-functions/functions-triggers-bindings.md#supported-bindings) | HTTP only | All |
-| Supported Azure Functions [runtimes](../azure-functions/supported-languages.md#languages-by-runtime-version)<sup>1</sup> | Node.js 12<br>Node.js 14<br>Node.js 16<br>.NET Core 3.1<br>.NET 6.0<br>.NET 7.0<br>Python 3.8<br>Python 3.9 | All |
+| Supported Azure Functions [runtimes](../azure-functions/supported-languages.md#languages-by-runtime-version)<sup>1</sup> | Node.js 12<br>Node.js 14<br>Node.js 16<br>Node.js 18 (public preview)<br>.NET Core 3.1<br>.NET 6.0<br>.NET 7.0<br>Python 3.8<br>Python 3.9<br>Python 3.10 (public preview) | All |
| Supported Azure Functions [hosting plans](../azure-functions/functions-scale.md) | Consumption | Consumption<br>Premium<br>Dedicated |
| [Integrated security](user-information.md) with direct access to user authentication and role-based authorization data | ✔ | ✔ |
| [Routing integration](./configuration.md?#routes) that makes the `/api` route available to the web app securely without requiring custom CORS rules. | ✔ | ✔ |
In addition to the Static Web Apps API [constraints](apis-overview.md#constraint
| Managed functions | Bring your own functions |
|||
-| <ul><li>Triggers are limited to [HTTP](../azure-functions/functions-bindings-http-webhook.md).</li><li>The Azure Functions app must either be in Node.js 12, Node.js 14, Node.js 16, .NET Core 3.1, .NET 6.0, Python 3.8, or Python 3.9.</li><li>Some application settings are managed by the service, therefore the following prefixes are reserved by the runtime:<ul><li>*APPSETTING\_, AZUREBLOBSTORAGE\_, AZUREFILESSTORAGE\_, AZURE_FUNCTION\_, CONTAINER\_, DIAGNOSTICS\_, DOCKER\_, FUNCTIONS\_, IDENTITY\_, MACHINEKEY\_, MAINSITE\_, MSDEPLOY\_, SCMSITE\_, SCM\_, WEBSITES\_, WEBSITE\_, WEBSOCKET\_, AzureWeb*</li></ul></li><li>Some application tags are internally used by the service. Therefore, the following tags are reserved:<ul><li> *AccountId, EnvironmentId, FunctionAppId*.</li></ul></li></ul> | <ul><li>You are responsible to manage the Functions app deployment.</li></ul> |
+| <ul><li>Triggers are limited to [HTTP](../azure-functions/functions-bindings-http-webhook.md).</li><li>The Azure Functions app must be in Node.js 12, Node.js 14, Node.js 16, Node.js 18 (public preview), .NET Core 3.1, .NET 6.0, Python 3.8, Python 3.9, or Python 3.10 (public preview).</li><li>Some application settings are managed by the service; therefore, the following prefixes are reserved by the runtime:<ul><li>*APPSETTING\_, AZUREBLOBSTORAGE\_, AZUREFILESSTORAGE\_, AZURE_FUNCTION\_, CONTAINER\_, DIAGNOSTICS\_, DOCKER\_, FUNCTIONS\_, IDENTITY\_, MACHINEKEY\_, MAINSITE\_, MSDEPLOY\_, SCMSITE\_, SCM\_, WEBSITES\_, WEBSITE\_, WEBSOCKET\_, AzureWeb*</li></ul></li><li>Some application tags are internally used by the service. Therefore, the following tags are reserved:<ul><li> *AccountId, EnvironmentId, FunctionAppId*.</li></ul></li></ul> | <ul><li>You're responsible for managing the Functions app deployment.</li></ul> |
## Next steps
static-web-apps Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/custom-domain.md
The following table includes links to articles that demonstrate how to configure
<sup>1</sup> Some registrars like GoDaddy and Google don't support domain records that affect how you configure your apex domain. Consider using [Azure DNS](custom-domain-azure-dns.md) with these registrars to set up your apex domain.
+> [!NOTE]
+> Adding a custom domain to a [preview environment](preview-environments.md) is not supported. Unicode domains, including Punycode domains and the `xn--` prefix, are also not supported.
+
## About domains

Setting up an apex domain is a common scenario to configure once your domain name is set up. Creating an apex domain is achieved by configuring an `ALIAS` or `ANAME` record or through `CNAME` flattening. Some domain registrars like GoDaddy and Google don't support these DNS records. If your domain registrar doesn't support all the DNS records you need, consider using [Azure DNS to configure your domain](custom-domain-azure-dns.md).
static-web-apps Quotas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/quotas.md
Previously updated : 1/24/2023 Last updated : 02/09/2023
The following quotas exist for Azure Static Web Apps.
| Authorization (built-in roles) | Unlimited end-users that may authenticate with built-in `authenticated` role | Unlimited end-users that may authenticate with built-in `authenticated` role |
| Authorization (custom roles) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-custom.md#manage-roles) | Maximum of 25 end-users that may belong to custom roles via [invitations](authentication-custom.md#manage-roles), or unlimited end-users that may be assigned custom roles via [serverless function](authentication-custom.md#manage-roles) |
| Request Size Limit | 30 MB | 30 MB |
+| File count | 15,000 | 15,000|
## GitHub storage
See the following resources for more detail:
## Next steps

-- [Overview](overview.md)
+- [Azure Static Web Apps overview](overview.md)
storage Calculate Blob Count Size https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/calculate-blob-count-size.md
After you create your Azure Synapse workspace, do the following steps.
## Next steps

- [Use Azure Storage blob inventory to manage blob data](blob-inventory.md)
+- [Tutorial: Calculate container statistics by using Databricks](storage-blob-calculate-container-statistics-databricks.md)
- [Calculate the total billing size of a blob container](../scripts/storage-blobs-container-calculate-billing-size-powershell.md)
storage Lifecycle Management Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-overview.md
Lifecycle management supports tiering and deletion of current versions, previous
| Action | Current Version | Snapshot | Previous Versions |
|--|--|||
| tierToCool | Supported for `blockBlob` | Supported | Supported |