Updates from: 01/14/2023 02:22:21
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory How To Mfa Number Match https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/how-to-mfa-number-match.md
description: Learn how to use number matching in MFA notifications
Previously updated : 01/06/2023 Last updated : 01/13/2023
AD FS adapter will require number matching on supported versions of Windows Server:
| Version | Update |
| --- | --- |
| Windows Server 2022 | [November 9, 2021—KB5007205 (OS Build 20348.350)](https://support.microsoft.com/topic/november-9-2021-kb5007205-os-build-20348-350-af102e6f-cc7c-4cd4-8dc2-8b08d73d2b31) |
| Windows Server 2019 | [November 9, 2021—KB5007206 (OS Build 17763.2300)](https://support.microsoft.com/topic/november-9-2021-kb5007206-os-build-17763-2300-c63b76fa-a9b4-4685-b17c-7d866bb50e48) |
+| Windows Server 2016 | [October 12, 2021—KB5006669 (OS Build 14393.4704)](https://support.microsoft.com/topic/october-12-2021-kb5006669-os-build-14393-4704-bcc95546-0768-49ae-bec9-240cc59df384) |
### NPS extension
active-directory Howto Authentication Methods Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-authentication-methods-activity.md
Previously updated : 07/13/2021 Last updated : 01/12/2023
The registration details report shows the following information for each user:
- SSPR Registered (Registered, Not Registered)
- SSPR Enabled (Enabled, Not Enabled)
- SSPR Capable (Capable, Not Capable)
-- Methods registered (Email, Mobile Phone, Alternative Mobile Phone, Office Phone, Microsoft Authenticator Push, Software One Time Passcode, FIDO2, Security Key, Security questions)
+- Methods registered (Email, Mobile Phone, Alternative Mobile Phone, Office Phone, Microsoft Authenticator Push, Software One Time Passcode, FIDO2, Security Key, Security questions, Hardware OATH token)
![Screenshot of user registration details](media/how-to-authentication-methods-usage-insights/registration-details.png)
## Limitations

- The data in the report is not updated in real time and may reflect a latency of up to a few hours.
-- The **PhoneAppNotification** or **PhoneAppOTP** methods that a user might have configured are not displayed in the dashboard.
+- The **PhoneAppNotification** or **PhoneAppOTP** methods that a user might have configured are not displayed in the dashboard on **Azure AD Authentication methods - Policies**.
## Next steps
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md
This article walks you through the installation process for the Azure Active Directory (Azure AD) Connect provisioning agent and how to initially configure it in the Azure portal.
->[!IMPORTANT]
->The following installation instructions assume that all the [prerequisites](how-to-prerequisites.md) were met.
+> [!IMPORTANT]
+> The following installation instructions assume that you've met all the [prerequisites](how-to-prerequisites.md).
>[!NOTE]
->This article deals with installing the provisioning agent by using the wizard. For information on installing the Azure AD Connect provisioning agent by using a command-line interface (CLI), see [Install the Azure AD Connect provisioning agent by using a CLI and PowerShell](how-to-install-pshell.md).
+>This article deals with installing the provisioning agent by using the wizard. For information about installing the Azure AD Connect provisioning agent by using a CLI, see [Install the Azure AD Connect provisioning agent by using a CLI and PowerShell](how-to-install-pshell.md).
-For more information and an example, see the following video.
+For more information and an example, view the following video:
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWK5mR]

## Group Managed Service Accounts
-A Group Managed Service Account (gMSA) is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators. It also extends this functionality over multiple servers. Azure AD Connect cloud sync supports and recommends the use of a Group Managed Service Account for running the agent. For more information on a Group Managed Service Account, see [Group Managed Service Accounts](how-to-prerequisites.md#group-managed-service-accounts).
+A group Managed Service Account (gMSA) is a managed domain account that provides automatic password management, simplified service principal name (SPN) management, and the ability to delegate the management to other administrators. A gMSA also extends this functionality over multiple servers. Azure AD Connect cloud sync supports and recommends the use of a gMSA for running the agent. For more information, see [Group Managed Service Accounts](how-to-prerequisites.md#group-managed-service-accounts).
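The installation wizard can create the gMSA for you, but if you'd rather pre-create one yourself, a minimal sketch with the ActiveDirectory PowerShell module follows. The account and server names are hypothetical, and your forest may already have a KDS root key:

```powershell
# One-time per forest: create a KDS root key.
# Backdating by 10 hours skips the replication wait and is suitable for lab use only.
Add-KdsRootKey -EffectiveTime ((Get-Date).AddHours(-10))

# Create the gMSA and let the provisioning agent server retrieve its password.
# "CloudSyncGMSA" and "AGENTSERVER$" are placeholder names.
New-ADServiceAccount -Name "CloudSyncGMSA" `
    -DNSHostName "CloudSyncGMSA.contoso.com" `
    -PrincipalsAllowedToRetrieveManagedPassword "AGENTSERVER$"
```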
-### Upgrade an existing agent to use the gMSA
-To upgrade an existing agent to use the Group Managed Service Account created during installation, update the agent service to the latest version by running AADConnectProvisioningAgent.msi. Now run through the installation wizard again and provide the credentials to create the account when prompted.
+### Update an existing agent to use the gMSA
+To update an existing agent to use the Group Managed Service Account created during installation, upgrade the agent service to the latest version by running *AADConnectProvisioningAgent.msi*. Now run through the installation wizard again and provide the credentials to create the account when you're prompted to do so.
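As a rough illustration, the in-place upgrade is just a re-run of the installer package; this sketch assumes the downloaded package sits in the current directory:

```powershell
# Re-run the latest installer package to upgrade the agent service in place
msiexec /i .\AADConnectProvisioningAgent.msi
```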
## Install the agent

[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
-## Verify agent installation
+## Verify the agent installation
[!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)]

>[!IMPORTANT]
->The agent has been installed, but it must be configured and enabled before it will start synchronizing users. To configure a new agent, see [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md).
+> After you've installed the agent, you must configure and enable it before it will start synchronizing users. To configure a new agent, see [Create a new configuration for Azure AD Connect cloud sync](how-to-configure.md).
## Enable password writeback in Azure AD Connect cloud sync
-To use password writeback and enable the self-service password reset (SSPR) service to detect the cloud sync agent, you need to use the `Set-AADCloudSyncPasswordWritebackConfiguration` cmdlet and tenant's global administrator credentials:
+To use *password writeback* and enable the self-service password reset (SSPR) service to detect the cloud sync agent, use the `Set-AADCloudSyncPasswordWritebackConfiguration` cmdlet and the tenant's global administrator credentials:
```powershell
# Load the cloud sync PowerShell module that ships with the provisioning agent
Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll"

# Enable password writeback; you're prompted for the tenant's global administrator credentials
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $true -Credential $(Get-Credential)
```
-For more information on using password writeback with Azure AD Connect cloud sync, see [Tutorial: Enable cloud sync self-service password reset writeback to an on-premises environment (preview)](../../active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
+For more information about using password writeback with Azure AD Connect cloud sync, see [Tutorial: Enable cloud sync self-service password reset writeback to an on-premises environment (preview)](../../active-directory/authentication/tutorial-enable-cloud-sync-sspr-writeback.md).
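If you later need to turn writeback off, the same cmdlet accepts `-Enable $false`; this mirrors the command above:

```powershell
# Disable password writeback for cloud sync
Import-Module "C:\Program Files\Microsoft Azure AD Connect Provisioning Agent\Microsoft.CloudSync.Powershell.dll"
Set-AADCloudSyncPasswordWritebackConfiguration -Enable $false -Credential $(Get-Credential)
```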
-## Installing against US government cloud
+## Install an agent in the US government cloud
-By default, the Azure Active Directory (Azure AD) Connect provisioning agent installs against the default Azure cloud environment. If you're installing the agent for use in the US government, follow these steps:
+By default, the Azure AD Connect provisioning agent is installed in the default Azure environment. If you're installing the agent for US government use, make this change in step 7 of the preceding installation procedure:
-- In step #7 above, instead of select **Open file**, go to start run and navigate to the **AADConnectProvisioningAgentSetup.exe** file. In the run box, after the executable, enter **ENVIRONMENTNAME=AzureUSGovernment** and select **Ok**.
+- Instead of selecting **Open file**, select **Start** > **Run**, and then go to the *AADConnectProvisioningAgentSetup.exe* file. In the **Run** box, after the executable, enter **ENVIRONMENTNAME=AzureUSGovernment**, and then select **OK**.
- [![Screenshot showing US government cloud install.](media/how-to-install/new-install-12.png)](media/how-to-install/new-install-12.png#lightbox)
+ [![Screenshot that shows how to install an agent in the US government cloud.](media/how-to-install/new-install-12.png)](media/how-to-install/new-install-12.png#lightbox)
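Put differently, the step above amounts to starting the installer with one extra property. A sketch of the resulting command, assuming you run it from the download folder:

```powershell
# Install the provisioning agent against the US government cloud
.\AADConnectProvisioningAgentSetup.exe ENVIRONMENTNAME=AzureUSGovernment
```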
## Password hash synchronization and FIPS with cloud sync
-If your server has been locked down according to Federal Information Processing Standard (FIPS), then MD5 is disabled.
-
+If your server has been locked down according to the Federal Information Processing Standard (FIPS), MD5 (message-digest algorithm 5) is disabled.
-To enable MD5 for password hash synchronization, perform the following steps:
+To enable MD5 for password hash synchronization, do the following:
1. Go to %programfiles%\Microsoft Azure AD Connect Provisioning Agent.
-2. Open AADConnectProvisioningAgent.exe.config.
-3. Go to the configuration/runtime node at the top of the file.
-4. Add the following node: `<enforceFIPSPolicy enabled="false"/>`
-5. Save your changes.
+1. Open *AADConnectProvisioningAgent.exe.config*.
+1. Go to the configuration/runtime node at the top of the file.
+1. Add the `<enforceFIPSPolicy enabled="false"/>` node.
+1. Save your changes.
-For reference, this snippet is what it should look like:
+For reference, your code should look like the following snippet:
```xml
<configuration>
   <runtime>
      <enforceFIPSPolicy enabled="false"/>
   </runtime>
</configuration>
```
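To check whether the operating system itself enforces FIPS (the setting above only relaxes enforcement for the agent's .NET runtime), you can read the policy flag from the registry; this is a general Windows check, not an agent-specific one:

```powershell
# 1 = FIPS algorithm policy enforced at the OS level, 0 = not enforced
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Lsa\FipsAlgorithmPolicy' |
    Select-Object -ExpandProperty Enabled
```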
-For information about security and FIPS, see [Azure AD password hash sync, encryption, and FIPS compliance](https://blogs.technet.microsoft.com/enterprisemobility/2014/06/28/aad-password-sync-encryption-and-fips-compliance/).
+For more information about security and FIPS, see [Azure AD password hash sync, encryption, and FIPS compliance](https://blogs.technet.microsoft.com/enterprisemobility/2014/06/28/aad-password-sync-encryption-and-fips-compliance/).
## Next steps
active-directory Tutorial Existing Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-existing-forest.md
Title: Tutorial - Integrate an existing forest and a new forest with a single Azure AD tenant using Azure AD Connect cloud sync.
+ Title: Tutorial - Integrate an existing forest and a new forest with a single Azure AD tenant by using Azure AD Connect cloud sync
description: Learn how to add cloud sync to an existing hybrid identity environment.
-# Integrate an existing forest and a new forest with a single Azure AD tenant
+# Tutorial: Integrate an existing forest and a new forest with a single Azure AD tenant
This tutorial walks you through adding cloud sync to an existing hybrid identity environment.
You can use the environment you create in this tutorial for testing or for getting more familiar with how a hybrid identity works.
-In this scenario, there's an existing forest synced using Azure AD Connect sync to an Azure AD tenant. And you have a new forest that you want to sync to the same Azure AD tenant. You'll set up cloud sync for the new forest.
+In this scenario, you sync an existing forest with an Azure AD tenant by using Azure Active Directory (Azure AD) Connect. You want to sync a new forest with the same Azure AD tenant. You'll set up cloud sync for the new forest.
## Prerequisites
-### In the Azure Active Directory admin center
-1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
-2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
+Before you begin, set up your environments.
+
+### In the Azure AD admin center
+
+1. Create a cloud-only global administrator account on your Azure AD tenant.
+
+ This way, you can manage the configuration of your tenant if your on-premises services fail or become unavailable. [Learn how to add a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Complete this step to ensure that you don't get locked out of your tenant. A scripted sketch of creating such an account follows this list.
+
+1. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
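If you'd rather script the cloud-only account mentioned in step 1, here's a hedged sketch with the AzureAD PowerShell module. The display name, UPN, and password are placeholders, and the role lookup assumes the role is already activated in the tenant:

```powershell
Connect-AzureAD

# Create the cloud-only account (all values are placeholders)
$pw = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$pw.Password = 'Use-A-Long-Random-Passphrase-Here!'
$user = New-AzureADUser -DisplayName 'Emergency Admin' `
    -UserPrincipalName 'emergency.admin@contoso.onmicrosoft.com' `
    -MailNickName 'emergencyadmin' -AccountEnabled $true -PasswordProfile $pw

# Assign the Global Administrator role (older tenants list it as "Company Administrator")
$role = Get-AzureADDirectoryRole |
    Where-Object { $_.DisplayName -in 'Global Administrator', 'Company Administrator' }
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId
```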
### In your on-premises environment
-1. Identify a domain-joined host server running Windows Server 2012 R2 or greater with minimum of 4-GB RAM and .NET 4.7.1+ runtime
+1. Identify a domain-joined host server that's running Windows Server 2012 R2 or later, with at least 4 GB of RAM and .NET 4.7.1+ runtime.
+
+1. If there's a firewall between your servers and Azure AD, configure the following items:
-2. If there's a firewall between your servers and Azure AD, configure the following items:
   - Ensure that agents can make *outbound* requests to Azure AD over the following ports (a connectivity check sketch follows this list):

     | Port number | How it's used |
     | --- | --- |
- | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
- | **443** | Handles all outbound communication with the service |
- | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed on the Azure AD portal. |
+ | **80** | Downloads the certificate revocation lists (CRLs) while it validates the TLS/SSL certificate. |
+ | **443** | Handles all outbound communication with the service. |
+ | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.
- - If your firewall or proxy allows you to specify safe suffixes, then add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
+
+ - If your firewall or proxy allows you to specify safe suffixes, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If it doesn't, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
+ - Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
+
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Because these URLs are used to validate certificates for other Microsoft products, you might already have these URLs unblocked.
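As referenced in the ports list above, you can spot-check the required outbound connectivity from the agent server with `Test-NetConnection`; the hosts and ports are taken from this list:

```powershell
# Spot-check outbound connectivity for the provisioning agent
Test-NetConnection -ComputerName login.microsoftonline.com -Port 443
Test-NetConnection -ComputerName login.windows.net -Port 443
Test-NetConnection -ComputerName crl.microsoft.com -Port 80
```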
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1. To install the agent, follow these steps:
+If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, the host server is DC1. To install the agent, do the following:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
[!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)]

## Configure Azure AD Connect cloud sync
- Use the following steps to configure provisioning
+
+To configure the cloud sync setup, do the following:
1. Sign in to the Azure AD portal.
-2. Select **Azure Active Directory**
-3. Select **Azure AD Connect**
-4. Select **Manage cloud sync**
+1. Select **Azure Active Directory**.
+1. Select **Azure AD Connect**.
+1. Select **Manage cloud sync**.
- ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot that highlights the "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-5. Select **New Configuration**
+1. Select **New Configuration**.
- ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
+ ![Screenshot of the Azure AD Connect cloud sync page, with the "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-7. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
+1. On the **Configuration** page, enter a **Notification email**, move the selector to **Enable**, and then select **Save**.
- ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)
+ ![Screenshot of the "Edit provisioning configuration" page.](media/how-to-configure/configure-2.png)
1. The configuration status should now be **Healthy**.
- ![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)
-
-## Verify users are created and synchronization is occurring
+ ![Screenshot of Azure AD Connect cloud sync page, showing a "Healthy" status.](media/how-to-configure/manage-4.png)
-You'll now verify that the users that you had in our on-premises directory have been synchronized and now exist in our Azure AD tenant. This process may take a few hours to complete. To verify users are synchronized, do the following:
+## Verify that users are created and synchronization is occurring
+You'll now verify that the users in your on-premises Active Directory have been synchronized and exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users are synchronized, do the following:
-1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
-2. On the left, select **Azure Active Directory**
-3. Under **Manage**, select **Users**.
-4. Verify that you see the new users in our tenant
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
+1. On the left pane, select **Azure Active Directory**.
+1. Under **Manage**, select **Users**.
+1. Verify that the new users are displayed in your tenant.
-## Test signing in with one of our users
+## Test signing in with one of your users
-1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-2. Sign in with a user account that was created in our new tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
+1. Go to the [Microsoft My Apps](https://myapps.microsoft.com) page.
+1. Sign in with a user account that was created in your new tenant. You'll need to sign in by using the following format: *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
- ![Screenshot that shows the my apps portal with a signed in users.](media/tutorial-single-forest/verify-1.png)
+ ![Screenshot that shows the My Apps portal with signed-in users.](media/tutorial-single-forest/verify-1.png)
You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced AD forest
-description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
+ Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced Active Directory forest
+description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced by using Azure Active Directory (Azure AD) Connect sync.
-# Pilot cloud sync for an existing synced AD forest
+# Pilot cloud sync for an existing synced Active Directory forest
-This tutorial walks you through piloting cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
+This tutorial walks you through piloting cloud sync for a test Active Directory forest that's already synced by using Azure Active Directory (Azure AD) Connect sync.
![Diagram that shows the Azure AD Connect cloud sync flow.](media/tutorial-migrate-aadc-aadccp/diagram-2.png)

## Considerations
-Before you try this tutorial, consider the following items:
+Before you try this tutorial, keep the following in mind:
-1. Ensure that you're familiar with basics of cloud sync.
+* You should be familiar with the basics of cloud sync.
-1. Ensure that you're running Azure AD Connect sync version 1.4.32.0 or later and have configured the sync rules as documented.
+* Ensure that you're running Azure AD Connect sync version 1.4.32.0 or later and you've configured the sync rules as documented.
-1. When piloting, you'll be removing a test OU or group from Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
+* When you're piloting, you'll be removing a test organizational unit (OU) or group from the Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
- - User objects, the objects in Azure AD are soft-deleted and can be restored.
- - Group objects, the objects in Azure AD are hard-deleted and can't be restored.
+ - **User objects**: The objects in Azure AD are soft-deleted and can be restored.
+ - **Group objects**: The objects in Azure AD are hard-deleted and can't be restored.
- A new link type has been introduced in Azure AD Connect sync, which will prevent the deletion in a piloting scenario.
+ A new link type has been introduced in Azure AD Connect sync, which will prevent deletions in a piloting scenario.
-1. Ensure that the objects in the pilot scope have ms-ds-consistencyGUID populated so cloud sync hard matches the objects.
+* Ensure that the objects in the pilot scope have *ms-ds-consistencyGUID* populated so that cloud sync hard matches the objects. (A sketch for checking and populating the attribute follows this list.)
> [!NOTE]
- > Azure AD Connect sync does not populate *ms-ds-consistencyGUID* by default for group objects.
+ > Azure AD Connect sync doesn't populate *ms-ds-consistencyGUID* by default for group objects.
-1. This configuration is for advanced scenarios. Ensure that you follow the steps documented in this tutorial precisely.
+* This configuration is for advanced scenarios. Be sure to follow the steps documented in this tutorial precisely.
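As mentioned in the considerations above, here's a sketch of checking the attribute on a pilot user and, if you decide to, populating it from *objectGuid* (a common hard-match pattern). The identity is hypothetical; validate this approach on test objects before using it at scale:

```powershell
Import-Module ActiveDirectory

# Check whether the attribute is populated for a pilot user
Get-ADUser -Identity 'pilotuser1' -Properties 'mS-DS-ConsistencyGuid' |
    Select-Object Name, 'mS-DS-ConsistencyGuid'

# One common pattern: copy the user's objectGUID into mS-DS-ConsistencyGuid
$u = Get-ADUser -Identity 'pilotuser1'
Set-ADUser -Identity $u -Replace @{ 'mS-DS-ConsistencyGuid' = $u.ObjectGUID.ToByteArray() }
```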
## Prerequisites
-The following are prerequisites required for completing this tutorial
+Before you begin, be sure that you've set up your environment to meet the following prerequisites:
-- A test environment with Azure AD Connect sync version 1.4.32.0 or later
-- An OU or group that is in scope of sync and can be used the pilot. We recommend starting with a small set of objects.
-- A server running Windows Server 2012 R2 or later that will host the provisioning agent.
-- Source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID*
+- A test environment with [Azure AD Connect version 1.4.32.0 or later](https://www.microsoft.com/download/details.aspx?id=47594).
+
+ To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md).
-## Update Azure AD Connect
+- An OU or group that's in scope of sync and can be used in the pilot. We recommend starting with a small set of objects.
-As a minimum, you should have [Azure AD connect](https://www.microsoft.com/download/details.aspx?id=47594) 1.4.32.0. To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md).
+- Windows Server 2012 R2 or later, which will host the provisioning agent.
+
+- The source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID*.
## Stop the scheduler
-Azure AD Connect sync synchronizes changes occurring in your on-premises directory using a scheduler. In order to modify and add custom rules, you want to disable the scheduler so that synchronizations won't run while you're working making the changes. To stop the scheduler, use the following steps:
+Azure AD Connect sync synchronizes changes occurring in your on-premises directory by using a scheduler. To modify and add custom rules, disable the scheduler so that synchronizations won't run while you're making the changes. To stop the scheduler:
-1. On the server that is running Azure AD Connect sync open PowerShell with Administrative Privileges.
-2. Run `Stop-ADSyncSyncCycle`. Hit Enter.
-3. Run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
+1. On the server that's running Azure AD Connect sync, open PowerShell with administrative privileges.
+1. Run `Stop-ADSyncSyncCycle`, and then select **Enter**.
+1. Run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
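Taken together, the preceding steps are just two cmdlets from the ADSync module:

```powershell
# Run from an elevated PowerShell session on the Azure AD Connect sync server
Stop-ADSyncSyncCycle                          # stop any in-progress sync cycle
Set-ADSyncScheduler -SyncCycleEnabled $false  # keep the scheduler from starting new cycles
```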
>[!NOTE]
->If you are running your own custom scheduler for Azure AD Connect sync, then please disable the scheduler.
+>If you're running your own custom scheduler for Azure AD Connect sync, be sure to disable the scheduler.
-## Create custom user inbound rule
+## Create a custom user inbound rule
- 1. Launch the synchronization editor from the application menu in desktop as shown below:
+1. Open **Synchronization Rules Editor** from the application menu on the desktop, as shown in the following screenshot:
- ![Screenshot of the synchronization rule editor menu.](media/tutorial-migrate-aadc-aadccp/user-8.png)
+ ![Screenshot of the "Synchronization Rules Editor" command.](media/tutorial-migrate-aadc-aadccp/user-8.png)
- 2. Select **Inbound** from the drop-down list for Direction and select **Add new rule**.
+1. Under **Direction**, select **Inbound** from the dropdown list, and then select **Add new rule**.
- ![Screenshot that shows the "View and manage your synchronization rules" window with "Inbound" and the "Add new rule" button selected.](media/tutorial-migrate-aadc-aadccp/user-1.png)
+ ![Screenshot of the "View and manage your synchronization rules" pane, with "Inbound" and the "Add new rule" button selected.](media/tutorial-migrate-aadc-aadccp/user-1.png)
- 3. On the **Description** page, enter the following and select **Next**:
+1. On the **Description** page, do the following:
- - **Name:** Give the rule a meaningful name
- - **Description:** Add a meaningful description
- - **Connected System:** Choose the AD connector that you're writing the custom sync rule for
- - **Connected System Object Type:** User
- - **Metaverse Object Type:** Person
- - **Link Type:** Join
- - **Precedence:** Provide a value that is unique in the system
- - **Tag:** Leave this empty
+ - **Name**: Give the rule a meaningful name.
+ - **Description**: Add a meaningful description.
+ - **Connected System**: Select the Active Directory connector that you're writing the custom sync rule for.
+ - **Connected System Object Type**: Select **User**.
+ - **Metaverse Object Type**: Select **Person**.
+ - **Link Type**: Select **Join**.
+ - **Precedence**: Enter a value that's unique in the system.
+ - **Tag**: Leave this field empty.
![Screenshot that shows the "Create inbound synchronization rule - Description" page with values entered.](media/tutorial-migrate-aadc-aadccp/user-2.png)
- 4. On the **Scoping filter** page, enter the OU or security group that you want the pilot based off. To filter on OU, add the OU portion of the distinguished name. This rule will be applied to all users who are in that OU. So, if DN ends with "OU=CPUsers,DC=contoso,DC=com, you would add this filter. Then select **Next**.
+1. On the **Scoping filter** page, enter the OU or security group that the pilot is based on.
+
+ To filter on OU, add the OU portion of the *distinguished name* (DN). This rule will be applied to all users who are in that OU. For example, if the DN ends with "OU=CPUsers,DC=contoso,DC=com", add this filter.
   |Rule|Attribute|Operator|Value|
   |--|-|-|--|
- |Scoping OU|DN|ENDSWITH|Distinguished name of the OU.|
- |Scoping group||ISMEMBEROF|Distinguished name of the security group.|
+ |Scoping&nbsp;OU|DN|ENDSWITH|The distinguished name of the OU.|
+ |Scoping&nbsp;group||ISMEMBEROF|The distinguished name of the security group.|
- ![Screenshot that shows the **Create inbound synchronization rule - Scoping filter** page with a scoping filter value entered.](media/tutorial-migrate-aadc-aadccp/user-3.png)
+ ![Screenshot that shows the "Create inbound synchronization rule" page with a scoping filter value entered.](media/tutorial-migrate-aadc-aadccp/user-3.png)
- 5. On the **Join** rules page, select **Next**.
- 6. On the **Transformations** page, add a Constant transformation: flow True to cloudNoFlow attribute. Select **Add**.
+1. Select **Next**.
+1. On the **Join** rules page, select **Next**.
+1. Under **Add transformations**, do the following:
+
+ * **FlowType**: Select **Constant**.
+ * **Target Attribute**: Select **cloudNoFlow**.
+ * **Source**: Select **True**.
![Screenshot that shows the **Create inbound synchronization rule - Transformations** page with a **Constant transformation** flow added.](media/tutorial-migrate-aadc-aadccp/user-4.png)
-Same steps need to be followed for all object types (user, group and contact). Repeat steps per configured AD Connector / per AD forest.
+1. Select **Next**.
+
+1. Select **Add**.
+
+Follow the same steps for all object types (*user*, *group*, and *contact*). Repeat the steps for each configured AD Connector and Active Directory forest.
+
+## Create a custom user outbound rule
-## Create custom user outbound rule
+1. In the **Direction** dropdown list, select **Outbound**, and then select **Add rule**.
- 1. Select **Outbound** from the drop-down list for Direction and select **Add rule**.
+ ![Screenshot that highlights the selected "Outbound" direction and the "Add new rule" button.](media/tutorial-migrate-aadc-aadccp/user-5.png)
- ![Screenshot that shows the **Outbound** Direction selected and the **Add new rule** button highlighted.](media/tutorial-migrate-aadc-aadccp/user-5.png)
+1. On the **Description** page, do the following:
- 2. On the **Description** page, enter the following and select **Next**:
+ - **Name**: Give the rule a meaningful name.
+ - **Description**: Add a meaningful description.
+ - **Connected System**: Select the Azure AD connector that you're writing the custom sync rule for.
+ - **Connected System Object Type**: Select **User**.
+ - **Metaverse Object Type**: Select **Person**.
+ - **Link Type**: Select **JoinNoFlow**.
+ - **Precedence**: Enter a value that's unique in the system.
+ - **Tag**: Leave this field empty.
- - **Name:** Give the rule a meaningful name
- - **Description:** Add a meaningful description
- - **Connected System:** Choose the Azure AD connector that you're writing the custom sync rule for
- - **Connected System Object Type:** User
- - **Metaverse Object Type:** Person
- - **Link Type:** JoinNoFlow
- - **Precedence:** Provide a value that is unique in the system<br>
- - **Tag:** Leave this empty
+ ![Screenshot of the "Create outbound synchronization rule" pane with properties entered.](media/tutorial-migrate-aadc-aadccp/user-6.png)
- ![Screenshot that shows the **Description** page with properties entered.](media/tutorial-migrate-aadc-aadccp/user-6.png)
+1. Select **Next**.
- 3. On the **Scoping filter** page, choose **cloudNoFlow** equal **True**. Then select **Next**.
+1. On the **Create outbound synchronization rule** pane, under **Add scoping filters**, do the following:
+
+ * **Attribute**: Select **cloudNoFlow**.
+ * **Operator**: Select **EQUAL**.
+ * **Value**: Select **True**.
![Screenshot that shows a custom rule.](media/tutorial-migrate-aadc-aadccp/user-7.png)
- 4. On the **Join** rules page, select **Next**.
- 5. On the **Transformations** page, select **Add**.
+1. Select **Next**.
+
+1. On the **Join** rules pane, select **Next**.
+
+1. On the **Transformations** pane, select **Add**.
-Same steps need to be followed for all object types (user, group and contact).
+Follow the same steps for all object types (*user*, *group*, and *contact*).
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be CP1. To install the agent, follow these steps:
+If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, the host server is CP1. To install the agent, do the following:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
-## Verify agent installation
+## Verify the agent installation
[!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)]

## Configure Azure AD Connect cloud sync
-Use the following steps to configure provisioning:
+To configure the cloud sync setup, do the following:
-1. Sign-in to the Azure AD portal.
-2. Select **Azure Active Directory**
-3. Select **Azure AD Connect**
-4. Select **Manage cloud sync**
+1. Sign in to the Azure AD portal.
+1. Select **Azure Active Directory**.
+1. Select **Azure AD Connect**.
+1. Select the **Manage provisioning (Preview)** link.
- ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot that shows the "Manage provisioning (Preview)" link.](media/how-to-configure/manage-1.png)
-5. Select **New Configuration**
+1. Select **New Configuration**.
- ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
+ ![Screenshot that highlights the "New configuration" link.](media/tutorial-single-forest/configure-1.png)
-6. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
+1. On the **Configure** pane, under **Settings**, enter a **Notification email** and then, under **Deploy**, move the selector to **Enable**.
- ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/tutorial-single-forest/configure-2.png)
+ ![Screenshot of the "Configure" pane, with a notification email entered and "Enable" selected.](media/tutorial-single-forest/configure-2.png)
-7. Under **Configure**, select **All users** to change the scope of the configuration rule.
+1. Select **Save**.
- ![Screenshot of Configure screen with "All users" highlighted next to "Scope users".](media/how-to-configure/scope-2.png)
+1. Under **Scope**, select the **All users** link to change the scope of the configuration rule.
+
+ ![Screenshot of the "Configure" pane, with the "All users" link highlighted.](media/how-to-configure/scope-2.png)
-8. On the right, change the scope to include the specific OU you created "OU=CPUsers,DC=contoso,DC=com".
+1. Under **Scope users**, change the scope to include the OU that you created: **OU=CPUsers,DC=contoso,DC=com**.
- ![Screenshot of the Scope users screen highlighting the scope changed to the OU you created.](media/tutorial-existing-forest/scope-2.png)
+ ![Screenshot of the "Scope users" page, highlighting the scope that's changed to the OU you created.](media/tutorial-existing-forest/scope-2.png)
-9. Select **Done** and **Save**.
-10. The scope should now be set to one organizational unit.
+1. Select **Done** and **Save**.
+
+ The scope should now be set to **1 organizational unit**.
- ![Screenshot of Configure screen with "1 organizational unit" highlighted next to "Scope users".](media/tutorial-existing-forest/scope-3.png)
+ ![Screenshot of the "Configure" page, with "1 organizational unit" highlighted next to "Scope users".](media/tutorial-existing-forest/scope-3.png)
-## Verify users are provisioned by cloud sync
+## Verify that users have been set up by cloud sync
-You'll now verify that the users that you had in our on-premises directory have been synchronized and now exist in out Azure AD tenant. This process may take a few hours to complete. To verify users are provisioning by cloud sync, follow these steps:
+You'll now verify that the users in your on-premises Active Directory have been synchronized and now exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users have been synchronized, do the following:
-1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
-2. On the left, select **Azure Active Directory**
-3. Select on **Azure AD Connect**
-4. Select on **Manage cloud sync**
-5. Select on **Logs** button
-6. Search for a username to confirm that the user is provisioned by cloud sync
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
+1. On the left pane, select **Azure Active Directory**.
+1. Select **Azure AD Connect**.
+1. Select **Manage cloud sync**.
+1. Select the **Logs** button.
+1. Search for a username to confirm that the user has been set up by cloud sync.
Additionally, you can verify that the user and group exist in Azure AD.

## Start the scheduler
-Azure AD Connect sync synchronizes changes occurring in your on-premises directory using a scheduler. Now that you've modified the rules, you can restart the scheduler. Use the following steps:
+Azure AD Connect sync synchronizes changes that occur in your on-premises directory by using a scheduler. Now that you've modified the rules, you can restart the scheduler.
-1. On the server that is running Azure AD Connect sync open PowerShell with Administrative Privileges
-2. Run `Set-ADSyncScheduler -SyncCycleEnabled $true`.
-3. Run `Start-ADSyncSyncCycle`, then press <kbd>Enter</kbd>.
+1. On the server that's running Azure AD Connect sync, open PowerShell with administrative privileges.
+1. Run `Set-ADSyncScheduler -SyncCycleEnabled $true`.
+1. Run `Start-ADSyncSyncCycle`, and then select <kbd>Enter</kbd>.
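As with stopping the scheduler, the preceding steps condense to two ADSync cmdlets:

```powershell
# Run from an elevated PowerShell session on the Azure AD Connect sync server
Set-ADSyncScheduler -SyncCycleEnabled $true   # re-enable the scheduler
Start-ADSyncSyncCycle                         # start a sync cycle right away
```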
> [!NOTE]
-> If you are running your own custom scheduler for Azure AD Connect sync, then please enable the scheduler.
+> If you're running your own custom scheduler for Azure AD Connect sync, be sure to enable the scheduler.
+
+After the scheduler is enabled, Azure AD Connect stops exporting any changes on objects with `cloudNoFlow=true` in the metaverse, unless any reference attribute (such as `manager`) is being updated.
-Once the scheduler is enabled, Azure AD Connect will stop exporting any changes on objects with `cloudNoFlow=true` in the metaverse, unless any reference attribute (such as `manager`) is being updated. In case there's any reference attribute update on the object, Azure AD Connect will ignore the `cloudNoFlow` signal and export all updates on the object.
+If there's any reference attribute update on the object, Azure AD Connect will ignore the `cloudNoFlow` signal and export all updates on the object.
-## Something went wrong
+## Does your setup work?
-In case the pilot doesn't work as expected, you can go back to the Azure AD Connect sync setup by following the steps below:
+If the pilot doesn't work as you had expected, you can go back to the Azure AD Connect sync setup by doing the following:
-1. Disable provisioning configuration in the Azure portal.
-2. Disable all the custom sync rules created for Cloud Provisioning using the Sync Rule Editor tool. Disabling should cause full sync on all the connectors.
+1. Disable the provisioning configuration in the Azure portal.
+1. Disable all the custom sync rules that were created for cloud provisioning by using the Sync Rule Editor tool. Disabling the rules should result in a full sync of all the connectors.
## Next steps
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-single-forest.md
Title: Tutorial - Integrate a single forest with a single Azure AD tenant
-description: This topic describes the pre-requisites and the hardware requirements cloud sync.
+description: This article describes the prerequisites and the hardware requirements for using Azure AD Connect cloud sync.
# Tutorial: Integrate a single forest with a single Azure AD tenant
-This tutorial walks you through creating a hybrid identity environment using Azure Active Directory (Azure AD) Connect cloud sync.
+This tutorial walks you through creating a hybrid identity environment by using Azure Active Directory (Azure AD) Connect cloud sync.
![Diagram that shows the Azure AD Connect cloud sync flow.](media/tutorial-single-forest/diagram-2.png)
You can use the environment you create in this tutorial for testing or for getting more familiar with how a hybrid identity works.
## Prerequisites
+Before you begin, set up your environments by doing the following.
+ ### In the Azure Active Directory admin center
-1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
-2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
+1. Create a cloud-only global administrator account on your Azure AD tenant.
+
+ This way, you can manage the configuration of your tenant if your on-premises services fail or become unavailable. [Learn how to add a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Complete this step to ensure that you don't get locked out of your tenant.
+
+1. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
### In your on-premises environment
-1. Identify a domain-joined host server running Windows Server 2016 or greater with minimum of 4-GB RAM and .NET 4.7.1+ runtime
+1. Identify a domain-joined host server that's running Windows Server 2016 or later, with at least 4 GB of RAM and .NET 4.7.1+ runtime.
+
+1. If there's a firewall between your servers and Azure AD, configure the following items:
-2. If there's a firewall between your servers and Azure AD, configure the following items:
   - Ensure that agents can make *outbound* requests to Azure AD over the following ports:

     | Port number | How it's used |
     | --- | --- |
- | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
- | **443** | Handles all outbound communication with the service |
- | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed on the Azure AD portal. |
+ | **80** | Downloads the certificate revocation lists (CRLs) while it validates the TLS/SSL certificate. |
+ | **443** | Handles all outbound communication with the service. |
+ | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.
- - If your firewall or proxy allows you to specify safe suffixes, then add connections t to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
+
+ - If your firewall or proxy allows you to specify safe suffixes, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
+ - Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
+
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Because these URLs are used to validate certificates for other Microsoft products, you might already have these URLs unblocked.
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1. To install the agent, follow these steps:
+If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, the host server would be DC1. To install the agent, follow these steps:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
## Configure Azure AD Connect cloud sync
-Use the following steps to configure and start the provisioning:
+To configure provisioning, do the following:
-1. Sign in to the Azure AD portal.
-1. Select **Azure Active Directory**
-1. Select **Azure AD Connect**
-1. Select **Manage cloud sync**
+1. Sign in to the Azure AD portal.
+1. Select **Azure Active Directory**.
+1. Select **Azure AD Connect**.
+1. Select **Manage cloud sync**.
- ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot that shows the "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-1. Select **New Configuration**
-
- [![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)](media/tutorial-single-forest/configure-1.png#lightbox)
+1. Select **New Configuration**.
-1. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
+ ![Screenshot of the Azure AD Connect cloud sync page, with the "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png#lightbox)
- [![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)](media/how-to-configure/configure-2.png#lightbox)
+1. On the **Configuration** page, enter a **Notification email**, move the selector to **Enable**, and then select **Save**.
-1. The configuration status should now be **Healthy**.
+ ![Screenshot of the "Edit provisioning configuration" page.](media/how-to-configure/configure-2.png#lightbox)
- [![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)](media/how-to-configure/manage-4.png#lightbox)
+1. The configuration status should now be **Healthy**.
-## Verify users are created and synchronization is occurring
+ ![Screenshot of the "Azure AD Connect cloud sync" page, showing a "Healthy" status.](media/how-to-configure/manage-4.png#lightbox)
-You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. The sync operation may take a few hours to complete. To verify users are synchronized, follow these steps:
+## Verify that users are created and synchronization is occurring
+You'll now verify that the users in your on-premises directory have been synchronized and exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users are synchronized, do the following:
-1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
-2. On the left, select **Azure Active Directory**
-3. Under **Manage**, select **Users**.
-4. Verify that the new users appear in your tenant
+1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
+1. On the left pane, select **Azure Active Directory**.
+1. Under **Manage**, select **Users**.
+1. Verify that the new users are displayed in your tenant.
## Test signing in with one of your users
-1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
-
-1. Sign in with a user account that was created in your tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
+1. Go to the [Microsoft My Apps](https://myapps.microsoft.com) page.
+1. Sign in with a user account that was created in your new tenant. You'll need to sign in by using the following format: *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
- ![Screenshot that shows the my apps portal with a signed in users.](media/tutorial-single-forest/verify-1.png)
+ ![Screenshot that shows the My Apps portal with signed-in users.](media/tutorial-single-forest/verify-1.png)
-You've now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
+You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
## Next steps

- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud provisioning?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
active-directory Msal Android Shared Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-shared-devices.md
These Microsoft applications support Azure AD's shared device mode:
- [Microsoft Teams](/microsoftteams/platform/)
- [Microsoft Managed Home Screen](/mem/intune/apps/app-configuration-managed-home-screen-app) app for Android Enterprise
- [Microsoft Edge](/microsoft-edge) (in Public Preview)
+- [Microsoft Power Apps](/power-apps) (in Public Preview)
- [Yammer](/yammer) (in Public Preview)

> [!IMPORTANT]
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
To delete a specific federated identity credential, select the **Delete** icon f
- [Create a user-assigned managed identity](../managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md?pivots=identity-mi-methods-azcli#create-a-user-assigned-managed-identity-1)
- Find the object ID of the user-assigned managed identity, which you need in the following steps.

## Configure a federated identity credential on a user-assigned managed identity
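If you script the prerequisite steps above, a sketch with the Az PowerShell module might look like the following; the resource group, name, and location are placeholders:

```powershell
# Requires the Az.ManagedServiceIdentity module
$mi = New-AzUserAssignedIdentity -ResourceGroupName 'myResourceGroup' `
    -Name 'myUserAssignedIdentity' -Location 'eastus'

# The object (principal) ID that the configuration steps ask for
$mi.PrincipalId
```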
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
To delete a federated identity credential, select the **Delete** icon for the cr
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue.
- [Create an app registration](quickstart-register-app.md) in Azure AD. Grant your app access to the Azure resources targeted by your external software workload.
- Find the object ID, app (client) ID, or identifier URI of the app, which you need in the following steps. You can find these values in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview** > **Essentials**, get the **Object ID**, **Application (client) ID**, or **Application ID URI** value, which you need in the following steps.
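If you prefer to read those identifiers from PowerShell instead of the portal, here's a hedged sketch with the AzureAD module; the display name is a placeholder:

```powershell
# Look up the app registration and print its identifiers
$app = Get-AzureADApplication -Filter "displayName eq 'my-workload-app'"
$app.ObjectId        # Object ID
$app.AppId           # Application (client) ID
$app.IdentifierUris  # Application ID URI(s)
```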
active-directory Add Users Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-administrator.md
Last updated 10/12/2022 --++
active-directory Add Users Information Worker https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/add-users-information-worker.md
Last updated 10/07/2022 --++
active-directory Auditing And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/auditing-and-reporting.md
Last updated 11/24/2022 --++
active-directory B2b Direct Connect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-direct-connect-overview.md
Last updated 10/12/2022 --++
active-directory B2b Fundamentals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-fundamentals.md
Last updated 08/30/2022--++
active-directory B2b Quickstart Add Guest Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-add-guest-users-portal.md
Title: 'Quickstart: Add a guest user and send an invitation - Azure AD'
description: Use this quickstart to learn how Azure AD admins can add B2B guest users in the Azure portal and walk through the B2B invitation workflow. --++
Last updated 05/10/2022
active-directory B2b Quickstart Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-quickstart-invite-powershell.md
Title: 'Quickstart: Add a guest user with PowerShell - Azure AD'
description: In this quickstart, you learn how to use PowerShell to send an invitation to an external Azure AD B2B collaboration user. You'll use the Microsoft Graph Identity Sign-ins and the Microsoft Graph Users PowerShell modules. --++
Last updated 02/16/2022
active-directory B2b Tutorial Require Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/b2b-tutorial-require-mfa.md
Last updated 01/07/2022 --++
active-directory Bulk Invite Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/bulk-invite-powershell.md
Last updated 11/18/2022 --++
active-directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/claims-mapping.md
Last updated 11/24/2022 --++
active-directory Configure Saas Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/configure-saas-apps.md
Last updated 05/23/2017 --++
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
Last updated 08/05/2022 --++
active-directory Customize Invitation Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/customize-invitation-api.md
Last updated 12/02/2022 --++
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
Previously updated : 03/02/2021 Last updated : 01/06/2023 -++
+# Customer intent: As a tenant administrator, I want to set up Facebook as an identity provider for guest user login.
# Add Facebook as an identity provider for External Identities You can add Facebook to your self-service sign-up user flows so that users can sign in to your applications using their own Facebook accounts. To allow users to sign in using Facebook, you'll first need to [enable self-service sign-up](self-service-sign-up-user-flow.md) for your tenant. After you add Facebook as an identity provider, set up a user flow for the application and select Facebook as one of the sign-in options.
-After you've added Facebook as one of your application's sign-in options, on the **Sign in** page, a user can simply enter the email they use to sign in to Facebook, or they can select **Sign-in options** and choose **Sign in with Facebook**. In either case, they'll be redirected to the Facebook login page for authentication.
+After you've added Facebook as one of your application's sign-in options, on the **Sign in** page, a user can simply enter the email they use to sign in to Facebook, or they can select **Sign-in options** and choose **Sign in with Facebook**. In either case, they'll be redirected to the Facebook sign-in page for authentication.
![Sign-in options for Facebook users](media/facebook-federation/sign-in-with-facebook-overview.png)
To use a Facebook account as an [identity provider](identity-providers.md), you
1. Sign in to [Facebook for developers](https://developers.facebook.com/) with your Facebook account credentials.
-2. If you have not already done so, you need to register as a Facebook developer. To do this, select **Get Started** on the upper-right corner of the page, accept Facebook's policies, and complete the registration steps.
+2. If you haven't already done so, you need to register as a Facebook developer. To do this, select **Get Started** on the upper-right corner of the page, accept Facebook's policies, and complete the registration steps.
3. Select **My Apps** and then **Create App**.
-4. Enter a **Display Name** and a valid **Contact Email**.
-5. Select **Create App ID**. This may require you to accept Facebook platform policies and complete an online security check.
-6. Select **Settings** > **Basic**.
-7. Choose a **Category**, for example Business and Pages. This value is required by Facebook, but not used for Azure AD.
-8. At the bottom of the page, select **Add Platform**, and then select **Website**.
-9. In **Site URL**, enter the appropriate URL (noted above).
-10. In **Privacy Policy URL**, enter the URL for the page where you maintain privacy information for your application, for example `http://www.contoso.com`.
-11. Select **Save Changes**.
-12. At the top of the page, copy the value of **App ID**.
-13. Select **Show** and copy the value of **App Secret**. You use both of them to configure Facebook as an identity provider in your tenant. **App Secret** is an important security credential.
-14. Select the plus sign next to **PRODUCTS**, and then select **Set up** under **Facebook Login**.
-15. Under **Facebook Login**, select **Settings**.
-16. In **Valid OAuth redirect URIs**, enter the appropriate URL (noted above).
-17. Select **Save Changes** at the bottom of the page.
-18. To make your Facebook application available to Azure AD, select the Status selector at the top right of the page and turn it **On** to make the Application public, and then select **Switch Mode**. At this point the Status should change from **Development** to **Live**.
+1. Select an app type, and then select **Details**.
+1. **Add an app name** and a valid **App contact email**.
+1. Select **Create app**. This may require you to accept Facebook platform policies and complete an online security check.
+1. Select **Settings** > **Basic**.
+1. Choose a **Category**, for example **Business and pages**. This value is required by Facebook, but not used for Azure AD.
+1. At the bottom of the page, select **Add Platform**, and then select **Website**.
+1. In **Site URL**, enter the appropriate URL (noted above).
+1. In **Privacy Policy URL** at the top of the page, enter the URL for the page where you maintain privacy information for your application, for example `http://www.contoso.com`.
+1. Select **Save changes**.
+1. At the top of the page, copy the value of **App ID**.
+1. At the top of the page, select **Show** and copy the value of **App secret**. You use both of them to configure Facebook as an identity provider in your tenant. **App secret** is an important security credential.
+1. In the left menu, select **Add Product** next to **Products**, and then select **Set up** under **Facebook Login**.
+1. Under **Facebook Login** in the left menu, select **Settings**.
+1. In **Valid OAuth redirect URIs**, enter the appropriate URL (noted above).
+1. Select **Save changes** at the bottom of the page.
+1. To make your Facebook application available to Azure AD, select the **App Mode** selector at the top of the page and switch it to **Live** to make the application public.
## Configure a Facebook account as an identity provider Now you'll set the Facebook client ID and client secret, either by entering them in the Azure AD portal or by using PowerShell. You can test your Facebook configuration by signing up via a user flow on an app enabled for self-service sign-up.
Now you'll set the Facebook client ID and client secret, either by entering it i
3. In the left menu, select **External Identities**. 4. Select **All identity providers**, then select **Facebook**. 5. For the **Client ID**, enter the **App ID** of the Facebook application that you created earlier.
-6. For the **Client secret**, enter the **App Secret** that you recorded.
+6. For the **Client secret**, enter the **App secret** that you recorded.
- ![Screenshot showing the Add social identity provider page](media/facebook-federation/add-social-identity-provider-page.png)
+ :::image type="content" source="media/facebook-federation/add-social-identity-provider-page.png" alt-text="Screenshot showing the Add social identity provider page.":::
7. Select **Save**. ### To configure Facebook federation by using PowerShell
Now you'll set the Facebook client ID and client secret, either by entering it i
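If you prefer to script this step, the following is a minimal sketch using the AzureADPreview module; the client ID and client secret placeholders stand for the **App ID** and **App secret** values you copied from the Facebook developer console:

```PowerShell
# Minimal sketch, assuming the AzureADPreview module is installed
Connect-AzureAD

# Replace the placeholders with the App ID and App secret from your Facebook app
New-AzureADMSIdentityProvider -Type Facebook -Name Facebook `
    -ClientId "<facebook-app-id>" -ClientSecret "<facebook-app-secret>"
```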
> Use the client ID and client secret from the app you created above in the Facebook developer console. For more information, see the [New-AzureADMSIdentityProvider](/powershell/module/azuread/new-azureadmsidentityprovider?view=azureadps-2.0-preview&preserve-view=true) article. ## How do I remove Facebook federation?
-You can delete your Facebook federation setup. If you do so, any users who have signed up through user flows with their Facebook accounts will no longer be able to log in.
+You can delete your Facebook federation setup. If you do so, any users who have signed up through user flows with their Facebook accounts will no longer be able to sign in.
### To delete Facebook federation in the Azure AD portal:
-1. Go to the [Azure portal](https://portal.azure.com). In the left pane, select **Azure Active Directory**.
-2. Select **External Identities**.
+1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD tenant.
+2. Under **Azure services**, select **Azure Active Directory**.
+3. In the left menu, select **External Identities**.
3. Select **All identity providers**.
-4. On the **Facebook** line, select the context menu (**...**) and then select **Delete**.
+4. Select the **Facebook** line, and then select **Delete**.
5. Select **Yes** to confirm deletion. ### To delete Facebook federation by using PowerShell:
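If you script the removal instead, a minimal sketch with the AzureADPreview module looks like this; `Facebook-OAUTH` is the provider ID that the Facebook identity provider typically receives:

```PowerShell
# Minimal sketch: remove the Facebook identity provider
Connect-AzureAD
Remove-AzureADMSIdentityProvider -Id Facebook-OAUTH
```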
You can delete your Facebook federation setup. If you do so, any users who have
## Next steps -- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
+- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
+- [SAML/WS-Fed IdP federation](direct-federation.md)
+- [Google federation](google-federation.md)
active-directory Hybrid Cloud To On Premises https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-cloud-to-on-premises.md
Last updated 11/17/2022 --++
active-directory Hybrid On Premises To Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-on-premises-to-cloud.md
Last updated 11/17/2022 --++
active-directory Hybrid Organizations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/hybrid-organizations.md
Last updated 11/23/2022--++
active-directory Invitation Email Elements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invitation-email-elements.md
Last updated 09/30/2022 --++
active-directory Invite Internal Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/invite-internal-users.md
Last updated 03/02/2022 --++
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Last updated 12/16/2022 --++
active-directory Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/microsoft-account.md
Title: Microsoft account (MSA) identity provider in Azure AD
+ Title: Add Microsoft account (MSA) as an identity provider - Azure AD
description: Use Azure AD to enable an external user (guest) to sign in to your Azure AD apps with their Microsoft account (MSA). Previously updated : 09/29/2022 Last updated : 01/12/2023 -+
-#Customer intent: As an Azure AD administrator user, I want to set up invitation flow or a self-service sign-up user flow for guest users, so they can sign into my Azure AD apps with their Microsoft account (MSA).
+#Customer intent: As an Azure AD administrator user, I want to set up an invitation flow or a self-service sign-up user flow for guest users, so they can sign into my Azure AD apps with their Microsoft account (MSA).
# Add Microsoft account (MSA) as an identity provider for External Identities
Microsoft accounts are set up by a user to get access to consumer-oriented Micro
## Guest sign-in using Microsoft accounts
-Microsoft account is available by default in the list of **External Identities** > **All identity providers**. No further configuration is needed to allow guest users to sign in with their Microsoft account using either the invitation flow, or a self-service sign-up user flow.
+Microsoft account is available by default in the list of **External Identities** > **All identity providers**. No further configuration is needed to allow guest users to sign in with their Microsoft account, using either the invitation flow, or a self-service sign-up user flow.
:::image type="content" source="media/microsoft-account/microsoft-account-identity-provider.png" alt-text="Screenshot of Microsoft account in the identity providers list.":::
Microsoft account is an identity provider option for your self-service sign-up u
:::image type="content" source="media/microsoft-account/microsoft-account-user-flow.png" alt-text="Screenshot of the Microsoft account in a self-service sign-up user flow."::: ## Verifying the application's publisher domain
-As of November 2020, new application registrations show up as unverified in the user consent prompt, unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md), ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. ([Learn more](../develop/publisher-verification-overview.md) about this change.) For Azure AD user flows, the publisher's domain appears only when using a Microsoft account or other [Azure AD tenant](azure-ad-account.md) as the identity provider. To meet these new requirements, follow the steps below:
+As of November 2020, new application registrations show up as unverified in the user consent prompt, unless [the application's publisher domain is verified](../develop/howto-configure-publisher-domain.md), ***and*** the company's identity has been verified with the Microsoft Partner Network and associated with the application. For Azure AD user flows, the publisher's domain appears only when using a Microsoft account or another Azure AD tenant as the identity provider. To meet these new requirements, follow the steps below:
1. [Verify your company identity using your Microsoft Partner Network (MPN) account](/partner-center/verification-responses). This process verifies information about your company and your company's primary contact. 1. Complete the publisher verification process to associate your MPN account with your app registration using one of the following options:
As of November 2020, new application registrations show up as unverified in the
## Next steps -- [Add Azure Active Directory B2B collaboration users](add-users-administrator.md)-- [Add self-service sign-up to an app](self-service-sign-up-user-flow.md)
+- [Publisher verification overview](../develop/publisher-verification-overview.md)
+- [Add Azure Active Directory (Azure AD) as an identity provider for External Identities](azure-ad-account.md)
active-directory Redemption Experience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/redemption-experience.md
Last updated 12/16/2022--++
active-directory Reset Redemption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/reset-redemption-status.md
Last updated 12/07/2022 --++
active-directory Self Service Sign Up Add Approvals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-add-approvals.md
description: Add API connectors for custom approval workflows in External Identi
- Previously updated : 07/13/2021+ Last updated : 01/09/2023 -++
+# Customer intent: As a tenant administrator, I want to add API connectors for custom approval workflows in self-service sign-up.
# Add a custom approval workflow to self-service sign-up
You need to register your approval system as an application in your Azure AD ten
2. Under **Azure services**, select **Azure Active Directory**. 3. In the left menu, select **App registrations**, and then select **New registration**. 4. Enter a **Name** for the application, for example, _Sign-up Approvals_.-
- <!-- ![Register an application for the approval system](./self-service-sign-up-add-approvals/approvals/register-an-approvals-application.png) -->
- 5. Select **Register**. You can leave other fields at their defaults.
- ![Screenshot that highlights the Register button.](media/self-service-sign-up-add-approvals/register-approvals-app.png)
6. Under **Manage** in the left menu, select **API permissions**, and then select **Add a permission**. 7. On the **Request API permissions** page, select **Microsoft Graph**, and then select **Application permissions**. 8. Under **Select permissions**, expand **User**, and then select the **User.ReadWrite.All** check box. This permission allows the approval system to create the user upon approval. Then select **Add permissions**.
- ![Register an application page](media/self-service-sign-up-add-approvals/request-api-permissions.png)
9. On the **API permissions** page, select **Grant admin consent for (your tenant name)**, and then select **Yes**. 10. Under **Manage** in the left menu, select **Certificates & secrets**, and then select **New client secret**. 11. Enter a **Description** for the secret, for example _Approvals client secret_, and select the duration for when the client secret **Expires**. Then select **Add**.
-12. Copy the value of the client secret.
+12. Copy the value of the client secret. Client secret values can be viewed only immediately after creation. Make sure to save the secret before you leave the page.
- ![Copy the client secret for use in the approval system](media/self-service-sign-up-add-approvals/client-secret-value-copy.png)
13. Configure your approval system to use the **Application ID** as the client ID and the **client secret** you generated to authenticate with Azure AD.
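For a script-based approval system, authenticating with these values typically means acquiring an app-only token through the OAuth 2.0 client credentials flow. The following is a minimal sketch; the tenant ID, client ID, and client secret values are placeholders for your own registration:

```PowerShell
# Minimal sketch: acquire an app-only token with the client credentials flow
$tenantId = "<tenant-id>"
$body = @{
    client_id     = "<application-client-id>"   # the Application ID from the registration
    client_secret = "<client-secret>"           # the secret you copied in the previous step
    scope         = "https://graph.microsoft.com/.default"
    grant_type    = "client_credentials"
}
$response = Invoke-RestMethod -Method Post -Body $body `
    -Uri "https://login.microsoftonline.com/$tenantId/oauth2/v2.0/token"
$accessToken = $response.access_token
```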
Next you'll [create the API connectors](self-service-sign-up-add-api-connector.m
- **Check approval status**. Send a call to the approval system immediately after a user signs in with an identity provider to check whether the user has an existing approval request or has already been denied. If your approval system makes only automatic approval decisions, this API connector may not be needed. Example of a "Check approval status" API connector.
- ![Check approval status API connector configuration](./media/self-service-sign-up-add-approvals/check-approval-status-api-connector-config-alt.png)
- **Request approval** - Send a call to the approval system after a user completes the attribute collection page, but before the user account is created, to request approval. The approval request can be automatically granted or manually reviewed. Example of a "Request approval" API connector.
- ![Request approval API connector configuration](./media/self-service-sign-up-add-approvals/create-approval-request-api-connector-config-alt.png)
To create these connectors, follow the steps in [create an API connector](self-service-sign-up-add-api-connector.md#create-an-api-connector).
Now you'll add the API connectors to a self-service sign-up user flow with these
- **After federating with an identity provider during sign-up**: Select your approval status API connector, for example _Check approval status_. - **Before creating the user**: Select your approval request API connector, for example _Request approval_.
- ![Add APIs to the user flow](./media/self-service-sign-up-add-approvals/api-connectors-user-flow-api.png)
+ 6. Select **Save**. ## Control the sign-up flow with API responses
-Your approval system can use its responses when called to control the sign up flow.
+Your approval system can use its responses when called to control the sign-up flow.
### Request and responses for the "Check approval status" API connector
Content-type: application/json
} ```
-The exact claims sent to the API depends on which information is provided by the identity provider. 'email' is always sent.
+The exact claims sent to the API depend on which information is provided by the identity provider. 'email' is always sent.
#### Continuation response for "Check approval status" The **Check approval status** API endpoint should return a continuation response if: -- The user has not previously requested an approval.
+- The user hasn't previously requested an approval.
Example of the continuation response:
Content-type: application/json
} ```
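For reference, a minimal continuation response body follows the API connector response contract and looks like this (a sketch; your endpoint can also return additional claims to override user attributes):

```json
{
    "version": "1.0.0",
    "action": "Continue"
}
```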
-The exact claims sent to the API depends on which information is collected from the user or is provided by the identity provider.
+The exact claims sent to the API depend on which information is collected from the user or is provided by the identity provider.
#### Continuation response for "Request approval"
Content-type: application/json
## Next steps -- Get started with our [Azure Function quickstart samples](code-samples-self-service-sign-up.md#api-connector-azure-function-quickstarts).-- Checkout the [self-service sign-up for guest users with manual approval sample](code-samples-self-service-sign-up.md#custom-approval-workflows).
+- [Add a self-service sign-up user flow](self-service-sign-up-user-flow.md)
+- [Add an API connector](self-service-sign-up-add-api-connector.md)
+- [Secure your API connector](self-service-sign-up-secure-api-connector.md)
+- [Self-service sign-up for guest users with manual approval sample](code-samples-self-service-sign-up.md#custom-approval-workflows)
active-directory Self Service Sign Up User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-user-flow.md
Previously updated : 10/12/2022 Last updated : 01/06/2023 -++
+# Customer intent: As a tenant administrator, I want to set up user flows that allow a user to sign up for an app and create a new guest account.
+ # Add a self-service sign-up user flow to an app
-For applications you build, you can create user flows that allow a user to sign up for an app and create a new guest account. A self-service sign-up user flow defines the series of steps the user will follow during sign-up, the identity providers you'll allow them to use, and the user attributes you want to collect. You can associate one or more applications with a single user flow.
+For applications you build, you can create user flows that allow a user to sign up for an app and create a new guest account. A self-service sign-up user flow defines the series of steps the user will follow during sign-up, the [identity providers](identity-providers.md) you'll allow them to use, and the user attributes you want to collect. You can associate one or more applications with a single user flow.
> [!NOTE] > You can associate user flows with apps built by your organization. User flows can't be used for Microsoft apps, like SharePoint or Teams.
For applications you build, you can create user flows that allow a user to sign
### Add identity providers (optional)
-Azure AD is the default identity provider for self-service sign-up. This means that users are able to sign up by default with an Azure AD account. In your self-service sign-up user flows, you can also include social identity providers like Google and Facebook, Microsoft Account, and Email One-time Passcode. For more information, see these articles:
+Azure AD is the default identity provider for self-service sign-up. This means that users are able to sign up by default with an Azure AD account. In your self-service sign-up user flows, you can also include social identity providers like Google and Facebook, Microsoft Account, and the email one-time passcode feature. For more information, see these articles:
-- [Microsoft Account identity provider](microsoft-account.md)-- [Email one-time passcode authentication](one-time-passcode.md)-- [Add Facebook to your list of social identity providers](facebook-federation.md) - [Add Google to your list of social identity providers](google-federation.md)
+- [Add Facebook to your list of social identity providers](facebook-federation.md)
+- [Add Microsoft account as an identity provider](microsoft-account.md)
+- [Email one-time passcode authentication](one-time-passcode.md)
### Define custom attributes (optional)
Before you can add a self-service sign-up user flow to your applications, you ne
1. Sign in to the [Azure portal](https://portal.azure.com) as an Azure AD administrator. 2. Under **Azure services**, select **Azure Active Directory**.
-3. Select **User settings**, and then under **External users**, select **Manage external collaboration settings**.
-4. Set the **Enable guest self-service sign up via user flows** toggle to **Yes**.
+1. Under **Manage** in the left menu, select **Users**.
+1. Select **User settings**, and then under **External users**, select **Manage external collaboration settings**.
+1. Set the **Enable guest self-service sign up via user flows** toggle to **Yes**.
![Enable guest self-service sign-up](media/self-service-sign-up-user-flow/enable-self-service-sign-up.png) 5. Select **Save**.
active-directory Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/troubleshoot.md
Last updated 08/30/2022 tags: active-directory--++
active-directory Tutorial Bulk Invite https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/tutorial-bulk-invite.md
Last updated 10/24/2022 --++ # Customer intent: As a tenant administrator, I want to send B2B invitations to multiple external users at the same time so that I can avoid having to send individual invitations to each user.
active-directory Use Dynamic Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/use-dynamic-groups.md
Last updated 10/13/2022 --++
active-directory User Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-properties.md
Last updated 01/09/2023--++
active-directory User Token https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/user-token.md
Last updated 12/12/2022 --++
active-directory What Is B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/what-is-b2b.md
Last updated 08/30/2022--++
active-directory Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/whats-new-docs.md
--++
active-directory Active Directory Users Profile Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-users-profile-azure-portal.md
The **Stay signed in?** prompt appears after a user successfully signs in. This
The following diagram shows the user sign-in flow for a managed tenant and federated tenant using the KMSI in prompt. This flow contains smart logic so that the **Stay signed in?** option won't be displayed if the machine learning system detects a high-risk sign-in or a sign-in from a shared device.
-KMSI is only available on the default custom branding. It can't be added to language-specific branding. Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you uncheck the **Show option to remain signed in** option, your users may see other unexpected prompts during the sign-in process.
+The KMSI setting is available in **User settings**. Some features of SharePoint Online and Office 2010 depend on users being able to choose to remain signed in. If you uncheck the **Show option to remain signed in** option, your users may see other unexpected prompts during the sign-in process.
![Diagram showing the user sign-in flow for a managed vs. federated tenant](media/customize-branding/kmsi-workflow.png)
Details about the sign-in error are found in the **Sign-in logs** in Azure AD. S
* **Sign in error code**: 50140 * **Failure reason**: This error occurred due to "Keep me signed in" interrupt when the user was signing in.
-You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the advanced branding settings. This setting disables the KMSI prompt for all users in your Azure AD directory.
+You can stop users from seeing the interrupt by setting the **Show option to remain signed in** setting to **No** in the user settings. This setting disables the KMSI prompt for all users in your Azure AD directory.
You also can use the [persistent browser session controls in Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md) to prevent users from seeing the KMSI prompt. This option allows you to disable the KMSI prompt for a select group of users (such as the global administrators) without affecting sign-in behavior for everyone else in the directory.
active-directory How To Customize Branding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/how-to-customize-branding.md
When users authenticate into your corporate intranet or web-based applications, Azure Active Directory (Azure AD) provides the identity and access management (IAM) service. You can add company branding that applies to all these sign-in experiences to create a consistent experience for your users.
-The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature.
+The default sign-in experience is the global look and feel that applies across all sign-ins to your tenant. Before you customize any settings, the default Microsoft branding will appear in your sign-in pages. You can customize this default experience with a custom background image or color, favicon, layout, header, and footer. You can also upload a custom CSS.
-For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+The updated experience for adding company branding covered in this article is available as an Azure AD preview feature. To opt in and explore the new experience, go to **Azure AD** > **Preview features** and enable the **Enhanced Company Branding** feature. For more information about previews, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
Instructions for the legacy company branding customization process can be found in the [Customize branding](customize-branding.md) article.
+## User experience
+
+You can customize the sign-in pages when users access your organization's tenant-specific apps. For Microsoft and SaaS applications (multi-tenant apps) such as <https://myapps.microsoft.com> or <https://outlook.com>, the customized sign-in page appears only after the user enters their **Email** or **Phone** and selects **Next**.
+
+Some Microsoft applications support the home realm discovery `whr` query string parameter or a domain variable. With the home realm discovery or domain parameter, the customized sign-in page appears immediately in the first step.
+
+In the following examples, replace contoso.com with your own tenant name or verified domain name:
+
+- For Microsoft Outlook `https://outlook.com/contoso.com`
+- For SharePoint online `https://contoso.sharepoint.com`
+- For the My Apps portal `https://myapps.microsoft.com/?whr=contoso.com`
+- For self-service password reset `https://passwordreset.microsoftonline.com/?whr=contoso.com`
+ ## License requirements Adding custom branding requires one of the following licenses:
Azure AD Premium editions are available for customers in China using the worldwi
## Before you begin
-You can customize the sign-in pages when users access your organization's tenant-specific apps, such as `https://outlook.com/woodgrove.com`, or when passing a domain variable, such as `https://passwordreset.microsoftonline.com/?whr=woodgrove.com`.
-
-Custom branding appears after users authenticate for the first time. Users that start the sign-in process at a site like www\.office.com won't see the branding. After the first sign-in, the branding may take at least 15 minutes to appear.
**All branding elements are optional. Default settings will remain if left unchanged.** For example, if you specify a banner logo but no background image, the sign-in page shows your logo with a default background image from the destination site such as Microsoft 365. Additionally, sign-in page branding doesn't carry over to personal Microsoft accounts. If your users or guests authenticate using a personal Microsoft account, the sign-in page won't reflect the branding of your organization. **Images have different image and file size requirements.** Take note of the image requirements for each option. You may need to use a photo editor to create the right size images. The preferred image type for all images is PNG, but JPG is accepted.
The process for customizing the experience is the same as the [default sign-in e
- [Learn more about default user permissions in Azure AD](../fundamentals/users-default-permissions.md) -- [Manage the 'stay signed in' prompt](active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt)
+- [Manage the 'stay signed in' prompt](active-directory-users-profile-azure-portal.md#learn-about-the-stay-signed-in-prompt)
active-directory Workflows Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/governance/workflows-faqs.md
In this article you will find answers to commonly asked questions about [Lifec
Yes, custom workflows can be configured for members or guests in your tenant. Workflows can run for all types of external guests, external members, internal guests and internal members.
+### Why do I see "Lifecycle Management" instead of "Lifecycle Workflows"?
+For a small portion of our customers, Lifecycle Workflows may still be listed under the former name Lifecycle Management in the audit logs and enterprise applications.
+ ### Do I need to map employeeHireDate in provisioning apps like WorkDay? Yes, key user properties like employeeHireDate and employeeType are supported for user provisioning from HR apps like WorkDay. To use these properties in Lifecycle workflows, you will need to map them in the provisioning process to ensure the values are set. The following is an example of the mapping:
active-directory How To Connect Modify Group Writeback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/how-to-connect-modify-group-writeback.md
This article walks you through the options for modifying the default behaviors o
If the original version of group writeback is already enabled and in use in your environment, all your Microsoft 365 groups have already been written back to Active Directory. Instead of disabling all Microsoft 365 groups, review any use of the previously written-back groups. Disable only those that are no longer needed in on-premises Active Directory.
-### Disable automatic writeback of all Microsoft 365 groups
+### Disable automatic writeback of new Microsoft 365 groups
To configure directory settings to disable automatic writeback of newly created Microsoft 365 groups, use one of these methods:
To configure directory settings to disable automatic writeback of newly created
- Microsoft Graph: Use the [directorySetting](/graph/api/resources/directorysetting?view=graph-rest-beta&preserve-view=true) resource type.
-### Disable writeback for each existing Microsoft 365 group
+### Disable writeback for all existing Microsoft 365 groups
+
+To disable writeback of all Microsoft 365 groups that were created before these modifications, use one of the following methods:
- Portal: Use the [Microsoft Entra admin portal](../enterprise-users/groups-write-back-portal.md).-- PowerShell: Use the [Microsoft Identity Tools PowerShell module](https://www.powershellgallery.com/packages/MSIdentityTools/2.0.16). For example:
+- PowerShell: Use the [Microsoft Graph PowerShell SDK](/powershell/microsoftgraph/installation?view=graph-powershell-1.0&preserve-view=true). For example:
+
+   ```PowerShell
+   # Import the Microsoft Graph module
+   Import-Module Microsoft.Graph
+
+   # Connect to Microsoft Graph and select the beta API version
+   Connect-MgGraph -Scopes Group.ReadWrite.All
+   Select-MgProfile -Name beta
+
+   # List all Microsoft 365 groups
+   $Groups = Get-MgGroup -All | Where-Object {$_.GroupTypes -like "*unified*"}
+
+   # Disable writeback for each Microsoft 365 group
+   Foreach ($group in $Groups)
+   {
+       Update-MgGroup -GroupId $group.id -WritebackConfiguration @{isEnabled=$false}
+   }
+   ```
+
+> We recommend using the Microsoft Graph PowerShell SDK with [PowerShell 7](/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.3&preserve-view=true)
- `Get-mggroup -filter "groupTypes/any(c:c eq 'Unified')" | Update-MsIdGroupWritebackConfiguration -WriteBackEnabled $false`
-- Microsoft Graph: Use a [group object](/graph/api/group-update?tabs=http&view=graph-rest-beta&preserve-view=true).
+- Microsoft Graph Explorer: Use a [group object](/graph/api/group-update?tabs=http&view=graph-rest-beta&preserve-view=true).
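To spot-check the result for a single group, something like the following should work (a sketch that assumes the beta profile selected earlier exposes the `WritebackConfiguration` property; the group ID is a placeholder):

```PowerShell
# Sketch: read back the writeback setting for one group
Get-MgGroup -GroupId "<group-object-id>" | Select-Object -ExpandProperty WritebackConfiguration
```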
## Delete groups when they're disabled for writeback or soft deleted
active-directory Migrate From Federation To Cloud Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/migrate-from-federation-to-cloud-authentication.md
On your Azure AD Connect server, follow the steps 1- 5 in [Option A](#option-a).
>[!IMPORTANT] > You don't have to convert all domains at the same time. You might choose to start with a test domain on your production tenant or start with your domain that has the lowest number of users.
-**Complete the conversion by using the Azure AD PowerShell module:**
+**Complete the conversion by using the Microsoft Graph PowerShell SDK:**
1. In PowerShell, sign in to Azure AD by using a Global Administrator account.
+ ```powershell
+ Connect-MgGraph -Scopes "Domain.ReadWrite.All", "Directory.AccessAsUser.All"
+ ```
2. To convert the first domain, run the following command: ```powershell
- Set-MsolDomainAuthentication -Authentication Managed -DomainName <domain name>
+ Update-MgDomain -DomainId <domain name> -AuthenticationType "Managed"
```
- See [Set-MsolDomainAuthentication](/powershell/module/msonline/set-msoldomainauthentication)
+ See [Update-MgDomain](/powershell/module/microsoft.graph.identity.directorymanagement/update-mgdomain?view=graph-powershell-1.0&preserve-view=true)
3. In the Azure AD portal, select **Azure Active Directory > Azure AD Connect**. 4. Verify that the domain has been converted to managed by running the following command: ```powershell
- Get-MsolDomain -DomainName <domain name>
+ Get-MgDomainFederationConfiguration -DomainId <domain name>
``` ## Complete your migration
If you have Azure AD Connect Health, you can [monitor usage](how-to-connect-heal
If you don't use AD FS for other purposes (that is, for other relying party trusts), you can decommission AD FS at this point.
+### Remove AD FS
+
+For a full list of steps to completely remove AD FS from the environment, follow the [Active Directory Federation Services (AD FS) decommission guide](/windows-server/identity/ad-fs/decommission/adfs-decommission-guide).
+ ## Next steps - [Learn about migrating applications](../manage-apps/migration-resources.md)
active-directory How Manage User Assigned Managed Identities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md
In this article, you learn how to create, list, delete, or assign a role to a us
- If you don't already have an Azure account, [sign up for a free account](https://azure.microsoft.com/free/) before you continue. > [!IMPORTANT] > To modify user permissions when you use an app service principal by using the CLI, you must provide the service principal more permissions in the Azure Active Directory Graph API because portions of the CLI perform GET requests against the Graph API. Otherwise, you might end up receiving an "Insufficient privileges to complete the operation" message. To do this step, go into the **App registration** in Azure AD, select your app, select **API permissions**, and scroll down and select **Azure Active Directory Graph**. From there, select **Application permissions**, and then add the appropriate permissions.
active-directory How To Assign App Role Managed Identity Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-assign-app-role-managed-identity-cli.md
In this article, you learn how to assign a managed identity to an application ro
- If you're unfamiliar with managed identities for Azure resources, check out the [overview section](overview.md). **Be sure to review the [difference between a system-assigned and user-assigned managed identity](overview.md#managed-identity-types)**. ## Assign a managed identity access to another application's app role
active-directory How To View Managed Identity Service Principal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/how-to-view-managed-identity-service-principal-cli.md
If you don't already have an Azure account, [sign up for a free account](https:/
- Enable [system assigned identity on a virtual machine](./qs-configure-portal-windows-vm.md#system-assigned-managed-identity) or [application](../../app-service/overview-managed-identity.md#add-a-system-assigned-identity). ## View the service principal
active-directory Howto Assign Access Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/howto-assign-access-cli.md
If you don't already have an Azure account, [sign up for a free account](https:/
- If you're unfamiliar with managed identities for Azure resources, see [What are managed identities for Azure resources?](overview.md). To learn about system-assigned and user-assigned managed identity types, see [Managed identity types](overview.md#managed-identity-types). ## Use Azure RBAC to assign a managed identity access to another resource
active-directory Qs Configure Cli Windows Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vm.md
If you don't already have an Azure account, [sign up for a free account](https:/
- If you're unfamiliar with managed identities for Azure resources, see [What are managed identities for Azure resources?](overview.md). To learn about system-assigned and user-assigned managed identity types, see [Managed identity types](overview.md#managed-identity-types). ## System-assigned managed identity
active-directory Qs Configure Cli Windows Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-cli-windows-vmss.md
If you don't already have an Azure account, [sign up for a free account](https:/
> [!NOTE] > No additional Azure AD directory role assignments required. ## System-assigned managed identity
active-directory Qs Configure Rest Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vm.md
If you don't already have an Azure account, [sign up for a free account](https:/
- If you're unfamiliar with managed identities for Azure resources, see [What are managed identities for Azure resources?](overview.md). To learn about system-assigned and user-assigned managed identity types, see [Managed identity types](overview.md#managed-identity-types). ## System-assigned managed identity
active-directory Qs Configure Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/managed-identities-azure-resources/qs-configure-rest-vmss.md
If you don't already have an Azure account, [sign up for a free account](https:/
> [!NOTE] > No additional Azure AD directory role assignments required. ## System-assigned managed identity
active-directory Concept Activity Logs Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-activity-logs-azure-monitor.md
Previously updated : 12/02/2022 Last updated : 01/12/2023
Once you have your endpoint established, go to **Azure AD** and then **Diagnosti
If you already have an Azure AD license, you need an Azure subscription to set up the storage account and Event Hubs. The Azure subscription comes at no cost, but you have to pay to utilize Azure resources, including the storage account that you use for archival and the Event Hubs that you use for streaming. The amount of data and, thus, the cost incurred, can vary significantly depending on the tenant size.
+Azure Monitor provides the option to exclude whole events, fields, or parts of fields when ingesting logs from Azure AD. Learn more about this cost saving feature in [Data collection transformation in Azure Monitor](../../azure-monitor/essentials/data-collection-transformations.md).
+ ### Storage size for activity logs Every audit log event uses about 2 KB of data storage. Sign in event logs are about 4 KB of data storage. For a tenant with 100,000 users, which would incur about 1.5 million events per day, you would need about 3 GB of data storage per day. Because writes occur in approximately five-minute batches, you can anticipate approximately 9,000 write operations per month. - The following table contains a cost estimate of, depending on the size of the tenant, a general-purpose v2 storage account in West US for at least one year of retention. To create a more accurate estimate for the data volume that you anticipate for your application, use the [Azure storage pricing calculator](https://azure.microsoft.com/pricing/details/storage/blobs/).
The following table contains a cost estimate of, depending on the size of the te
If you want to know for how long the activity data is stored in a Premium tenant, see: [How long does Azure AD store the data?](reference-reports-data-retention.md#how-long-does-azure-ad-store-the-data) --- ### Event Hubs messages for activity logs Events are batched into approximately five-minute intervals and sent as a single message that contains all the events within that timeframe. A message in Event Hubs has a maximum size of 256 KB. If the total size of all the events within the timeframe exceeds that volume, multiple messages are sent.
active-directory Concept All Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-all-sign-ins.md
Previously updated : 01/05/2023 Last updated : 01/12/2023
When analyzing authentication details, take note of the following details:
- The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include: - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged. - The **Primary authentication** row isn't initially logged.
+- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analysis or troubleshooting.
## Sign-in data used by other services
active-directory Concept Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-reporting-api.md
-- Title: Get started with the Azure AD reporting API | Microsoft Docs
-description: How to get started with the Azure Active Directory reporting API
------- Previously updated : 11/04/2022----
-# Get started with the Azure Active Directory reporting API
-
-Azure Active Directory provides you with several [reports](overview-reports.md), containing useful information such as security information and event management (SIEM) systems, audit, and business intelligence tools. By using the Microsoft Graph API for Azure AD reports, you can gain programmatic access to the data through a set of REST-based APIs. You can call these APIs from various programming languages and tools.
-
-This article provides you with an overview of the reporting API, including ways to access it. If you run into issues, see [how to get support for Azure Active Directory](../fundamentals/active-directory-troubleshooting-support-howto.md).
-
-## Prerequisites
-
-To access the reporting API, with or without user intervention, you need to:
-
-1. Confirm your roles and licenses
-1. Register an application
-1. Grant permissions
-1. Gather configuration settings
-
-For detailed instructions, see the [prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md).
-
-## API Endpoints
-
-Microsoft Graph API endpoints:
-- **Audit logs:** `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits`-- **Sign-in logs:** `https://graph.microsoft.com/v1.0/auditLogs/signIns`-
-Programmatic access APIs:
-- **Security detections:** [Identity Protection risk detections API](/graph/api/resources/identityprotection-root)-- **Tenant provisioning events:** [Provisioning logs API](/graph/api/resources/provisioningobjectsummary)-
-Check out the following helpful resources for Microsoft Graph API:
-- [Audit log API reference](/graph/api/resources/directoryaudit)-- [Sign-in log API reference](/graph/api/resources/signIn)-- [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md)-
-
-## APIs with Microsoft Graph Explorer
-
-You can use the [Microsoft Graph explorer](https://developer.microsoft.com/graph/graph-explorer) to verify your sign-in and audit API data. Sign in to your account using both of the sign-in buttons in the Graph Explorer UI, and set **AuditLog.Read.All** and **Directory.Read.All** permissions for your tenant as shown.
-
-![Graph Explorer](./media/concept-reporting-api/graph-explorer.png)
-
-![Modify permissions UI](./media/concept-reporting-api/modify-permissions.png)
-
-## Use certificates to access the Azure AD reporting API
-
-Use the Azure AD Reporting API with certificates if you plan to retrieve reporting data without user intervention.
-
-For detailed instructions, see [Get data using the Azure AD Reporting API with certificates](tutorial-access-api-with-certificates.md).
-
-## Next steps
-
- * [Prerequisites to access reporting API](howto-configure-prerequisites-for-reporting-api.md)
- * [Get data using the Azure AD Reporting API with certificates](tutorial-access-api-with-certificates.md)
- * [Troubleshoot errors in Azure AD reporting API](troubleshoot-graph-api.md)
active-directory Concept Sign Ins https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/concept-sign-ins.md
Previously updated : 11/04/2022 Last updated : 01/12/2023
When analyzing authentication details, take note of the following details:
- **OATH verification code** is logged as the authentication method for both OATH hardware and software tokens (such as the Microsoft Authenticator app). - The **Authentication details** tab can initially show incomplete or inaccurate data until log information is fully aggregated. Known examples include: - A **satisfied by claim in the token** message is incorrectly displayed when sign-in events are initially logged.
- - The **Primary authentication** row isn't initially logged.
+ - The **Primary authentication** row isn't initially logged.
+- If you're unsure of a detail in the logs, gather the **Request ID** and **Correlation ID** to use for further analysis or troubleshooting.
## Sign-in data used by other services
active-directory Howto Configure Prerequisites For Reporting Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/howto-configure-prerequisites-for-reporting-api.md
Previously updated : 10/31/2022 Last updated : 01/11/2023 -+ # Prerequisites to access the Azure Active Directory reporting API
-The [Azure Active Directory (Azure AD) reporting APIs](./concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from many programming languages and tools.
+The Azure Active Directory (Azure AD) [reporting APIs](/graph/api/resources/azure-ad-auditlog-overview) provide you with programmatic access to the data through a set of REST APIs. You can call these APIs from many programming languages and tools. The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs.
-The reporting API uses [OAuth](../../api-management/api-management-howto-protect-backend-with-aad.md) to authorize access to the web APIs.
+This article describes how to enable Microsoft Graph access to the Azure AD reporting APIs, both in the Azure portal and through PowerShell.
-To prepare your access to the reporting API, you need to:
+## Roles and license requirements
-1. [Assign roles](#assign-roles)
-2. [License Requirements](#license-requirements)
-3. [Register an application](#register-an-application)
-4. [Grant permissions](#grant-permissions)
-5. [Gather configuration settings](#gather-configuration-settings)
-
-## Assign roles
-
-To get access to the reporting data through the API, you need to have one of the following roles assigned:
+To get access to the reporting data through the API, you need to have one of the following roles:
- Security Reader
- Security Administrator
- Global Administrator
-## License Requirements
-
-In order to access the sign-in reports for a tenant, an Azure AD tenant must have associated Azure AD Premium license. Azure AD Premium P1 (or above) license is required to access sign-in reports for any Azure AD tenant. Alternatively if the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement.
--
-## Register an application
+In order to access the sign-in reports for a tenant, an Azure AD tenant must have an associated Azure AD Premium P1 or P2 license. Alternatively, if the directory type is Azure AD B2C, the sign-in reports are accessible through the API without any additional license requirement.
-Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens.
-
-To configure your directory to access the Azure AD reporting API, you must sign in to the [Azure portal](https://portal.azure.com) with an Azure administrator account that is also a member of the **Global Administrator** directory role in your Azure AD tenant.
+Registration is needed even if you're accessing the reporting API using a script. The registration gives you an **Application ID**, which is required for the authorization calls and enables your code to receive tokens. To configure your directory to access the Azure AD reporting API, you must sign in to the [Azure portal](https://portal.azure.com) with one of the required roles.
> [!IMPORTANT]
-> Applications running under credentials with administrator privileges can be very powerful, so please be sure to keep the application's ID and secret credentials in a secure location.
+> Applications running under credentials with administrator privileges can be very powerful, so be sure to keep the application's ID and secret credentials in a secure location.
>
+## Enable the Microsoft Graph API through the Azure portal
-**To register an Azure AD application:**
-
-1. In the [Azure portal](https://portal.azure.com), select **Azure Active Directory** from the left navigation pane.
-
- ![Screenshot shows Azure Active Directory selected from the Azure portal menu.](./media/howto-configure-prerequisites-for-reporting-api/01.png)
+To enable your application to access Microsoft Graph without user intervention, you'll need to register your application with Azure AD, then grant permissions to the Microsoft Graph API. This article covers the steps to follow in the Azure portal.
-2. In the **Azure Active Directory** page, select **App registrations**.
+### Register an Azure AD application
- ![Screenshot shows App registrations selected from the Manage menu.](./media/howto-configure-prerequisites-for-reporting-api/02.png)
+1. In the [Azure portal](https://portal.azure.com), go to **Azure Active Directory** > **App registrations**.
+1. Select **New registration**.
-3. From the **App registrations** page, select **New registration**.
+ ![Screenshot of the App registrations page, with the New registration button highlighted.](./media/howto-configure-prerequisites-for-reporting-api/new-app-registration.png)
- ![Screenshot shows New registration selected.](./media/howto-configure-prerequisites-for-reporting-api/03.png)
-
-4. The **Registration an Application** page:
+1. On the **Register an application** page:
+ 1. Give the application a **Name** such as `Reporting API application`.
+ 1. For **Supported account types**, select **Accounts in this organizational directory only**.
+ 1. In the **Redirect URI** section, select **Web** from the list and type `https://localhost`.
+ 1. Select **Register**.
![Screenshot shows the Register an application page where you can enter the values in this step.](./media/howto-configure-prerequisites-for-reporting-api/04.png)
- a. In the **Name** textbox, type `Reporting API application`.
-
- b. For **Supported accounts type**, select **Accounts in this organizational only**.
-
- c. In the **Redirect URL** select **Web** textbox, type `https://localhost`.
-
- d. Select **Register**.
-
+### Grant permissions
-## Grant permissions
+To access the Azure AD reporting API, you must grant your app *Read directory data* and *Read all audit log data* permissions for the Microsoft Graph API.
-To access the Azure AD reporting API, you must grant your app the following two permissions:
+1. Go to **Azure Active Directory** > **App registrations** > **API permissions** and select **Add a permission**.
-| API | Permission |
-| | |
-| Microsoft Graph | Read directory data |
-| Microsoft Graph | Read all audit log data |
+ ![Screenshot of the API permissions menu option and Add permissions button.](./media/howto-configure-prerequisites-for-reporting-api/api-permissions-new-permission.png)
-![Screenshot shows where you can select Add a permission in the A P I permissions pane.](./media/howto-configure-prerequisites-for-reporting-api/api-permissions.png)
+1. Select **Microsoft Graph** > **Application permissions**.
+1. Add **Directory.Read.All** and **AuditLog.Read.All**, then select the **Add permissions** button.
+ - If you need other permissions to run your queries, you can add them now or modify the permissions later in Microsoft Graph.
+ - For more information, see [Work with Graph Explorer](/graph/graph-explorer/graph-explorer-features).
-The following section lists the steps for API setting.
+ ![Screenshot shows the Request API permissions page where you can select Application permissions.](./media/howto-configure-prerequisites-for-reporting-api/directory-read-all.png)
-**To grant your application permissions to use the APIs:**
+1. On the **Reporting API Application - API Permissions** page, select **Grant admin consent for Default Directory**.
+ ![Screenshot shows the Reporting API Application API permissions page where you can select Grant admin consent.](./media/howto-configure-prerequisites-for-reporting-api/api-permissions-grant-consent.png)
-1. Select **API permissions**, and then select **Add a permission**.
+## Access reports using Microsoft Graph Explorer
- ![Screenshot shows the A P I Permissions page where you can select Add a permission.](./media/howto-configure-prerequisites-for-reporting-api/add-api-permission.png)
+Once you have the app registration configured, you can run activity log queries in Microsoft Graph.
-2. On the **Request API permissions page** page, locate **Microsoft Graph**.
+1. Sign in to https://graph.microsoft.com using an account with the **Security Reader** role. To confirm that you're signed in with the appropriate role, select your profile icon in the upper-right corner of Microsoft Graph.
+1. Use one of the following queries to start using Microsoft Graph for accessing activity logs:
+ - GET `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits`
+ - GET `https://graph.microsoft.com/v1.0/auditLogs/signIns`
+ - For more information on Microsoft Graph queries for activity logs, see [Activity reports API overview](/graph/api/resources/azuread-auditlog-overview)
- ![Screenshot shows the Request A P I permissions page where you can select Azure Active Directory Graph.](./media/howto-configure-prerequisites-for-reporting-api/select-microsoft-graph-api.png)
+ ![Screenshot of an activity log GET query in Microsoft Graph.](./media/howto-configure-prerequisites-for-reporting-api/graph-sample-get-query.png)
-3. On the **Required permissions** page, select **Application Permissions**. Select the **Directory** checkbox, and then select **Directory.ReadAll**. Select the **AuditLog** checkbox, and then select **AuditLog.Read.All**. Select **Add permissions**.
+## Access reports using Microsoft Graph PowerShell
- ![Screenshot shows the Request A P I permissions page where you can select Application permissions.](./media/howto-configure-prerequisites-for-reporting-api/select-permissions.png)
+To use PowerShell to access the Azure AD reporting API, you'll need to gather a few configuration settings. These settings were created as a part of the [app registration process](#register-an-azure-ad-application).
-4. On the **Reporting API Application - API Permissions** page, select **Grant admin consent**.
-
- ![Screenshot shows the Reporting A P I Application A P I permissions page where you can select Grant admin consent.](./media/howto-configure-prerequisites-for-reporting-api/grant-admin-consent.png)
--
-## Gather configuration settings
-
-This section shows you how to get the following settings from your directory:
-- Domain name
-- Client ID
+- Tenant ID
+- Client app ID
- Client secret or certificate

You need these values when configuring calls to the reporting API. We recommend using a certificate because it's more secure.
-### Get your domain name
-
-**To get your domain name:**
-
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory selected from the Azure portal menu to get domain name.](./media/howto-configure-prerequisites-for-reporting-api/01.png)
-
-2. On the **Azure Active Directory** page, select **Custom domain names**.
-
- ![Screenshot shows Custom domain names selected from Azure Active Directory.](./media/howto-configure-prerequisites-for-reporting-api/09.png)
-
-3. Copy your domain name from the list of domains.
--
-### Get your application's client ID
-
-**To get your application's client ID:**
-
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory selected from the Azure portal menu to get application's client ID.](./media/howto-configure-prerequisites-for-reporting-api/01.png)
-
-2. Select your application from the **App Registrations** page.
-
-3. From the application page, navigate to **Application ID** and select **Click to copy**.
-
- ![Screenshot shows the Reporting A P I Application page where you can copy the Application I D.](./media/howto-configure-prerequisites-for-reporting-api/11.png)
--
-### Get your application's client secret
-
-**To get your application's client secret:**
-
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory selected from the Azure portal menu to get application's client secret.](./media/howto-configure-prerequisites-for-reporting-api/01.png)
-
-2. Select your application from the **App Registrations** page.
-
-3. Select **Certificates and Secrets** on the **API Application** page, in the **Client Secrets** section, select **+ New Client Secret**.
-
- ![Screenshot shows the Certificates & secrets page where you can add a client secret.](./media/howto-configure-prerequisites-for-reporting-api/12.png)
-
-4. On the **Add a client secret** page, add:
-
- a. In the **Description** textbox, type `Reporting API`.
-
- b. As **Expires**, select **In 2 years**.
-
- c. Select **Save**.
-
- d. Copy the key value.
-
-### Upload the certificate of your application
-
-**To upload certificate:**
-
-1. In the [Azure portal](https://portal.azure.com), on the left navigation pane, select **Azure Active Directory**.
-
- ![Screenshot shows Azure Active Directory selected from the Azure portal menu to upload the certificate.](./media/howto-configure-prerequisites-for-reporting-api/01.png)
-
-2. On the **Azure Active Directory** page, select **App Registration**.
-3. From the application page, select your application.
-4. Select **Certificates & secrets**.
-5. Select **Upload certificate**.
-6. Select the file icon, go to a certificate, and then select **Add**.
-
- ![Screenshot shows uploading the certificate.](./media/howto-configure-prerequisites-for-reporting-api/upload-certificate.png)
-
-## Troubleshoot errors in the reporting API
-
-This section lists the common error messages you may run into while accessing activity reports using the Microsoft Graph API and steps for their resolution.
-
-### Error: Failed to get user roles from Microsoft Graph
-
- Sign into your account using both sign-in buttons in the Graph Explorer UI to avoid getting an error when trying to sign in using Graph Explorer.
-
-![Graph Explorer](./media/troubleshoot-graph-api/graph-explorer.png)
+1. Go to **Azure Active Directory** > **App Registrations**.
+1. Copy the **Directory (tenant) ID**.
+1. Copy the **Application (client) ID**.
+1. Go to **App registrations** > Select your application > **Certificates & secrets** > **Certificates** > **Upload certificate** and upload your certificate's public key file.
+ - If you don't have a certificate to upload, follow the steps outlined in the [Create a self-signed certificate to authenticate your application](../develop/howto-create-self-signed-certificate.md) article, or use the sketch after this list.
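If you only need a test certificate, the following PowerShell sketch, adapted from the certificate tutorial steps this article consolidates, creates and exports a self-signed certificate that you can upload. Don't use self-signed certificates in production.

```powershell
# Create a self-signed test certificate in the current user's certificate store
$cert = New-SelfSignedCertificate -Subject "CN=MSGraph_ReportingAPI" `
    -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable `
    -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256

# Export the public key file to upload to the app registration
Export-Certificate -Cert $cert -FilePath "C:\Reporting\MSGraph_ReportingAPI.cer"
```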
-### Error: Failed to do premium license check from Microsoft Graph
+Next you'll authenticate with the configuration settings you just gathered. Open PowerShell and run the following command, replacing the placeholders with your information.
-If you run into this error message while trying to access sign-ins using Graph Explorer, choose **Modify Permissions** underneath your account on the left nav, and select **Tasks.ReadWrite** and **Directory.Read.All**.
+```powershell
+Connect-MgGraph -ClientId YOUR_APP_ID -TenantId YOUR_TENANT_ID -CertificateName YOUR_CERT_SUBJECT # You can use -CertificateThumbprint instead of -CertificateName
+```
-![Modify permissions UI](./media/troubleshoot-graph-api/modify-permissions.png)
+Microsoft Graph PowerShell cmdlets (a usage sketch follows this list):
-### Error: Tenant isn't B2C or tenant doesn't have premium license
+- **Audit logs:** `Get-MgAuditLogDirectoryAudit`
+- **Sign-in logs:** `Get-MgAuditLogSignIn`
+- **Provisioning logs:** `Get-MgAuditLogProvisioning`
+- Explore the full list of [reporting-related Microsoft Graph PowerShell cmdlets](/powershell/module/microsoft.graph.reports).
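As a minimal usage sketch, the following commands assume the `Connect-MgGraph` session above; the property names are standard on the objects these cmdlets return:

```powershell
# List the 50 most recent sign-in events with a few key properties
Get-MgAuditLogSignIn -Top 50 |
    Select-Object CreatedDateTime, UserPrincipalName, AppDisplayName

# List the 50 most recent directory audit events
Get-MgAuditLogDirectoryAudit -Top 50 |
    Select-Object ActivityDateTime, ActivityDisplayName, Result
```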
-Accessing sign-in reports requires an Azure Active Directory premium 1 (P1) license. If you see this error message while accessing sign-ins, make sure that your tenant is licensed with an Azure AD P1 license.
+Programmatic access APIs:
+- **Security detections:** [Identity Protection risk detections API](/graph/api/resources/identityprotection-root)
+- **Tenant provisioning events:** [Provisioning logs API](/graph/api/resources/provisioningobjectsummary)
-### Error: The allowed roles doesn't include User.
+## Troubleshoot errors in Azure Active Directory reporting API
- Avoid errors trying to access audit logs or sign-in using the API. Make sure your account is part of the **Security Reader** or **Reports Reader** role in your Azure Active Directory tenant.
+**500 HTTP internal server error while accessing Microsoft Graph beta endpoint**: The Microsoft Graph beta endpoint isn't currently supported. Make sure to access the activity logs using the Microsoft Graph v1.0 endpoint:
+- GET `https://graph.microsoft.com/v1.0/auditLogs/directoryAudits`
+- GET `https://graph.microsoft.com/v1.0/auditLogs/signIns`
-### Error: Application missing Azure AD 'Read directory data' permission
+**Error: Neither tenant is B2C or tenant doesn't have premium license**: Accessing sign-in reports requires an Azure Active Directory Premium P1 license. If you see this error message while accessing sign-ins, make sure that your tenant is licensed with an Azure AD Premium P1 license.
-### Error: Application missing Microsoft Graph API 'Read all audit log data' permission
+**Error: User isn't in the allowed roles**: If you see this error message while trying to access audit logs or sign-ins using the API, make sure that your account is part of the **Security Reader** or **Reports Reader** role in your Azure Active Directory tenant.
-Follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
+**Error: Application missing Azure AD 'Read directory data' or 'Read all audit log data' permission**: Revisit the **[Grant permissions](#grant-permissions)** section of this article to ensure the permissions are properly set.
## Next steps
-* [Get data using the Azure Active Directory reporting API with certificates](tutorial-access-api-with-certificates.md)
+* [Get started with Azure Active Directory Identity Protection and Microsoft Graph](../identity-protection/howto-identity-protection-graph-api.md)
* [Audit API reference](/graph/api/resources/directoryaudit)
-* [Sign-in activity report API reference](/graph/api/resources/signin)
+* [Sign-in API reference](/graph/api/resources/signin)
active-directory Quickstart Access Log With Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/quickstart-access-log-with-graph-api.md
# Quickstart: Access Azure AD logs with the Microsoft Graph API
-With the information in the Azure AD sign-ins log, you can figure out what happened if a sign-in of a user failed. This quickstart shows how to you can access the sign-ins log using the Graph API.
+With the information in the Azure Active Directory (Azure AD) sign-in logs, you can figure out what happened if a user's sign-in failed. This quickstart shows you how to access the sign-ins log using the Graph API.
## Prerequisites

To complete the scenario in this quickstart, you need:

-- **Access to an Azure AD tenant** - If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- **A test account called Isabella Simonsen** - If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users-azure-active-directory.md#add-a-new-user).
+- **Access to an Azure AD tenant**: If you don't have access to an Azure AD tenant, see [Create your Azure free account today](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- **A test account called Isabella Simonsen**: If you don't know how to create a test account, see [Add cloud-based users](../fundamentals/add-users-azure-active-directory.md#add-a-new-user).
+- **Access to the reporting API**: If you haven't configured access yet, see [How to configure the prerequisites for the reporting API](howto-configure-prerequisites-for-reporting-api.md).
## Perform a failed sign-in
active-directory Troubleshoot Graph Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/troubleshoot-graph-api.md
-- Title: 'Troubleshoot errors in Azure Active Directory reporting API | Microsoft Docs'
-description: Provides you with a resolution to errors while calling Azure Active Directory Reporting APIs.
------- Previously updated : 11/01/2022------
-# Troubleshoot errors in Azure Active Directory reporting API
-
-This article lists the common error messages you may run into while accessing activity reports using the Microsoft Graph API and steps for their resolution.
-
-### 500 HTTP internal server error while accessing Microsoft Graph V2 endpoint
-
-We don't currently support the Microsoft Graph v2 endpoint - make sure to access the activity logs using the Microsoft Graph v1 endpoint.
-
-### Error: Neither tenant is B2C or tenant doesn't have premium license
-
-Accessing sign-in reports requires an Azure Active Directory premium 1 (P1) license. If you see this error message while accessing sign-ins, make sure that your tenant is licensed with an Azure AD P1 license.
-
-### Error: User isn't in the allowed roles
-
-If you see this error message while trying to access audit logs or sign-ins using the API, make sure that your account is part of the **Security Reader** or **Reports Reader** role in your Azure Active Directory tenant.
-
-### Error: Application missing Azure AD 'Read directory data' permission
-
-Follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
-
-### Error: Application missing Microsoft Graph API 'Read all audit log data' permission
-
-Follow the steps in the [Prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md) to ensure your application is running with the right set of permissions.
-
-## Next Steps
-
-[Use the audit API reference](/graph/api/resources/directoryaudit)
-[Use the sign-in activity report API reference](/graph/api/resources/signin)
active-directory Tutorial Access Api With Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/tutorial-access-api-with-certificates.md
- Title: Tutorial for AD Reporting API with certificates | Microsoft Docs
-description: This tutorial explains how to use the Azure AD Reporting API with certificate credentials to get data from directories without user intervention.
------- Previously updated : 10/31/2022-----
-# Customer intent: As a developer, I want to learn how to access the Azure AD reporting API using certificates so that I can create an application that does not require user intervention to access reports.
---
-# Tutorial: Get data using the Azure Active Directory reporting API with certificates
-
-The [Azure Active Directory (Azure AD) reporting APIs](concept-reporting-api.md) provide you with programmatic access to the data through a set of REST-based APIs. You can call these APIs from various programming languages and tools. If you want to access the Azure AD Reporting API without user intervention, you must configure your access to use certificates.
-
-In this tutorial, you learn how to use a test certificate to access the MS Graph API for reporting. We don't recommend using test certificates in a production environment.
-
-## Prerequisites
-
-1. To access sign-in data, make sure you have an Azure AD tenant with a premium (P1/P2) license. See [Getting started with Azure Active Directory Premium](../fundamentals/active-directory-get-started-premium.md) to upgrade your Azure AD edition. If you didn't have any activities data prior to the upgrade, it will take a couple of days for the data to show up in the reports after you upgrade to a premium license.
-
-2. Create or switch to a user account in the **Global Administrator**, **Security Administrator**, **Security Reader** or **Reports Reader** role for the tenant.
-
-3. Complete the [prerequisites to access the Azure Active Directory reporting API](howto-configure-prerequisites-for-reporting-api.md).
-
-4. Download and install [Azure AD PowerShell V2](https://github.com/Azure/azure-docs-powershell-azuread/blob/master/docs-conceptual/azureadps-2.0/install-adv2.md).
-
-5. Install [MSCloudIdUtils](https://www.powershellgallery.com/packages/MSCloudIdUtils/). This module provides several utility cmdlets including:
- - The Microsoft Authentication Library libraries needed for authentication
- - Access tokens from user, application keys, and certificates using Microsoft Authentication Library
- - Graph API handling paged results
-
-6. If it's your first time using the module run **Install-MSCloudIdUtilsModule**, otherwise import it using the **Import-Module** PowerShell command. Your session should look similar to this screen:
- ![Windows PowerShell](./media/tutorial-access-api-with-certificates/module-install.png)
-
-7. Use the **New-SelfSignedCertificate** PowerShell commandlet to create a test certificate.
-
- ```
- $cert = New-SelfSignedCertificate -Subject "CN=MSGraph_ReportingAPI" -CertStoreLocation "Cert:\CurrentUser\My" -KeyExportPolicy Exportable -KeySpec Signature -KeyLength 2048 -KeyAlgorithm RSA -HashAlgorithm SHA256
- ```
-
-8. Use the **Export-Certificate** commandlet to export it to a certificate file.
-
- ```
- Export-Certificate -Cert $cert -FilePath "C:\Reporting\MSGraph_ReportingAPI.cer"
-
- ```
-
-## Get data using the Azure Active Directory reporting API with certificates
-
-1. Go to the [Azure portal](https://portal.azure.com) > **Azure Active Directory** > **App registrations** and choose your application from the list.
-
-2. From the Application registration area, select **Certificates & secrets** under the **Manage** section, and then select **Upload Certificate**.
-
-3. Select the certificate file from the previous step and select **Add**.
-
-4. Note the Application ID, and the thumbprint of the certificate you registered with your application. To find the thumbprint, from your application page in the portal, go to **Certificates & secrets** under **Manage** section. The thumbprint will be under the **Certificates** list.
-
-5. Open the application manifest in the inline manifest editor and verify the *keyCredentials* property is updated with your new certificate information as shown below -
-
- ```
- "keyCredentials": [
- {
- "customKeyIdentifier": "$base64Thumbprint", //base64 encoding of the certificate hash
- "keyId": "$keyid", //GUID to identify the key in the manifest
- "type": "AsymmetricX509Cert",
- "usage": "Verify",
- "value": "$base64Value" //base64 encoding of the certificate raw data
- }
- ]
- ```
-6. Now, you can get an access token for the MS Graph API using this certificate. Use the **Get-MSCloudIdMSGraphAccessTokenFromCert** cmdlet from the MSCloudIdUtils PowerShell module, passing in the Application ID and the thumbprint you obtained from the previous step.
-
- ![Screenshot shows a PowerShell window with a command that creates an access token.](./media/tutorial-access-api-with-certificates/getaccesstoken.png)
-
-7. Use the access token in your PowerShell script to query the Graph API. Use the **Invoke-MSCloudIdMSGraphQuery** cmdlet from the MSCloudIDUtils to enumerate the `signins` and `directoryAudits` endpoint. This cmdlet handles multi-paged results, and sends those results to the PowerShell pipeline.
-
-8. Query the `directoryAudits` endpoint to retrieve the audit logs.
-
- ![Screenshot shows a PowerShell window with a command to query the directoryAudits endpoint using the access token from earlier in this procedure.](./media/tutorial-access-api-with-certificates/query-directoryAudits.png)
-
-9. Query the `signins` endpoint to retrieve the sign-in logs.
-
- ![Screenshot shows a PowerShell window with a command to query the signins endpoint using the access token from earlier in this procedure.](./media/tutorial-access-api-with-certificates/query-signins.png)
-
-10. You can now choose to export this data to a CSV and save to a SIEM system. You can also wrap your script in a scheduled task to get Azure AD data from your tenant periodically without having to store application keys in the source code.
-
-## Next steps
-
-* [Get a first impression of the reporting APIs](concept-reporting-api.md)
-* [Audit API reference](/graph/api/resources/directoryaudit)
-* [Sign-in activity report API reference](/graph/api/resources/signin)
active-directory Workbook Legacy Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/workbook-legacy-authentication.md
++
+ Title: Sign-ins using legacy authentication workbook in Azure AD | Microsoft Docs
+description: Learn how to use the sign-ins using legacy authentication workbook.
+++++++ Last updated : 11/01/2022++++++
+# Sign-ins using legacy authentication workbook
+
+Have you ever wondered how you can determine whether it's safe to turn off legacy authentication in your tenant? The sign-ins using legacy authentication workbook helps you to answer this question.
+
+This article gives you an overview of this workbook.
++
+## Description
+
+![Screenshot of workbook thumbnail.](./media/workbook-legacy-authentication/sign-ins-legacy-auth.png)
+
+Azure AD supports several of the most widely used authentication and authorization protocols including legacy authentication. Legacy authentication refers to basic authentication, which was once a widely used industry-standard method for passing user name and password information through a client to an identity provider.
+
+Examples of applications that commonly or only use legacy authentication are:
+
+- Microsoft Office 2013 or older.
+
+- Apps using legacy auth with mail protocols like POP, IMAP, and SMTP AUTH.
++
+Single-factor authentication (for example, username and password) doesn't provide the required level of protection for today's computing environments. Passwords are easy to guess, and humans are bad at choosing good passwords.
++
+Unfortunately, legacy authentication:
+
+- Doesn't support multi-factor authentication (MFA) or other strong authentication methods.
+
+- Makes it impossible for your organization to move to passwordless authentication.
+
+To improve the security of your Azure AD tenant and experience of your users, you should disable legacy authentication. However, important user experiences in your tenant might depend on legacy authentication. Before shutting off legacy authentication, you may want to find those cases so you can migrate them to more secure authentication.
+
+The sign-ins using legacy authentication workbook lets you see all legacy authentication sign-ins in your environment so you can find and migrate critical workflows to more secure authentication methods before you shut off legacy authentication.
+
+
+
+
+## Sections
+
+With this workbook, you can distinguish between interactive and non-interactive sign-ins. This workbook highlights which legacy authentication protocols are used throughout your tenant.
+
+The data collection consists of three steps:
+
+1. Select a legacy authentication protocol, and then select an application to filter by users accessing that application.
+
+2. Select a user to see all their legacy authentication sign-ins to the selected app.
+
+3. View all legacy authentication sign-ins for the user to understand how legacy authentication is being used.
+++
+
++
+## Filters
++
+This workbook supports multiple filters:
++
+- Time range (up to 90 days)
+
+- User principal name
+
+- Application
+
+- Status of the sign-in (success or failure)
++
+![Filter options](./media/workbook-legacy-authentication/filter-options.png)
++
+## Best practices
++
+- For guidance on blocking legacy authentication in your environment, see [Block legacy authentication to Azure AD with conditional access](../conditional-access/block-legacy-authentication.md).
+
+- Many email protocols that once relied on legacy authentication now support more secure modern authentication methods. If you see legacy email authentication protocols in this workbook, consider migrating to modern authentication for email instead. For more information, see [Deprecation of Basic authentication in Exchange Online](/exchange/clients-and-mobile-in-exchange-online/deprecation-of-basic-authentication-exchange-online).
+
+- Some clients can use both legacy authentication and modern authentication, depending on client configuration. If you see "modern mobile/desktop client" or "browser" for a client in the Azure AD logs, it's using modern authentication. If it has a specific client or protocol name, such as "Exchange ActiveSync", it's using legacy authentication to connect to Azure AD. The client types in conditional access and the Azure AD reporting page in the Azure portal distinguish modern authentication clients from legacy authentication clients for you, and only legacy authentication is captured in this workbook.
++
+## Next steps
+
+- To learn more about identity protection, see [What is identity protection](../identity-protection/overview-identity-protection.md).
+
+- For more information about Azure AD workbooks, see [How to use Azure AD workbooks](howto-use-azure-monitor-workbooks.md).
aks Api Server Authorized Ip Ranges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/api-server-authorized-ip-ranges.md
The *API server authorized IP ranges* feature has the following limitations:
- The *API server authorized IP ranges* feature was moved out of preview in October 2019. For clusters created after the feature was moved out of preview, this feature is only supported on the *Standard* SKU load balancer. Any existing clusters on the *Basic* SKU load balancer with the *API server authorized IP ranges* feature enabled will continue to work as is. However, these clusters cannot be migrated to a *Standard* SKU load balancer. Existing clusters will continue to work if the Kubernetes version and control plane are upgraded. - The *API server authorized IP ranges* feature isn't supported on private clusters.-- When using this feature with clusters that use [Node Public IP](use-multiple-node-pools.md#assign-a-public-ip-per-node-for-your-node-pools), the node pools using Node Public IP must use public IP prefixes. The public IP prefixes must be added as authorized ranges.
+- When using this feature with clusters that use [Node Public IP](use-node-public-ips.md), the node pools using Node Public IP must use public IP prefixes. The public IP prefixes must be added as authorized ranges.
## Overview of API server authorized IP ranges
For more information about the API server and other cluster components, see [Kub
## Create an AKS cluster with API server authorized IP ranges enabled
-Create a cluster using the [az aks create][az-aks-create] and specify the *`--api-server-authorized-ip-ranges`* parameter to provide a list of authorized IP address ranges. These IP address ranges are usually address ranges used by your on-premises networks or public IPs. When you specify a CIDR range, start with the first IP address in the range. For example, *137.117.106.90/29* is a valid range, but make sure you specify the first IP address in the range, such as *137.117.106.88/29*.
+Create a cluster using the [`az aks create`][az-aks-create] and specify the *`--api-server-authorized-ip-ranges`* parameter to provide a list of authorized IP address ranges. These IP address ranges are usually address ranges used by your on-premises networks or public IPs. When you specify a CIDR range, start with the first IP address in the range. For example, *137.117.106.90/29* is a valid range, but make sure you specify the first IP address in the range, such as *137.117.106.88/29*.
> [!IMPORTANT]
> By default, your cluster uses the [Standard SKU load balancer][standard-sku-lb] which you can use to configure the outbound gateway. When you enable API server authorized IP ranges during cluster creation, the public IP for your cluster is also allowed by default in addition to the ranges you specify. If you specify *""* or no value for *`--api-server-authorized-ip-ranges`*, API server authorized IP ranges will be disabled. Note that if you're using PowerShell, use *`--api-server-authorized-ip-ranges=""`* (with equals sign) to avoid any parsing issues.
az aks create \
## Update a cluster's API server authorized IP ranges
-To update the API server authorized IP ranges on an existing cluster, use [az aks update][az-aks-update] command and use the *`--api-server-authorized-ip-ranges`*, *`--load-balancer-outbound-ip-prefixes`*, *`--load-balancer-outbound-ips`*, or *`--load-balancer-outbound-ip-prefixes`* parameters.
+To update the API server authorized IP ranges on an existing cluster, use [`az aks update`][az-aks-update] command and use the *`--api-server-authorized-ip-ranges`*, *`--load-balancer-outbound-ip-prefixes`*, *`--load-balancer-outbound-ips`*, or *`--load-balancer-outbound-ip-prefixes`* parameters.
The following example updates API server authorized IP ranges on the cluster named *myAKSCluster* in the resource group named *myResourceGroup*. The IP address range to authorize is *73.140.245.0/24*:
You can also use *0.0.0.0/32* when specifying the *`--api-server-authorized-ip-r
## Disable authorized IP ranges
-To disable authorized IP ranges, use [az aks update][az-aks-update] and specify an empty range to disable API server authorized IP ranges. For example:
+To disable authorized IP ranges, use [`az aks update`][az-aks-update] and specify an empty range to disable API server authorized IP ranges. For example:
```azurecli-interactive az aks update \
az aks update \
## Find existing authorized IP ranges
-To find IP ranges that have been authorized, use [az aks show][az-aks-show] and specify the cluster's name and resource group. For example:
+To find IP ranges that have been authorized, use [`az aks show`][az-aks-show] and specify the cluster's name and resource group. For example:
```azurecli-interactive az aks show \
aks Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-ad-rbac.md
kubectl create namespace dev
> [!NOTE]
> In Kubernetes, *Roles* define the permissions to grant, and *RoleBindings* apply them to desired users or groups. These assignments can be applied to a given namespace, or across the entire cluster. For more information, see [Using Kubernetes RBAC authorization][rbac-authorization].
+>
+> If the user you grant the Kubernetes RBAC binding for is in the same Azure AD tenant, assign permissions based on the *userPrincipalName (UPN)*. If the user is in a different Azure AD tenant, query for and use the *objectId* property instead.
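For example, you can look up a user's object ID with the Azure CLI. This is a hedged sketch; the UPN is a placeholder, and older CLI versions expose the property as `objectId` rather than `id`.

```azurecli-interactive
# Look up the object ID for a user (replace the UPN with the guest or external user's identifier)
az ad user show --id user@contoso.com --query id --output tsv
```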
3. Create a Role for the *dev* namespace, which grants full permissions to the namespace. In production environments, you can specify more granular permissions for different users or groups. Create a file named `role-dev-namespace.yaml` and paste the following YAML manifest:
aks Azure Cni Overlay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking | | Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency | | Kubernetes Network Policies | Azure Network Policies, Calico | Calico |
-| OS platforms supported | Linux only | Linux only |
+| OS platforms supported | Linux and Windows | Linux only |
## IP address planning
Use the traditional VNet option when:
## Limitations with Azure CNI Overlay
-The overlay solution has the following limitations today
+The overlay solution has the following limitations:
-* You can't deploy multiple overlay clusters on the same subnet.
* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay. * You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
For any AKS clusters created or upgraded after March 2022 Azure Kubernetes Servi
To verify if TLS Bootstrapping is enabled on your cluster browse to the following paths:
-* On a Linux node: */var/lib/kubelet/bootstrap-kubeconfig*
+* On a Linux node: */var/lib/kubelet/bootstrap-kubeconfig* or */host/var/lib/kubelet/bootstrap-kubeconfig*
* On a Windows node: *C:\k\bootstrap-config*

To access agent nodes, see [Connect to Azure Kubernetes Service cluster nodes for maintenance or troubleshooting][aks-node-access] for more information.
aks Concepts Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-security.md
Container security protects the entire end-to-end pipeline from build to the app
The Secure Supply Chain includes the build environment and registry.

Kubernetes includes security components, such as *pod security standards* and *Secrets*. Meanwhile, Azure includes components like Active Directory, Microsoft Defender for Containers, Azure Policy, Azure Key Vault, network security groups and orchestrated cluster upgrades. AKS combines these security components to:

* Provide a complete Authentication and Authorization story.
* Leverage AKS Built-in Azure Policy to secure your applications.
* Provide end-to-end insight from build through your application with Microsoft Defender for Containers.
* Keep your AKS cluster running the latest OS security updates and Kubernetes releases.
* Provide secure pod traffic and access to sensitive credentials.
-This article introduces the core concepts that secure your applications in AKS:
--- [Security concepts for applications and clusters in Azure Kubernetes Service (AKS)](#security-concepts-for-applications-and-clusters-in-azure-kubernetes-service-aks)
- - [Build security](#build-security)
- - [Registry security](#registry-security)
- - [Cluster security](#cluster-security)
- - [Node security](#node-security)
- - [Compute isolation](#compute-isolation)
- - [Cluster upgrades](#cluster-upgrades)
- - [Cordon and drain](#cordon-and-drain)
- - [Network security](#network-security)
- - [Azure network security groups](#azure-network-security-groups)
- - [Application Security](#application-security)
- - [Kubernetes Secrets](#kubernetes-secrets)
- - [Next steps](#next-steps)
+This article introduces the core concepts that secure your applications in AKS.
## Build Security
-As the entry point for the Supply Chain, it is important to conduct static analysis of image builds before they are promoted down the pipeline. This includes vulnerability and compliance assessment. It is not about failing a build because it has a vulnerability, as that will break development. It is about looking at the "Vendor Status" to segment based on vulnerabilities that are actionable by the development teams. Also leverage "Grace Periods" to allow developers time to remediate identified issues.
+As the entry point for the Supply Chain, it is important to conduct static analysis of image builds before they are promoted down the pipeline. This includes vulnerability and compliance assessment. It is not about failing a build because it has a vulnerability, as that will break development. It is about looking at the "Vendor Status" to segment based on vulnerabilities that are actionable by the development teams. Also leverage "Grace Periods" to allow developers time to remediate identified issues.
## Registry Security
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
A minimum value for maximum pods per node is enforced to guarantee space for sys
> [!NOTE]
> The minimum value in the table above is strictly enforced by the AKS service. You cannot set a maxPods value lower than the minimum shown, as doing so can prevent the cluster from starting.
-* **Azure CLI**: Specify the `--max-pods` argument when you deploy a cluster with the [az aks create][az-aks-create] command. The maximum value is 250.
+* **Azure CLI**: Specify the `--max-pods` argument when you deploy a cluster with the [`az aks create`][az-aks-create] command. The maximum value is 250. A short sketch follows this list.
* **Resource Manager template**: Specify the `maxPods` property in the [ManagedClusterAgentPoolProfile] object when you deploy a cluster with a Resource Manager template. The maximum value is 250.
* **Azure portal**: Change the `Max pods per node` field in the node pool settings when creating a cluster or adding a new node pool.
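For example, a minimal Azure CLI sketch that sets the maximum pods per node at cluster creation; the resource names are placeholders:

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --max-pods 100
```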
$ az network vnet subnet list \
/subscriptions/<guid>/resourceGroups/myVnet/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/default
```
-Use the [az aks create][az-aks-create] command with the `--network-plugin azure` argument to create a cluster with advanced networking. Update the `--vnet-subnet-id` value with the subnet ID collected in the previous step:
+Use the [`az aks create`][az-aks-create] command with the `--network-plugin azure` argument to create a cluster with advanced networking. Update the `--vnet-subnet-id` value with the subnet ID collected in the previous step:
```azurecli-interactive az aks create \
A drawback with the traditional CNI is the exhaustion of pod IP addresses as the
The [prerequisites][prerequisites] already listed for Azure CNI still apply, but there are a few additional limitations:
-* Only linux node clusters and node pools are supported.
* AKS Engine and DIY clusters are not supported.
* Azure CLI version `2.37.0` or later.
az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepoo
Azure CNI provides the capability to monitor IP subnet usage. To enable IP subnet usage monitoring, follow the steps below: ### Get the YAML file
-1. Download or grep the file named container-azm-ms-agentconfig.yaml from [github][github].
+1. Download the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
2. Find azure_subnet_ip_usage in integrations. Set `enabled` to `true`. 3. Save the file.
aks Egress Outboundtype https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-outboundtype.md
Title: Customize user-defined routes (UDR) in Azure Kubernetes Service (AKS)
-description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS)
+ Title: Customize cluster egress with outbound types in Azure Kubernetes Service (AKS)
+description: Learn how to configure outbound types in Azure Kubernetes Service (AKS)
Last updated 06/29/2020-++ #Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
-# Customize cluster egress with a User-Defined Route
+# Customize cluster egress with outbound types in Azure Kubernetes Service (AKS)
Egress from an AKS cluster can be customized to fit specific scenarios. By default, AKS will provision a Standard SKU Load Balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress.
-This article walks through how to customize a cluster's egress route to support custom network scenarios, such as those which disallows public IPs and requires the cluster to sit behind a network virtual appliance (NVA).
-
-## Prerequisites
-* Azure CLI version 2.0.81 or greater
-* API version of `2020-01-01` or greater
-
+This article covers the various types of outbound connectivity that are available in AKS Clusters.
## Limitations
-* OutboundType can only be defined at cluster create time and can't be updated afterwards.
+* Outbound type can only be defined at cluster create time and can't be updated afterwards.
+ * Reconfiguring outbound type is now supported in preview; see below.
* Setting `outboundType` requires AKS clusters with a `vm-set-type` of `VirtualMachineScaleSets` and `load-balancer-sku` of `Standard`.
-* Setting `outboundType` to a value of `UDR` requires a user-defined route with valid outbound connectivity for the cluster.
-* Setting `outboundType` to a value of `UDR` implies the ingress source IP routed to the load-balancer may **not match** the cluster's outgoing egress destination address.
## Overview of outbound types in AKS
-An AKS cluster can be customized with a unique `outboundType` of type `loadBalancer` or `userDefinedRouting`.
+An AKS cluster can be configured with three different categories of outbound type: load balancer, NAT gateway, or user-defined routing.
> [!IMPORTANT]
> Outbound type impacts only the egress traffic of your cluster. For more information, see [setting up ingress controllers](ingress-basic.md).
Below is a network topology deployed in AKS clusters by default, which use an `o
![Diagram shows ingress I P and egress I P, where the ingress I P directs traffic to a load balancer, which directs traffic to and from an internal cluster and other traffic to the egress I P, which directs traffic to the Internet, M C R, Azure required services, and the A K S Control Plane.](media/egress-outboundtype/outboundtype-lb.png)
+For more information, see [using a standard load balancer in AKS](load-balancer-standard.md).
+
+### Outbound type of `managedNatGateway` or `userAssignedNatGateway`
+
+If `managedNatGateway` or `userAssignedNatGateway` is selected for `outboundType`, AKS relies on [Azure Networking NAT gateway](/azure/virtual-network/nat-gateway/manage-nat-gateway) for cluster egress.
+
+- `managedNatGateway` is used when using managed virtual networks, and tells AKS to provision a NAT gateway and attach it to the cluster subnet.
+- `userAssignedNatGateway` is used when using bring-your-own virtual networking, and requires that a NAT gateway has been provisioned before cluster creation.
+
+NAT gateway has significantly improved handling of SNAT ports when compared to Standard Load Balancer.
+
+For more information, see [using NAT Gateway with AKS](nat-gateway.md).
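As an illustrative sketch, creating a cluster with a managed NAT gateway looks like the following; the flag names reflect current `az aks create` options, so verify them against your CLI version:

```azurecli-interactive
az aks create \
    --resource-group myResourceGroup \
    --name myNatCluster \
    --outbound-type managedNATGateway \
    --nat-gateway-managed-outbound-ip-count 2 \
    --nat-gateway-idle-timeout 4
```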
+ ### Outbound type of userDefinedRouting > [!NOTE]
If `userDefinedRouting` is set, AKS won't automatically configure egress paths.
The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
-#### Load balancer creation with userDefinedRouting
+For more information, see [configuring cluster egress via user-defined routing](egress-udr.md).
-AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes service of type 'loadBalancer' is deployed. The load balancer is configured with a public IP address for *inbound* requests and a backend pool for *inbound* requests. Inbound rules are configured by the Azure cloud provider, but **no outbound public IP address or outbound rules** are configured as a result of having an outbound type of UDR. Your UDR will still be the only source for egress traffic.
+## Updating `outboundType` after cluster creation (PREVIEW)
-Azure load balancers [don't incur a charge until a rule is placed](https://azure.microsoft.com/pricing/details/load-balancer/).
+Changing the outbound type after cluster creation will deploy or remove resources as required to put the cluster into the new egress configuration.
-## Deploy a cluster with outbound type of UDR and Azure Firewall
+Migration is only supported between `loadBalancer`, `managedNATGateway` (if using a managed virtual network), and `userAssignedNATGateway` (if using a custom virtual network).
-To illustrate the application of a cluster with outbound type using a user-defined route, a cluster can be configured on a virtual network with an Azure Firewall on its own subnet. See this example on the [restrict egress traffic with Azure firewall example](limit-egress-traffic.md#restrict-egress-traffic-using-azure-firewall).
+> [!WARNING]
+> Changing the outbound type on a cluster is disruptive to network connectivity and will result in a change of the cluster's egress IP address. If any firewall rules have been configured to restrict traffic from the cluster, they will need to be updated to match the new egress IP address.
-> [!IMPORTANT]
-> Outbound type of UDR requires there is a route for 0.0.0.0/0 and next hop destination of NVA (Network Virtual Appliance) in the route table.
-> The route table already has a default 0.0.0.0/0 to Internet, without a Public IP to SNAT just adding this route will not provide you egress. AKS will validate that you don't create a 0.0.0.0/0 route pointing to the Internet but instead to NVA or gateway, etc.
-> When using an outbound type of UDR, a load balancer public IP address for **inbound requests** is not created unless a service of type *loadbalancer* is configured. A public IP address for **outbound requests** is never created by AKS if an outbound type of UDR is set.
-## Next steps
+### Install the aks-preview Azure CLI extension
+
+`aks-preview` version 0.5.113 is required.
+
+To install the `aks-preview` extension, run the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
-See [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md).
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+### Register the 'AKS-OutBoundTypeMigrationPreview' feature flag
+
+Register the `AKS-OutBoundTypeMigrationPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "AKS-OutBoundTypeMigrationPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Update a cluster to use a new outbound type
+
+Run the following command to change a cluster's outbound configuration:
+
+```azurecli-interactive
+az aks update -g <resourceGroup> -n <clusterName> --outbound-type <loadBalancer|managedNATGateway|userAssignedNATGateway>
+```
+
+## Next steps
-See [how to create, change, or delete a route table](../virtual-network/manage-route-table.md).
+- [Configure standard load balancing in an AKS cluster](load-balancer-standard.md)
+- [Configure NAT gateway in an AKS cluster](nat-gateway.md)
+- [Configure user-defined routing in an AKS cluster](egress-udr.md)
+- [NAT gateway documentation](/azure/aks/nat-gateway)
+- [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md).
+- [Manage route tables](../virtual-network/manage-route-table.md).
<!-- LINKS - internal --> [az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
aks Egress Udr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/egress-udr.md
+
+ Title: Customize user-defined routes (UDR) in Azure Kubernetes Service (AKS)
+description: Learn how to define a custom egress route in Azure Kubernetes Service (AKS)
++ Last updated : 06/29/2020+++
+#Customer intent: As a cluster operator, I want to define my own egress paths with user-defined routes. Since I define this up front I do not want AKS provided load balancer configurations.
++
+# Customize cluster egress with a user-defined routing table
+
+Egress from an AKS cluster can be customized to fit specific scenarios. By default, AKS will provision a Standard SKU Load Balancer to be set up and used for egress. However, the default setup may not meet the requirements of all scenarios if public IPs are disallowed or additional hops are required for egress.
+
+This article walks through how to customize a cluster's egress route to support custom network scenarios, such as those which disallows public IPs and requires the cluster to sit behind a network virtual appliance (NVA).
+
+## Prerequisites
+* Azure CLI version 2.0.81 or greater
+* API version of `2020-01-01` or greater
+
+## Limitations
+* Setting `outboundType` requires AKS clusters with a `vm-set-type` of `VirtualMachineScaleSets` and `load-balancer-sku` of `Standard`.
+* Setting `outboundType` to a value of `UDR` requires a user-defined route with valid outbound connectivity for the cluster.
+* Setting `outboundType` to a value of `UDR` implies the ingress source IP routed to the load-balancer may **not match** the cluster's outgoing egress destination address.
+
+## Overview
+
+> [!NOTE]
+> Using outbound type is an advanced networking scenario and requires proper network configuration.
+
+If `userDefinedRouting` is set, AKS won't automatically configure egress paths. The egress setup must be done by you.
+
+The AKS cluster must be deployed into an existing virtual network with a subnet that has been previously configured because when not using standard load balancer (SLB) architecture, you must establish explicit egress. As such, this architecture requires explicitly sending egress traffic to an appliance like a firewall, gateway, proxy or to allow the Network Address Translation (NAT) to be done by a public IP assigned to the standard load balancer or appliance.
+
+### Load balancer creation with userDefinedRouting
+
+AKS clusters with an outbound type of UDR receive a standard load balancer (SLB) only when the first Kubernetes service of type 'loadBalancer' is deployed. The load balancer is configured with a public IP address for *inbound* requests and a backend pool for *inbound* requests. Inbound rules are configured by the Azure cloud provider, but **no outbound public IP address or outbound rules** are configured as a result of having an outbound type of UDR. Your UDR will still be the only source for egress traffic.
+
+Azure load balancers [don't incur a charge until a rule is placed](https://azure.microsoft.com/pricing/details/load-balancer/).
+
+## Deploy a cluster with outbound type of UDR and Azure Firewall
+
+To illustrate the application of a cluster with outbound type using a user-defined route, a cluster can be configured on a virtual network with an Azure Firewall on its own subnet. See this example on the [restrict egress traffic with Azure firewall example](limit-egress-traffic.md#restrict-egress-traffic-using-azure-firewall).
+
+> [!IMPORTANT]
+> Outbound type of UDR requires a route for 0.0.0.0/0 with a next hop destination of a network virtual appliance (NVA) in the route table.
+> The route table already has a default 0.0.0.0/0 route to the Internet. Without a public IP for SNAT, adding this route alone won't provide egress. AKS validates that you don't create a 0.0.0.0/0 route pointing to the Internet, but rather to an NVA, gateway, and so on.
+> When using an outbound type of UDR, a load balancer public IP address for **inbound requests** is not created unless a service of type *loadbalancer* is configured. A public IP address for **outbound requests** is never created by AKS if an outbound type of UDR is set.
+
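As an illustrative sketch, the following Azure CLI commands create a route table with a default route to an NVA; the IP address `10.0.1.4` is a placeholder for your appliance's private address:

```azurecli-interactive
# Create a route table for the cluster subnet
az network route-table create \
    --resource-group myResourceGroup \
    --name myAKSRouteTable

# Add a default route that sends all egress traffic to the NVA
az network route-table route create \
    --resource-group myResourceGroup \
    --route-table-name myAKSRouteTable \
    --name defaultRoute \
    --address-prefix 0.0.0.0/0 \
    --next-hop-type VirtualAppliance \
    --next-hop-ip-address 10.0.1.4
```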
+## Next steps
+
+See [Azure networking UDR overview](../virtual-network/virtual-networks-udr-overview.md).
+
+See [how to create, change, or delete a route table](../virtual-network/manage-route-table.md).
+
+<!-- LINKS - internal -->
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[byo-route-table]: configure-kubenet.md#bring-your-own-subnet-and-route-table-with-kubenet
aks Howto Deploy Java Liberty App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/howto-deploy-java-liberty-app.md
This article uses the Azure Marketplace offer for Open/WebSphere Liberty to acce
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] * This article requires at least version 2.31.0 of Azure CLI. If using Azure Cloud Shell, the latest version is already installed. * If running the commands in this guide locally (instead of Azure Cloud Shell):
aks Quick Kubernetes Deploy Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-bicep.md
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
### [Azure CLI](#tab/azure-cli) * This article requires version 2.20.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
aks Quick Kubernetes Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-cli.md
This quickstart assumes a basic understanding of Kubernetes concepts. For more i
To learn more about creating a Windows Server node pool, see [Create an AKS cluster that supports Windows Server containers](quick-windows-container-deploy-cli.md). - This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
aks Quick Kubernetes Deploy Rm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-kubernetes-deploy-rm-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
### [Azure CLI](#tab/azure-cli) * This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
aks Quick Windows Container Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/learn/quick-windows-container-deploy-cli.md
This article assumes a basic understanding of Kubernetes concepts. For more info
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.64 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
az aks create \
After a few minutes, the command completes and returns JSON-formatted information about the cluster. Occasionally the cluster can take longer than a few minutes to provision. Allow up to 10 minutes in these cases.
+## Add a Windows node pool
+By default, an AKS cluster is created with a node pool that can run Linux containers. Use the `az aks nodepool add` command to add an additional node pool that can run Windows Server containers alongside the Linux node pool.
+
+AKS supports Windows Server 2019 and Windows Server 2022 node pools. For Kubernetes versions "1.25.0" and higher, Windows Server 2022 is the default operating system. For earlier versions, Windows Server 2019 is the default operating system.
+
+Use the `az aks nodepool add` command to add a Windows node pool:
+
+```azurecli
+az aks nodepool add \
+ --resource-group myResourceGroup \
+ --cluster-name myAKSCluster \
+ --os-type Windows \
+ --name npwin \
+ --node-count 1
+```
+
+This command creates a new node pool named *npwin* and adds it to *myAKSCluster*, using the default subnet in the default virtual network created when you ran `az aks create`. Because no OS SKU was specified, the node pool uses the default operating system based on the cluster's Kubernetes version.
++ ## Add a Windows Server 2019 node pool
-By default, an AKS cluster is created with a node pool that can run Linux containers. Use `az aks nodepool add` command to add an additional node pool that can run Windows Server containers alongside the Linux node pool.
+When creating a Windows node pool, the default operating system is Windows Server 2019 for Kubernetes versions below "1.25.0". To use Windows Server 2019 nodes when they aren't the default, specify an OS SKU of `Windows2019`.
```azurecli-interactive az aks nodepool add \ --resource-group myResourceGroup \ --cluster-name myAKSCluster \ --os-type Windows \
+ --os-sku Windows2019 \
--name npwin \ --node-count 1 ```
-The above command creates a new node pool named *npwin* and adds it to the *myAKSCluster*. The above command also uses the default subnet in the default vnet created when running `az aks create`.
+This command creates a new Windows Server 2019 node pool named *npwin* and adds it to *myAKSCluster*, using the default subnet in the default virtual network created when you ran `az aks create`.
## Add a Windows Server 2022 node pool
-When creating a Windows node pool, the default operating system will be Windows Server 2019. To use Windows Server 2022 nodes, you will need to specify an OS SKU type of `Windows2022`.
+When creating a Windows node pool, the default operating system is Windows Server 2022 for Kubernetes versions "1.25.0" and higher. To use Windows Server 2022 nodes when they aren't the default, specify an OS SKU of `Windows2022`.
> [!NOTE] > Windows Server 2022 requires Kubernetes version "1.23.0" or higher.
-Use `az aks nodepool add` command to add a Windows Server 2022 node pool:
+Use the `az aks nodepool add` command to add a Windows Server 2022 node pool:
```azurecli az aks nodepool add \
aks Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/nat-gateway.md
To create an AKS cluster with a user-assigned NAT Gateway, use `--outbound-type
--assign-identity $IDENTITY_ID ```
-## Disable OutboundNAT for Windows
+## Disable OutboundNAT for Windows (preview)
Windows OutboundNAT can cause certain connection and communication issues with your AKS pods. Some of these issues include:
Windows OutboundNAT can cause certain connection and communication issues with y
Windows enables OutboundNAT by default. You can now manually disable OutboundNAT when creating new Windows agent pools.
+> [!NOTE]
+> OutboundNAT can only be disabled on Windows Server 2019 node pools.
+ ### Prerequisites * You need to use `aks-preview` and register the feature flag.
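+
+As a sketch, assuming the preview parameter `--disable-windows-outbound-nat` (a preview flag whose name may change) and a cluster that already uses a NAT gateway outbound type as described in this article, adding a Windows Server 2019 node pool with OutboundNAT disabled might look like:
+
+```azurecli
+# Placeholder names; requires the aks-preview extension and registered feature flag.
+az aks nodepool add \
+    --resource-group myResourceGroup \
+    --cluster-name myNatCluster \
+    --name npwin \
+    --node-count 1 \
+    --os-type Windows \
+    --os-sku Windows2019 \
+    --disable-windows-outbound-nat
+```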
aks Use Multiple Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-multiple-node-pools.md
Last updated 05/16/2022
# Create and manage multiple node pools for a cluster in Azure Kubernetes Service (AKS)
-In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) is defined when you create an AKS cluster, which creates a [system node pool][use-system-pool]. To support applications that have different compute or storage demands, you can create additional *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and tunnelfront. User node pools serve the primary purpose of hosting your application pods. However, application pods can be scheduled on system node pools if you wish to only have one pool in your AKS cluster. User node pools are where you place your application-specific pods. For example, use these additional user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage.
+In Azure Kubernetes Service (AKS), nodes of the same configuration are grouped together into *node pools*. These node pools contain the underlying VMs that run your applications. The initial number of nodes and their size (SKU) is defined when you create an AKS cluster, which creates a [system node pool][use-system-pool]. To support applications that have different compute or storage demands, you can create additional *user node pools*. System node pools serve the primary purpose of hosting critical system pods such as CoreDNS and `konnectivity`. User node pools serve the primary purpose of hosting your application-specific pods, although application pods can be scheduled on system node pools if you wish to have only one pool in your AKS cluster. For example, use additional user node pools to provide GPUs for compute-intensive applications, or access to high-performance SSD storage.
> [!NOTE] > This feature enables higher control over how to create and manage multiple node pools. As a result, separate commands are required for create/update/delete. Previously cluster operations through `az aks create` or `az aks update` used the managedCluster API and were the only options to change your control plane and a single node pool. This feature exposes a separate operation set for agent pools through the agentPool API and require use of the `az aks nodepool` command set to execute operations on an individual node pool.
The following limitations apply when you create and manage AKS clusters that sup
* You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster. * System pools must contain at least one node, and user node pools may contain zero or more nodes. * The AKS cluster must use the Standard SKU load balancer to use multiple node pools; the feature isn't supported with Basic SKU load balancers.
-* The AKS cluster must use virtual machine scale sets for the nodes.
+* The AKS cluster must use Virtual Machine Scale Sets for the nodes.
* You can't change the VM size of a node pool after you create it. * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools the length must be between 1 and 12 characters, for Windows node pools the length must be between 1 and 6 characters. * All node pools must reside in the same virtual network.
The following limitations apply when you create and manage AKS clusters that sup
> [!IMPORTANT] > If you run a single system node pool for your AKS cluster in a production environment, we recommend you use at least three nodes for the node pool.
-To get started, create an AKS cluster with a single node pool. The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *eastus* region. An AKS cluster named *myAKSCluster* is then created using the [az aks create][az-aks-create] command.
+To get started, create an AKS cluster with a single node pool. The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *eastus* region. An AKS cluster named *myAKSCluster* is then created using the [`az aks create`][az-aks-create] command.
> [!NOTE] > The *Basic* load balancer SKU is **not supported** when using multiple node pools. By default, AKS clusters are created with the *Standard* load balancer SKU from the Azure CLI and Azure portal.
It takes a few minutes to create the cluster.
> [!NOTE] > To ensure your cluster operates reliably, you should run at least 2 (two) nodes in the default node pool, as essential system services are running across this node pool.
-When the cluster is ready, use the [az aks get-credentials][az-aks-get-credentials] command to get the cluster credentials for use with `kubectl`:
+When the cluster is ready, use the [`az aks get-credentials`][az-aks-get-credentials] command to get the cluster credentials for use with `kubectl`:
```azurecli-interactive az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
## Add a node pool
-The cluster created in the previous step has a single node pool. Let's add a second node pool using the [az aks nodepool add][az-aks-nodepool-add] command. The following example creates a node pool named *mynodepool* that runs *3* nodes:
+The cluster created in the previous step has a single node pool. Let's add a second node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command. The following example creates a node pool named *mynodepool* that runs *3* nodes:
```azurecli-interactive az aks nodepool add \
az aks nodepool add \
> [!NOTE] > The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters. For Linux node pools the length must be between 1 and 12 characters, for Windows node pools the length must be between 1 and 6 characters.
-To see the status of your node pools, use the [az aks node pool list][az-aks-nodepool-list] command and specify your resource group and cluster name:
+To see the status of your node pools, use the [`az aks nodepool list`][az-aks-nodepool-list] command and specify your resource group and cluster name:
```azurecli-interactive az aks nodepool list --resource-group myResourceGroup --cluster-name myAKSCluster
A workload may require splitting a cluster's nodes into separate pools for logic
* All subnets assigned to node pools must belong to the same virtual network. * System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
-* If you expand your VNET after creating the cluster you must update your cluster (perform any managed cluster operation but node pool operations don't count) before adding a subnet outside the original cidr. AKS will error-out on the agent pool add now though we originally allowed it. The `aks-preview` Azure CLI extension (version 0.5.66+) now supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command will perform an update operation without making any changes, which can recover a cluster stuck in a failed state.
+* If you expand your VNet after creating the cluster, you must update the cluster (perform any managed cluster operation; node pool operations don't count) before adding a subnet outside the original CIDR block. AKS now errors out on the agent pool add, although it was originally allowed. The `aks-preview` Azure CLI extension (version 0.5.66+) supports running `az aks update -g <resourceGroup> -n <clusterName>` without any optional arguments. This command performs an update operation without making any changes, which can recover a cluster stuck in a failed state (see the sketch after this list).
* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets. * Windows nodes will SNAT traffic to the new subnets until the node pool is reimaged. * Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet].
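As noted in the list above, a no-op update can reconcile the cluster after expanding the virtual network; a minimal sketch:

```azurecli-interactive
# Update operation with no changes; can recover a cluster stuck in a failed state.
az aks update -g myResourceGroup -n myAKSCluster
```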
The commands in this section explain how to upgrade a single specific node pool.
> [!NOTE] > The node pool OS image version is tied to the Kubernetes version of the cluster. You will only get OS image upgrades, following a cluster upgrade.
-Since there are two node pools in this example, we must use [az aks nodepool upgrade][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades use [az aks get-upgrades][az-aks-get-upgrades]
+Since there are two node pools in this example, we must use [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] to upgrade a node pool. To see the available upgrades, use [`az aks get-upgrades`][az-aks-get-upgrades].
```azurecli-interactive az aks get-upgrades --resource-group myResourceGroup --name myAKSCluster ```
-Let's upgrade the *mynodepool*. Use the [az aks nodepool upgrade][az-aks-nodepool-upgrade] command to upgrade the node pool, as shown in the following example:
+Let's upgrade *mynodepool*. Use the [`az aks nodepool upgrade`][az-aks-nodepool-upgrade] command to upgrade the node pool, as shown in the following example:
```azurecli-interactive az aks nodepool upgrade \
az aks nodepool upgrade \
--no-wait ```
-List the status of your node pools again using the [az aks node pool list][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Upgrading* state to *KUBERNETES_VERSION*:
+List the status of your node pools again using the [`az aks nodepool list`][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Upgrading* state to *KUBERNETES_VERSION*:
```azurecli az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
As your application workload demands change, you may need to scale the number of
<!--If you scale down, nodes are carefully [cordoned and drained][kubernetes-drain] to minimize disruption to running applications.-->
-To scale the number of nodes in a node pool, use the [az aks node pool scale][az-aks-nodepool-scale] command. The following example scales the number of nodes in *mynodepool* to *5*:
+To scale the number of nodes in a node pool, use the [`az aks nodepool scale`][az-aks-nodepool-scale] command. The following example scales the number of nodes in *mynodepool* to *5*:
```azurecli-interactive az aks nodepool scale \
az aks nodepool scale \
--no-wait ```
-List the status of your node pools again using the [az aks node pool list][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Scaling* state with a new count of *5* nodes:
+List the status of your node pools again using the [`az aks nodepool list`][az-aks-nodepool-list] command. The following example shows that *mynodepool* is in the *Scaling* state with a new count of *5* nodes:
```azurecli az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
AKS offers a separate feature to automatically scale node pools with a feature c
## Delete a node pool
-If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [az aks node pool delete][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynodepool* created in the previous steps:
+If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [`az aks nodepool delete`][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynodepool* created in the previous steps:
> [!CAUTION] > When you delete a node pool, AKS doesn't perform cordon and drain, and there are no recovery options for data loss that may occur when you delete a node pool. If pods can't be scheduled on other node pools, those applications become unavailable. Make sure you don't delete a node pool when in-use applications don't have data backups or the ability to run on other node pools in your cluster. To minimize the disruption of rescheduling pods currently running on the node pool you are going to delete, perform a cordon and drain on all nodes in the node pool before deleting. For more information, see [cordon and drain node pools][cordon-and-drain].
If you no longer need a pool, you can delete it and remove the underlying VM nod
az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name mynodepool --no-wait ```
-The following example output from the [az aks node pool list][az-aks-nodepool-list] command shows that *mynodepool* is in the *Deleting* state:
+The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows that *mynodepool* is in the *Deleting* state:
```azurecli az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
As your application workloads demand, you may associate node pools to capacity
For more information on the capacity reservation groups, please refer to [Capacity Reservation Groups][capacity-reservation-groups].
-Associating a node pool with an existing capacity reservation group can be done using [az aks nodepool add][az-aks-nodepool-add] command and specifying a capacity reservation group with the --capacityReservationGroup flag" The capacity reservation group should already exist, otherwise the node pool will be added to the cluster with a warning and no capacity reservation group gets associated.
+You can associate a node pool with an existing capacity reservation group by using the [`az aks nodepool add`][az-aks-nodepool-add] command and specifying a capacity reservation group with the `--capacityReservationGroup` flag. The capacity reservation group should already exist; otherwise, the node pool is added to the cluster with a warning and no capacity reservation group gets associated.
```azurecli-interactive az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG ```
-Associating a system node pool with an existing capacity reservation group can be done using [az aks create][az-aks-create] command. If the capacity reservation group specified doesn't exist, then a warning is issued and the cluster gets created without any capacity reservation group association.
+You can associate a system node pool with an existing capacity reservation group by using the [`az aks create`][az-aks-create] command. If the capacity reservation group specified doesn't exist, a warning is issued and the cluster is created without any capacity reservation group association.
```azurecli-interactive az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG
az aks delete -g MyRG --cluster-name MyMC
## Specify a VM size for a node pool
-In the previous examples to create a node pool, a default VM size was used for the nodes created in the cluster. A more common scenario is for you to create node pools with different VM sizes and capabilities. For example, you may create a node pool that contains nodes with large amounts of CPU or memory, or a node pool that provides GPU support. In the next step, you [use taints and tolerations](#setting-nodepool-taints) to tell the Kubernetes scheduler how to limit access to pods that can run on these nodes.
+In the previous examples to create a node pool, a default VM size was used for the nodes created in the cluster. A more common scenario is for you to create node pools with different VM sizes and capabilities. For example, you may create a node pool that contains nodes with large amounts of CPU or memory, or a node pool that provides GPU support. In the next step, you [use taints and tolerations](#setting-node-pool-taints) to tell the Kubernetes scheduler how to limit access to pods that can run on these nodes.
In the following example, create a GPU-based node pool that uses the *Standard_NC6* VM size. These VMs are powered by the NVIDIA Tesla K80 card. For information on available VM sizes, see [Sizes for Linux virtual machines in Azure][vm-sizes].
-Create a node pool using the [az aks node pool add][az-aks-nodepool-add] command again. This time, specify the name *gpunodepool*, and use the `--node-vm-size` parameter to specify the *Standard_NC6* size:
+Create a node pool using the [`az aks nodepool add`][az-aks-nodepool-add] command again. This time, specify the name *gpunodepool*, and use the `--node-vm-size` parameter to specify the *Standard_NC6* size:
```azurecli-interactive az aks nodepool add \
az aks nodepool add \
--no-wait ```
-The following example output from the [az aks node pool list][az-aks-nodepool-list] command shows that *gpunodepool* is *Creating* nodes with the specified *VmSize*:
+The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows that *gpunodepool* is *Creating* nodes with the specified *VmSize*:
```azurecli az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
When creating a node pool, you can add taints, labels, or tags to that node pool
> [!IMPORTANT] > Adding taints, labels, or tags to nodes should be done for the entire node pool using `az aks nodepool`. Applying taints, labels, or tags to individual nodes in a node pool using `kubectl` is not recommended.
-### Setting nodepool taints
+### Setting node pool taints
-To create a node pool with a taint, use [az aks nodepool add][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
+To create a node pool with a taint, use [`az aks nodepool add`][az-aks-nodepool-add]. Specify the name *taintnp* and use the `--node-taints` parameter to specify *sku=gpu:NoSchedule* for the taint.
```azurecli-interactive az aks nodepool add \
az aks nodepool add \
--no-wait ```
-The following example output from the [az aks nodepool list][az-aks-nodepool-list] command shows that *taintnp* is *Creating* nodes with the specified *nodeTaints*:
+The following example output from the [`az aks nodepool list`][az-aks-nodepool-list] command shows that *taintnp* is *Creating* nodes with the specified *nodeTaints*:
```azurecli az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
Events:
Only pods that have this toleration applied can be scheduled on nodes in *taintnp*. Any other pod would be scheduled in the *nodepool1* node pool. If you create additional node pools, you can use additional taints and tolerations to limit what pods can be scheduled on those node resources.
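For reference, a minimal pod manifest carrying a matching toleration might look like the following sketch (the pod and container names are illustrative, and the image reuses the `echoserver` image shown later in this document):

```yml
kind: Pod
apiVersion: v1
metadata:
  name: gpu-tolerant-pod
spec:
  containers:
  - name: app
    image: k8s.gcr.io/echoserver:1.10
  # Matches the sku=gpu:NoSchedule taint set on the taintnp node pool.
  tolerations:
  - key: "sku"
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
```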
-### Setting nodepool labels
+### Setting node pool labels
For more information on using labels with node pools, see [Use labels in an Azure Kubernetes Service (AKS) cluster][use-labels].
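As a brief sketch, you can apply labels when creating a node pool with the `--labels` parameter (the pool name and labels below are illustrative):

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name labelnp \
    --node-count 1 \
    --labels dept=IT costcenter=9999
```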
-### Setting nodepool Azure tags
+### Setting node pool Azure tags
For more information on using Azure tags with node pools, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
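Similarly, a sketch of applying Azure tags at node pool creation with the `--tags` parameter (values are illustrative):

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name tagnodepool \
    --node-count 1 \
    --tags dept=IT costcenter=9999
```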
Edit these values to update, add, or delete node pools as needed:
} ```
-Deploy this template using the [az deployment group create][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
+Deploy this template using the [`az deployment group create`][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
```azurecli-interactive az deployment group create \
az deployment group create \
It may take a few minutes to update your AKS cluster depending on the node pool settings and operations you define in your Resource Manager template.
-## Assign a public IP per node for your node pools
-
-AKS nodes don't require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
-
-First, create a new resource group.
-
-```azurecli-interactive
-az group create --name myResourceGroup2 --location eastus
-```
-
-Create a new AKS cluster and attach a public IP for your nodes. Each of the nodes in the node pool receives a unique public IP. You can verify this by looking at the Virtual Machine Scale Set instances.
-
-```azurecli-interactive
-az aks create -g MyResourceGroup2 -n MyManagedCluster -l eastus --enable-node-public-ip
-```
-
-For existing AKS clusters, you can also add a new node pool, and attach a public IP for your nodes.
-
-```azurecli-interactive
-az aks nodepool add -g MyResourceGroup2 --cluster-name MyManagedCluster -n nodepool2 --enable-node-public-ip
-```
-
-### Use a public IP prefix
-
-There are a number of [benefits to using a public IP prefix][public-ip-prefix-benefits]. AKS supports using addresses from an existing public IP prefix for your nodes by passing the resource ID with the flag `node-public-ip-prefix` when creating a new cluster or adding a node pool.
-
-First, create a public IP prefix using [az network public-ip prefix create][az-public-ip-prefix-create]:
-
-```azurecli-interactive
-az network public-ip prefix create --length 28 --location eastus --name MyPublicIPPrefix --resource-group MyResourceGroup3
-```
-
-View the output, and take note of the `id` for the prefix:
-
-```output
-{
- ...
- "id": "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup3/providers/Microsoft.Network/publicIPPrefixes/MyPublicIPPrefix",
- ...
-}
-```
-
-Finally, when creating a new cluster or adding a new node pool, use the flag `node-public-ip-prefix` and pass in the prefix's resource ID:
-
-```azurecli-interactive
-az aks create -g MyResourceGroup3 -n MyManagedCluster -l eastus --enable-node-public-ip --node-public-ip-prefix /subscriptions/<subscription-id>/resourcegroups/MyResourceGroup3/providers/Microsoft.Network/publicIPPrefixes/MyPublicIPPrefix
-```
-
-### Locate public IPs for nodes
-
-You can locate the public IPs for your nodes in various ways:
-
-* Use the Azure CLI command [az vmss list-instance-public-ips][az-list-ips].
-* Use [PowerShell or Bash commands][vmss-commands].
-* You can also view the public IPs in the Azure portal by viewing the instances in the Virtual Machine Scale Set.
-
-> [!Important]
-> The [node resource group][node-resource-group] contains the nodes and their public IPs. Use the node resource group when executing commands to find the public IPs for your nodes.
-
-```azurecli
-az vmss list-instance-public-ips -g MC_MyResourceGroup2_MyManagedCluster_eastus -n YourVirtualMachineScaleSetName
-```
- ## Clean up resources In this article, you created an AKS cluster that includes GPU-based nodes. To reduce unnecessary cost, you may want to delete the *gpunodepool*, or the whole AKS cluster.
-To delete the GPU-based node pool, use the [az aks nodepool delete][az-aks-nodepool-delete] command as shown in following example:
+To delete the GPU-based node pool, use the [`az aks nodepool delete`][az-aks-nodepool-delete] command, as shown in the following example:
```azurecli-interactive az aks nodepool delete -g myResourceGroup --cluster-name myAKSCluster --name gpunodepool ```
-To delete the cluster itself, use the [az group delete][az-group-delete] command to delete the AKS resource group:
+To delete the cluster itself, use the [`az group delete`][az-group-delete] command to delete the AKS resource group:
```azurecli-interactive az group delete --name myResourceGroup --yes --no-wait
az group delete --name myResourceGroup2 --yes --no-wait
* Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your AKS applications.
+* Use [instance-level public IP addresses](use-node-public-ips.md) to enable your nodes to serve network traffic directly.
+ <!-- EXTERNAL LINKS --> [kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
aks Use Node Public Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-node-public-ips.md
+
+ Title: Use instance-level public IPs in Azure Kubernetes Service (AKS)
+description: Learn how to manage instance-level public IPs in Azure Kubernetes Service (AKS)
++ Last updated : 1/12/2023++++
+# Use instance-level public IPs in Azure Kubernetes Service (AKS)
+
+AKS nodes don't require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
+
+First, create a new resource group.
+
+```azurecli-interactive
+az group create --name myResourceGroup2 --location eastus
+```
+
+Create a new AKS cluster and attach a public IP for your nodes. Each of the nodes in the node pool receives a unique public IP. You can verify this by looking at the Virtual Machine Scale Set instances.
+
+```azurecli-interactive
+az aks create -g MyResourceGroup2 -n MyManagedCluster -l eastus --enable-node-public-ip
+```
+
+For existing AKS clusters, you can also add a new node pool, and attach a public IP for your nodes.
+
+```azurecli-interactive
+az aks nodepool add -g MyResourceGroup2 --cluster-name MyManagedCluster -n nodepool2 --enable-node-public-ip
+```
+
+## Use a public IP prefix
+
+There are a number of [benefits to using a public IP prefix][public-ip-prefix-benefits]. AKS supports using addresses from an existing public IP prefix for your nodes by passing the resource ID with the flag `node-public-ip-prefix` when creating a new cluster or adding a node pool.
+
+First, create a public IP prefix using [az network public-ip prefix create][az-public-ip-prefix-create]:
+
+```azurecli-interactive
+az network public-ip prefix create --length 28 --location eastus --name MyPublicIPPrefix --resource-group MyResourceGroup3
+```
+
+View the output, and take note of the `id` for the prefix:
+
+```output
+{
+ ...
+ "id": "/subscriptions/<subscription-id>/resourceGroups/myResourceGroup3/providers/Microsoft.Network/publicIPPrefixes/MyPublicIPPrefix",
+ ...
+}
+```
+
+Finally, when creating a new cluster or adding a new node pool, use the flag `node-public-ip-prefix` and pass in the prefix's resource ID:
+
+```azurecli-interactive
+az aks create -g MyResourceGroup3 -n MyManagedCluster -l eastus --enable-node-public-ip --node-public-ip-prefix /subscriptions/<subscription-id>/resourcegroups/MyResourceGroup3/providers/Microsoft.Network/publicIPPrefixes/MyPublicIPPrefix
+```
+
+## Locate public IPs for nodes
+
+You can locate the public IPs for your nodes in various ways:
+
+* Use the Azure CLI command [`az vmss list-instance-public-ips`][az-list-ips].
+* Use [PowerShell or Bash commands][vmss-commands].
+* You can also view the public IPs in the Azure portal by viewing the instances in the Virtual Machine Scale Set.
+
+> [!Important]
+> The [node resource group][node-resource-group] contains the nodes and their public IPs. Use the node resource group when executing commands to find the public IPs for your nodes.
+
+```azurecli
+az vmss list-instance-public-ips -g MC_MyResourceGroup2_MyManagedCluster_eastus -n YourVirtualMachineScaleSetName
+```
+
+## Use public IP tags on node public IPs (PREVIEW)
+
+You can apply public IP tags to node public IPs to take advantage of the [Azure routing preference](/azure/virtual-network/ip-services/routing-preference-overview.md) feature.
++
+### Requirements
+
+* AKS version 1.24 or greater is required.
+* Version 0.5.115 of the aks-preview extension is required.
+
+### Install the aks-preview Azure CLI extension
+
+To install the aks-preview extension, run the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+### Register the 'NodePublicIPTagsPreview' feature flag
+
+Register the `NodePublicIPTagsPreview` feature flag by using the [`az feature register`][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "NodePublicIPTagsPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [`az feature show`][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "NodePublicIPTagsPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [`az provider register`][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Create a new cluster using routing preference internet
+
+```azurecli-interactive
+az aks create -n <clusterName> -l <location> -g <resourceGroup> \
+ --enable-node-public-ip \
+ --node-public-ip-tags RoutingPreference=Internet
+```
+
+### Add a node pool with routing preference internet
+
+```azurecli-interactive
+az aks nodepool add --cluster-name <clusterName> -n <nodepoolName> -l <location> -g <resourceGroup> \
+ --enable-node-public-ip \
+ --node-public-ip-tags RoutingPreference=Internet
+```
+
+## Allow host port connections and add node pools to application security groups (PREVIEW)
+
+AKS nodes that use node public IPs and host services on their host address need an NSG rule added to allow the traffic. Adding the desired ports in the node pool configuration creates the appropriate allow rules in the cluster network security group.
+
+If a network security group is in place on the subnet of a cluster that uses a bring-your-own virtual network, an allow rule must be added to that network security group. You can limit this to the nodes in a given node pool by adding the node pool to an [application security group](/azure/virtual-network/network-security-groups-overview#application-security-groups) (ASG). If allowed host ports are specified, a managed ASG is created by default in the managed resource group. Nodes can also be added to one or more custom ASGs by specifying the resource ID of the ASG(s) in the node pool parameters.
+
+### Host port specification format
+
+When specifying the list of ports to allow, use a comma-separated list with entries in the format of `port/protocol` or `startPort-endPort/protocol`.
+
+Examples:
+
+- 80/tcp
+- 80/tcp,443/tcp
+- 53/udp,80/tcp
+- 50000-60000/tcp
++
+### Requirements
+
+* AKS version 1.24 or greater is required.
+* Version 0.5.110 of the aks-preview extension is required.
+
+### Install the aks-preview Azure CLI extension
+
+To install the aks-preview extension, run the following command:
+
+```azurecli
+az extension add --name aks-preview
+```
+
+Run the following command to update to the latest version of the extension released:
+
+```azurecli
+az extension update --name aks-preview
+```
+
+### Register the 'NodePublicIPNSGControlPreview' feature flag
+
+Register the `NodePublicIPNSGControlPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "NodePublicIPNSGControlPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "NodePublicIPNSGControlPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Create a new cluster with allowed ports and application security groups
+
+```azurecli-interactive
+az aks create \
+ --resource-group <resourceGroup> \
+ --name <clusterName> \
+ --nodepool-name <nodepoolName> \
+    --nodepool-allowed-host-ports 80/tcp,443/tcp,53/udp,40000-60000/tcp,40000-50000/udp \
+ --nodepool-asg-ids "<asgId>,<asgId>"
+```
+
+### Add a new node pool with allowed ports and application security groups
+
+```azurecli-interactive
+az aks nodepool add \
+ --resource-group <resourceGroup> \
+ --cluster-name <clusterName> \
+ --name <nodepoolName> \
+    --nodepool-allowed-host-ports 80/tcp,443/tcp,53/udp,40000-60000/tcp,40000-50000/udp \
+ --nodepool-asg-ids "<asgId>,<asgId>"
+```
+
+### Update the allowed ports and application security groups for a node pool
+
+```azurecli-interactive
+az aks nodepool update \
+ --resource-group <resourceGroup> \
+ --cluster-name <clusterName> \
+ --name <nodepoolName> \
+    --nodepool-allowed-host-ports 80/tcp,443/tcp,53/udp,40000-60000/tcp,40000-50000/udp \
+ --nodepool-asg-ids "<asgId>,<asgId>"
+```
+
+## Automatically assign host ports for pod workloads (PREVIEW)
+
+When public IPs are configured on nodes, pods can use host ports to directly receive traffic without a load balancer service. This is especially useful in scenarios like gaming, where the ephemeral nature of the node IP and port isn't a problem because a matchmaker service at a well-known hostname can provide the correct host and port to use at connection time. However, because only one process on a host can listen on a given port, applications that use host ports can run into scheduling problems. To avoid this issue, AKS can dynamically assign an available host port at scheduling time, preventing conflicts.
+
+> [!WARNING]
+> Pod host port traffic will be blocked by the default NSG rules in place on the cluster. This feature should be combined with allowing host ports on the node pool to allow traffic to flow.
++
+### Requirements
+
+* AKS version 1.24 or greater is required.
+
+### Register the 'PodHostPortAutoAssignPreview' feature flag
+
+Register the `PodHostPortAutoAssignPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:
+
+```azurecli-interactive
+az feature register --namespace "Microsoft.ContainerService" --name "PodHostPortAutoAssignPreview"
+```
+
+It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature show][az-feature-show] command:
+
+```azurecli-interactive
+az feature show --namespace "Microsoft.ContainerService" --name "PodHostPortAutoAssignPreview"
+```
+
+When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:
+
+```azurecli-interactive
+az provider register --namespace Microsoft.ContainerService
+```
+
+### Automatically assign a host port to a pod
+
+To trigger host port auto assignment, deploy a workload without any host ports and apply the `kubernetes.azure.com/assign-hostports-for-containerports` annotation with the list of ports that need host port assignments. Specify the annotation value as a comma-separated list of `port/protocol` entries, where the port is an individual port number defined in the pod spec and the protocol is `tcp` or `udp`.
+
+Ports will be assigned from the range `40000-59999` and will be unique across the cluster. The assigned ports will also be added to environment variables inside the pod so that the application can determine what ports were assigned.
+
+Here is an example `echoserver` deployment, showing the mapping of host ports for ports 8080 and 8443:
+
+```yml
+apiVersion: apps/v1
+kind: Deployment
+metadata:
+ name: echoserver-hostport
+ labels:
+ app: echoserver-hostport
+spec:
+ replicas: 3
+ selector:
+ matchLabels:
+ app: echoserver-hostport
+ template:
+ metadata:
+ annotations:
+ kubernetes.azure.com/assign-hostports-for-containerports: 8080/tcp,8443/tcp
+ labels:
+ app: echoserver-hostport
+ spec:
+ nodeSelector:
+ kubernetes.io/os: linux
+ containers:
+ - name: echoserver-hostport
+ image: k8s.gcr.io/echoserver:1.10
+ ports:
+ - name: http
+ containerPort: 8080
+ protocol: TCP
+ - name: https
+ containerPort: 8443
+ protocol: TCP
+```
+
+When the deployment is applied, the `hostPort` entries will be in the YAML of the individual pods:
+
+```shell
+$ kubectl describe pod echoserver-hostport-75dc8d8855-4gjfc
+<cut for brevity>
+Containers:
+ echoserver-hostport:
+ Container ID: containerd://d0b75198afe0612091f412ee7cf7473f26c80660143a96b459b3e699ebaee54c
+ Image: k8s.gcr.io/echoserver:1.10
+    Image ID:      k8s.gcr.io/echoserver@sha256:cb5c1bddd1b5665e1867a7fa1b5fa843a47ee433bbb75d4293888b71def53229
+    Ports:         8080/TCP, 8443/TCP
+ Host Ports: 46645/TCP, 49482/TCP
+ State: Running
+ Started: Thu, 12 Jan 2023 18:02:50 +0000
+ Ready: True
+ Restart Count: 0
+ Environment:
+ echoserver-hostport_PORT_8443_TCP_HOSTPORT: 49482
+ echoserver-hostport_PORT_8080_TCP_HOSTPORT: 46645
+```
+
+## Next steps
+
+* Learn about [using multiple node pools in AKS](use-multiple-node-pools.md).
+
+* Learn about [using standard load balancers in AKS](load-balancer-standard.md).
+
+<!-- EXTERNAL LINKS -->
+
+[kubernetes-drain]: https://kubernetes.io/docs/tasks/administer-cluster/safely-drain-node/
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-taint]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#taint
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubernetes-labels]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
+[kubernetes-label-syntax]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set
+[capacity-reservation-groups]: /azure/virtual-machines/capacity-reservation-associate-virtual-machine-scale-set
+
+<!-- INTERNAL LINKS -->
+[arm-sku-vm1]: ../virtual-machines/dpsv5-dpdsv5-series.md
+[arm-sku-vm2]: ../virtual-machines/dplsv5-dpldsv5-series.md
+[arm-sku-vm3]: ../virtual-machines/epsv5-epdsv5-series.md
+[aks-quickstart-windows-cli]: ./learn/quick-windows-container-deploy-cli.md
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-get-upgrades]: /cli/azure/aks#az_aks_get_upgrades
+[az-aks-nodepool-add]: /cli/azure/aks/nodepool#az_aks_nodepool_add
+[az-aks-nodepool-list]: /cli/azure/aks/nodepool#az_aks_nodepool_list
+[az-aks-nodepool-update]: /cli/azure/aks/nodepool#az_aks_nodepool_update
+[az-aks-nodepool-upgrade]: /cli/azure/aks/nodepool#az_aks_nodepool_upgrade
+[az-aks-nodepool-scale]: /cli/azure/aks/nodepool#az_aks_nodepool_scale
+[az-aks-nodepool-delete]: /cli/azure/aks/nodepool#az_aks_nodepool_delete
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[az-extension-add]: /cli/azure/extension#az_extension_add
+[az-extension-update]: /cli/azure/extension#az_extension_update
+[az-feature-register]: /cli/azure/feature#az-feature-register
+[az-feature-list]: /cli/azure/feature#az-feature-list
+[az-feature-show]: /cli/azure/feature#az-feature-show
+[az-provider-register]: /cli/azure/provider#az_provider_register
+[az-group-create]: /cli/azure/group#az_group_create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[az-deployment-group-create]: /cli/azure/deployment/group#az_deployment_group_create
+[enable-fips-nodes]: enable-fips-nodes.md
+[gpu-cluster]: gpu-cluster.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[operator-best-practices-advanced-scheduler]: operator-best-practices-advanced-scheduler.md
+[quotas-skus-regions]: quotas-skus-regions.md
+[supported-versions]: supported-kubernetes-versions.md
+[tag-limitation]: ../azure-resource-manager/management/tag-resources.md
+[taints-tolerations]: operator-best-practices-advanced-scheduler.md#provide-dedicated-nodes-using-taints-and-tolerations
+[vm-sizes]: ../virtual-machines/sizes.md
+[use-system-pool]: use-system-pools.md
+[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
+[vmss-commands]: ../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md#public-ipv4-per-virtual-machine
+[az-list-ips]: /cli/azure/vmss#az_vmss_list_instance_public_ips
+[reduce-latency-ppg]: reduce-latency-ppg.md
+[public-ip-prefix-benefits]: ../virtual-network/ip-services/public-ip-address-prefix.md
+[az-public-ip-prefix-create]: /cli/azure/network/public-ip/prefix#az_network_public_ip_prefix_create
+[node-image-upgrade]: node-image-upgrade.md
+[use-tags]: use-tags.md
+[use-labels]: use-labels.md
+[cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes
+[internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
+[drain-nodes]: resize-node-pool.md#drain-the-existing-nodes
aks Use System Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-system-pools.md
In this article, you learned how to create and manage system node pools in an AK
[kubernetes-label-syntax]: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set <!-- INTERNAL LINKS -->
-[aks-taints]: use-multiple-node-pools.md#setting-nodepool-taints
+[aks-taints]: use-multiple-node-pools.md#setting-node-pool-taints
[aks-windows]: windows-container-cli.md [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [az-aks-create]: /cli/azure/aks#az-aks-create
aks Use Windows Hpc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-windows-hpc.md
spec:
- name: powershell image: mcr.microsoft.com/powershell:lts-nanoserver-1809 securityContext:
- privileged: true
windowsOptions: hostProcess: true runAsUserName: "NT AUTHORITY\\SYSTEM"
api-management Api Management Get Started Publish Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-publish-versions.md
When you create multiple versions, the Azure portal creates a *version set*, whi
You can interact directly with version sets by using the Azure CLI: To see all your version sets, run the [az apim api versionset list](/cli/azure/apim/api/versionset#az-apim-api-versionset-list) command:
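A minimal invocation might look like the following sketch (the resource group and service names are placeholders, not values from this tutorial):

```azurecli
az apim api versionset list --resource-group myResourceGroup \
    --service-name myApimService --output table
```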
api-management Api Management Get Started Revise Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-get-started-revise-api.md
In this tutorial, you learn how to:
To begin using Azure CLI: Use this procedure to create and update a release.
api-management Api Management Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-aad.md
For an overview of options to secure the developer portal, see [Authentication a
- [Import and publish](import-and-publish.md) an API in the Azure API Management instance. [!INCLUDE [premium-dev-standard.md](../../includes/api-management-availability-premium-dev-standard.md)]
api-management Api Management Howto Add Products https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-add-products.md
In this tutorial, you learn how to:
To begin using Azure CLI: To create a product, run the [az apim product create](/cli/azure/apim/product#az-apim-product-create) command:
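As a sketch (the names and values below are placeholders, not values from this tutorial):

```azurecli
az apim product create --resource-group myResourceGroup \
    --service-name myApimService --product-name "Test Product" \
    --description "A product for testing" --state published
```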
api-management Api Management Howto Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-ip-addresses.md
In the Developer, Basic, Standard, and Premium tiers of API Management, the publ
* The service subscription is [suspended](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) or [warned](https://github.com/Azure/azure-resource-manager-rpc/blob/master/v1.0/subscription-lifecycle-api-reference.md#subscription-states) (for example, for nonpayment) and then reinstated. * (Developer and Premium tiers) Azure Virtual Network is added to or removed from the service. * (Developer and Premium tiers) API Management service is switched between external and internal VNet deployment mode.
+* (Developer and Premium tiers) API Management service is moved to a different subnet.
* (Premium tier) [Availability zones](../reliability/migrate-api-mgt.md) are enabled, added, or removed. * (Premium tier) In [multi-regional deployments](api-management-howto-deploy-multi-region.md), the regional IP address changes if a region is vacated and then reinstated.
api-management Api Management Howto Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-howto-properties.md
Once the named value is created, you can edit it by selecting the name. If you c
To begin using Azure CLI: To add a named value, use the [az apim nv create](/cli/azure/apim/nv#az-apim-nv-create) command:
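As a sketch (the names and the value below are placeholders, not values from this tutorial):

```azurecli
az apim nv create --resource-group myResourceGroup \
    --service-name myApimService --named-value-id MyNamedValue \
    --display-name "MyNamedValue" --value "my value"
```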
api-management Get Started Create Service Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/get-started-create-service-instance-cli.md
This quickstart describes the steps for creating a new API Management instance u
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.11.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
api-management Graphql Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/graphql-api.md
If you want to import a GraphQL schema and set up field resolvers using REST or
- An existing API Management instance. [Create one if you haven't already](get-started-create-service-instance.md). - A GraphQL API. - Azure CLI
- [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
- Azure PowerShell
api-management How To Event Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-event-grid.md
In this article, you subscribe to Event Grid events in your API Management insta
:::image type="content" source="media/how-to-event-grid/event-grid-viewer-intro.png" alt-text="API Management events in Event Grid viewer"::: - If you don't already have an API Management service, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md) - Enable a [system-assigned managed identity](api-management-howto-use-managed-service-identity.md#create-a-system-assigned-managed-identity) in your API Management instance. - Create a [resource group](../azure-resource-manager/management/manage-resource-groups-portal.md#create-resource-groups) if you don't have one in which to deploy the sample endpoint.
api-management Import Api From Oas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-api-from-oas.md
In this article, you learn how to:
* An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md). * Azure CLI
- [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
* Azure PowerShell
api-management Import Soap Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/import-soap-api.md
In this article, you learn how to:
* An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md). * Azure CLI
- [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
* Azure PowerShell
api-management Mock Api Responses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-api-responses.md
Although not required for this example, you can configure more settings for an A
To begin using Azure CLI: To add an operation to your test API, run the [az apim api operation create](/cli/azure/apim/api/operation#az-apim-api-operation-create) command:
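As a sketch (the names and URL template below are placeholders, not values from this tutorial):

```azurecli
az apim api operation create --resource-group myResourceGroup \
    --service-name myApimService --api-id myTestApi \
    --url-template "/test" --method GET --display-name "Test call"
```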
app-service App Service Ip Restrictions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-ip-restrictions.md
Follow the procedure as outlined in the preceding section, but with the followin
Specify the **IP Address Block** in Classless Inter-Domain Routing (CIDR) notation for both the IPv4 and IPv6 addresses. To specify an address, you can use something like *1.2.3.4/32*, where the first four octets represent your IP address and */32* is the mask. The IPv4 CIDR notation for all addresses is 0.0.0.0/0. To learn more about CIDR notation, see [Classless Inter-Domain Routing](https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing).
+> [!NOTE]
+> IP-based access restriction rules only handle virtual network address ranges when your app is in an App Service Environment. If your app is in the multi-tenant service, you need to use **service endpoints** to restrict traffic to select subnets in your virtual network.
+ #### Set a service endpoint-based rule * For step 4, in the **Type** drop-down list, select **Virtual Network**.
app-service Management Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/management-addresses.md
> This article is about the App Service Environment v2 which is used with Isolated App Service plans > ## Summary
app-service Monitor Instances Health Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/monitor-instances-health-check.md
After providing your application's Health check path, you can monitor the health
## Limitations - Health check can be enabled for **Free** and **Shared** App Service Plans so you can have metrics on the site's health and setup alerts, but because **Free** and **Shared** sites can't scale out, any unhealthy instances won't be replaced. You should scale up to the **Basic** tier or higher so you can scale out to 2 or more instances and utilize the full benefit of Health check. This is recommended for production-facing applications as it will increase your app's availability and performance.-- Health check should not be enabled on Premium Functions sites. Due to the rapid scaling of Premium Functions, the health check requests can cause unnecessary fluctuations in HTTP traffic. Premium Functions have their own internal health probes that are used to inform scaling decisions.-- ## Frequently Asked Questions
app-service Quickstart Html Uiex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html-uiex.md
This quickstart shows how to deploy a basic HTML+CSS site to <abbr title="An HTTP-based service for hosting web applications, REST APIs, and mobile back-end applications.">Azure App Service</abbr>. You'll complete this quickstart in [Cloud Shell](../cloud-shell/overview.md), but you can also run these commands locally with [Azure CLI](/cli/azure/install-azure-cli). ## 1. Prepare your environment
app-service Quickstart Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-html.md
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Download the sample
app-service Quickstart Multi Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-multi-container.md
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] This article requires version 2.0.32 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
app-service Cli Backup Schedule Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-backup-schedule-restore.md
This sample script creates a web app in App Service with its related resources.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Configure Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-configure-custom-domain.md
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Configure Ssl Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-configure-ssl-certificate.md
This sample script creates an app in App Service with its related resources, the
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Connect To Documentdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-documentdb.md
This sample script creates an Azure Cosmos DB account using Azure Cosmos DB for
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Connect To Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-redis.md
This sample script creates an Azure Cache for Redis and an App Service app. It t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Connect To Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-sql.md
This sample script creates a database in Azure SQL Database and an App Service a
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Connect To Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-connect-to-storage.md
This sample script creates an Azure storage account and an App Service app. It t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Continuous Deployment Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-github.md
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Continuous Deployment Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-continuous-deployment-vsts.md
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Deploy Ftp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-ftp.md
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Deploy Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-github.md
This sample script creates an app in App Service with its related resources. It
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Deploy Local Git https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-local-git.md
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Deploy Privateendpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-privateendpoint.md
This sample script creates an app in App Service with its related resources, and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
app-service Cli Deploy Staging Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-deploy-staging-environment.md
This sample script creates an app in App Service with an additional deployment s
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Integrate App Service With Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-integrate-app-service-with-application-gateway.md
This sample script creates an Azure App Service web app, an Azure Virtual Networ
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Linux Acr Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-acr-aspnetcore.md
This sample script creates a resource group, a Linux App Service plan, and an ap
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Linux Docker Aspnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-linux-docker-aspnetcore.md
This sample script creates a resource group, a Linux App Service plan, and an ap
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-monitor.md
This sample script creates a resource group, App Service plan, and app, and conf
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Scale High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-scale-high-availability.md
This sample script creates a resource group, two App Service plans, two apps, a
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Cli Scale Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/cli-scale-manual.md
This sample script creates a resource group, an App Service plan, and an app. It
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
app-service Tutorial Auth Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-auth-aad.md
To complete this tutorial:
- <a href="https://git-scm.com/" target="_blank">Install Git</a> - <a href="https://dotnet.microsoft.com/download/dotnet-core/3.1" target="_blank">Install the latest .NET Core 3.1 SDK</a> ## Create local .NET Core app
app-service Tutorial Connect Msi Azure Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-azure-database.md
What you will learn:
Prepare your environment for the Azure CLI. ## 1. Grant database access to Azure AD user
app-service Tutorial Connect Msi Sql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-sql-database.md
To debug your app using SQL Database as the back end, make sure that you've allo
Prepare your environment for the Azure CLI. ## 1. Grant database access to Azure AD user
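As a sketch, granting the app's managed identity access typically involves T-SQL like the following, run against the target database (`<identity-name>` is a placeholder for the identity's display name):

```sql
-- Create a contained user mapped to the Azure AD identity, then grant data access.
CREATE USER [<identity-name>] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [<identity-name>];
ALTER ROLE db_datawriter ADD MEMBER [<identity-name>];
ALTER ROLE db_ddladmin ADD MEMBER [<identity-name>];
```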
app-service Tutorial Custom Container https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-custom-container.md
Completing this tutorial incurs a small charge in your Azure account for the con
This tutorial requires version 2.0.80 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed. - Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). - Install [Docker](https://docs.docker.com/get-started/#setup), which you use to build Docker images. Installing Docker might require a computer restart. After installing Docker, open a terminal window and verify that Docker is installed:
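For example, a quick check (output varies by platform and installed version):

```bash
docker --version
```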
app-service Tutorial Ruby Postgres App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-ruby-postgres-app.md
To complete this tutorial:
- [Install Ruby on Rails 5.1](https://guides.rubyonrails.org/v5.1/getting_started.html) - [Install and run PostgreSQL](https://www.postgresql.org/download/) ## Prepare local Postgres
app-service Tutorial Troubleshoot Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-troubleshoot-monitor.md
To complete this tutorial, you'll need:
- [Git](https://git-scm.com/) ## Create Azure resources
application-gateway Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/quick-create-cli.md
You can also complete this quickstart using [Azure PowerShell](quick-create-powe
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Redirect External Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-external-site-cli.md
In this article, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Redirect Http To Https Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-http-to-https-cli.md
In this article, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Redirect Internal Site Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-internal-site-cli.md
In this article, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Tutorial Ingress Controller Add On Existing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-existing.md
In this tutorial, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Create a resource group
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
In this tutorial, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Create a resource group
application-gateway Tutorial Manage Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-manage-web-traffic-cli.md
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Tutorial Multiple Sites Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-multiple-sites-cli.md
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Tutorial Ssl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ssl-cli.md
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Tutorial Url Redirect Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-redirect-cli.md
If you prefer, you can complete this tutorial using [Azure PowerShell](tutorial-
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
application-gateway Tutorial Url Route Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-url-route-cli.md
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
attestation Basic Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/basic-concepts.md
Below are some basic concepts related to Microsoft Azure Attestation.
An attestation provider belongs to the Azure resource provider named Microsoft.Attestation. The resource provider is a service endpoint that provides the Azure Attestation REST contract and is deployed by using [Azure Resource Manager](../azure-resource-manager/management/overview.md). Each attestation provider honors a specific, discoverable policy. Attestation providers are created with a default policy for each attestation type (note that the VBS enclave has no default policy). See [examples of an attestation policy](policy-examples.md) for more details on the default policy for SGX.
-### Regional shared provider
-
-Azure Attestation provides a regional shared provider in every available region. Customers can choose to use the regional shared provider for attestation, or create their own providers with custom policies. The shared providers are accessible by any Azure AD user and the policy associated with it cannot be altered.
-
-| Region | Attest Uri |
-|--|--|
-| East US | `https://sharedeus.eus.attest.azure.net` |
-| West US | `https://sharedwus.wus.attest.azure.net` |
-| UK South | `https://shareduks.uks.attest.azure.net` |
-| UK West| `https://sharedukw.ukw.attest.azure.net ` |
-| Canada East | `https://sharedcae.cae.attest.azure.net` |
-| Canada Central | `https://sharedcac.cac.attest.azure.net` |
-| North Europe | `https://sharedneu.neu.attest.azure.net` |
-| West Europe| `https://sharedweu.weu.attest.azure.net` |
-| US East 2 | `https://sharedeus2.eus2.attest.azure.net` |
-| Central US | `https://sharedcus.cus.attest.azure.net` |
-| North Central US | `https://sharedncus.ncus.attest.azure.net` |
-| South Central US | `https://sharedscus.scus.attest.azure.net` |
-| Australia East | `https://sharedeau.eau.attest.azure.net` |
-| Australia SouthEast | `https://sharedsau.sau.attest.azure.net` |
-| South East Asia | `https://sharedsasia.sasia.attest.azure.net` |
-| Japan East | `https://sharedjpe.jpe.attest.azure.net` |
-| Switzerland North | `https://sharedswn.swn.attest.azure.net` |
-| US Gov Virginia | `https://sharedugv.ugv.attest.azure.us` |
-| US Gov Arizona | `https://shareduga.uga.attest.azure.us` |
-| Central US EUAP | `https://sharedcuse.cuse.attest.azure.net` |
-| East US2 EUAP | `https://sharedeus2e.eus2e.attest.azure.net` |
- ## Attestation request An attestation request is a serialized JSON object sent by a client application to an attestation provider.
attestation Custom Tcb Baseline Enforcement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/custom-tcb-baseline-enforcement.md
# Custom TCB baseline enforcement for SGX attestation
-Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](../security/fundamentals/trusted-hardware-identity-management.md) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM lags the latest baseline offered by Intel and is expected to remain at tcbEvaluationDataNumber 10.
+Microsoft Azure Attestation is a unified solution for attesting different types of Trusted Execution Environments (TEEs) such as [Intel® Software Guard Extensions](https://www.intel.com/content/www/us/en/architecture-and-technology/software-guard-extensions.html) (SGX) enclaves. While attesting SGX enclaves, Azure Attestation validates the evidence against Azure default Trusted Computing Base (TCB) baseline. The default TCB baseline is provided by an Azure service named [Trusted Hardware Identity Management](/azure/security/fundamentals/trusted-hardware-identity-management) (THIM) and includes collateral fetched from Intel like certificate revocation lists (CRLs), Intel certificates, Trusted Computing Base (TCB) information and Quoting Enclave identity (QEID). The default TCB baseline from THIM might lag the latest baseline offered by Intel. This is to prevent any attestation failure scenarios for ACC customers who require more time for patching platform software (PSW) updates.
-The custom TCB baseline enforcement feature in Azure Attestation will enable you to perform SGX attestation against a desired TCB baseline, as opposed to the Azure default TCB baseline which is applied across [Azure Confidential Computing](../confidential-computing/index.yml) (ACC) fleet today.
+The custom TCB baseline enforcement feature in Azure Attestation empowers you to perform SGX attestation against a desired TCB baseline. We always recommend that [Azure Confidential Computing](/azure/confidential-computing/overview) (ACC) SGX customers install the latest PSW version supported by Intel and configure their SGX attestation policy with the latest TCB baseline supported by Azure.
## Why use the custom TCB baseline enforcement feature? We recommend that Azure Attestation users use the custom TCB baseline enforcement feature for performing SGX attestation. The feature will be helpful in the following scenarios:
-**To perform SGX attestation against newer TCB offered by Intel** – Security conscious customers can perform timely roll out of platform software (PSW) updates as recommended by Intel and use the custom baseline enforcement feature to perform their SGX attestation against the newer TCB versions supported by Intel
+**To perform SGX attestation against a newer TCB offered by Intel** – Customers can perform timely roll out of platform software (PSW) updates as recommended by Intel and use the custom baseline enforcement feature to perform their SGX attestation against the newer TCB versions supported by Intel
**To perform platform software (PSW) updates at your own cadence** – Customers who prefer to update PSW at their own cadence can use the custom baseline enforcement feature to perform SGX attestation against the older TCB baseline until the PSW updates are rolled out
-## Default TCB baseline used by Azure Attestation when no custom TCB baseline is configured by users
+## Default TCB baseline currently referenced by Azure Attestation when no custom TCB baseline is configured by users
``` TCB identifier: "azuredefault"
c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
``` ## Key considerations:-- It is always recommended to install the latest PSW version supported by Intel and configure attestation policy with the latest TCB identifier available in Azure - If the PSW version of ACC node is lower than the minimum PSW version of the TCB baseline configured in SGX attestation policy, attestation scenarios will fail - If the PSW version of ACC node is greater than or equal to the minimum PSW version of the TCB baseline configured in SGX attestation policy, attestation scenarios will pass - For customers who do not configure a custom TCB baseline in attestation policy, attestation will be performed against the Azure default TCB baseline-- For customers using an attestation policy without configurationrules section, attestation will be performed against the Azure default TCB baseline
+- For customers using an attestation policy without a configurationrules section, attestation will be performed against the Azure default TCB baseline
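As an illustration, a custom TCB baseline is pinned through the policy's configurationrules section. The following is a sketch only; the TCB identifier value "10" is an example, and you should confirm the identifiers currently supported by Azure:

```
version= 1.2;
configurationrules{
    => issueproperty(type="x-ms-sgx-tcbidentifier", value="10");
};
authorizationrules{
    => permit();
};
issuancerules{
    c:[type=="x-ms-attestation-type"] => issue(type="tee", value=c.value);
};
```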
attestation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/overview.md
Azure Attestation is the preferred choice for attesting TEEs as it offers the fo
- Unified framework for attesting multiple environments such as TPMs, SGX enclaves and VBS enclaves - Allows creation of custom attestation providers and configuration of policies to restrict token generation-- Offers [regional shared providers](basic-concepts.md#regional-shared-provider) which can attest with no configuration from users - Protects its data while in use with implementation in an SGX enclave - Highly available service
azure-app-configuration Howto Backup Config Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-backup-config-store.md
In this tutorial, you'll create a secondary store in the `centralus` region and
- [.NET Core SDK](https://dotnet.microsoft.com/download). - This tutorial requires version 2.3.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-app-configuration Howto Leverage Json Content Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-leverage-json-content-type.md
In this tutorial, you'll learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.10.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-app-configuration Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-create-service.md
This sample script creates a new instance of Azure App Configuration in a new re
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-app-configuration Cli Delete Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-delete-service.md
This sample script deletes an instance of Azure App Configuration.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-app-configuration Cli Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-export.md
This sample script exports key-values from an Azure App Configuration store.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-app-configuration Cli Import https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-import.md
This sample script imports key-value settings to an Azure App Configuration stor
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-app-configuration Cli Work With Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/scripts/cli-work-with-keys.md
This sample script shows how to:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Sample script
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
This article describes how to create the Azure Arc data controller in direct con
Before you begin, verify that you have completed the prerequisites in [Deploy data controller - direct connect mode - prerequisites](create-data-controller-direct-prerequisites.md). ## Deploy Arc data controller
azure-arc Monitor Grafana Kibana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/monitor-grafana-kibana.md
Kibana and Grafana web dashboards are provided to bring insight and clarity to the Kubernetes namespaces being used by Azure Arc-enabled data services. To access the Kibana and Grafana web dashboards and view service endpoints, see the [Azure Data Studio dashboards](./azure-data-studio-dashboards.md) documentation. ## Monitor Azure SQL managed instances on Azure Arc
azure-arc Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/release-notes.md
This article highlights capabilities, features, and enhancements recently released or improved for Azure Arc-enabled data services.
+## January 13, 2023
+
+### Image tag
+
+`v1.15.0_2023-01-10`
+
+For complete release version information, see [Version log](version-log.md#january-13-2023).
+
+New for this release:
+
+- Arc data
+ - Kafka separate mode - Description of this change and all customer and developer impacts are enumerated in the linked feature.
+
+- Arc-SQL MI
+ - Time series functions are available.
+ ## December 13, 2022 ### Image tag
azure-arc Version Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/version-log.md
This article identifies the component versions with each release of Azure Arc-enabled data services.
+## January 13, 2023
+
+|Component|Value|
+|--|--|
+|Container images tag |`v1.15.0_2023-01-10`|
+|CRD names and version|`datacontrollers.arcdata.microsoft.com`: v1beta1, v1 through v6<br/>`exporttasks.tasks.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`kafkas.arcdata.microsoft.com`: v1beta1, v1beta2<br/>`monitors.arcdata.microsoft.com`: v1beta1, v1, v2<br/>`sqlmanagedinstances.sql.arcdata.microsoft.com`: v1beta1, v1 through v7<br/>`postgresqls.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3<br/>`sqlmanagedinstancerestoretasks.tasks.sql.arcdata.microsoft.com`: v1beta1, v1<br/>`failovergroups.sql.arcdata.microsoft.com`: v1beta1, v1beta2, v1 through v2<br/>`activedirectoryconnectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1<br/>`sqlmanagedinstancereprovisionreplicatask.tasks.sql.arcdata.microsoft.com`: v1beta1<br/>`telemetrycollectors.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3 *used to be otelcollectors*<br/>`telemetryrouters.arcdata.microsoft.com`: v1beta1, v1beta2, v1beta3, v1beta4<br/>`sqlmanagedinstancemonitoringprofiles.arcdata.microsoft.com`: v1beta1, v1beta2<br/>|
+|Azure Resource Manager (ARM) API version|2022-06-15-preview|
+|`arcdata` Azure CLI extension version|1.4.9 ([Download](https://aka.ms/az-cli-arcdata-ext))|
+|Arc-enabled Kubernetes helm chart extension version|1.14.0|
+|Azure Arc Extension for Azure Data Studio<br/>`arc`<br/>`azcli`|*No Changes*<br/>1.7.0 ([Download](https://aka.ms/ads-arcdata-ext))</br>1.7.0 ([Download](https://aka.ms/ads-azcli-ext))|
+ ## December 13, 2022 |Component|Value|
azure-arc Manage Vm Extensions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/manage-vm-extensions-cli.md
This article shows you how to deploy, upgrade, update, and uninstall VM extensio
> [!NOTE] > Azure Arc-enabled servers does not support deploying and managing VM extensions to Azure virtual machines. For Azure VMs, see the following [VM extension overview](../../virtual-machines/extensions/overview.md) article. ## Install the Azure CLI extension
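A sketch of the installation step; `connectedmachine` is the Azure CLI extension for Azure Arc-enabled servers:

```azurecli
az extension add --name connectedmachine
```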
azure-cache-for-redis Create Manage Cache https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-cache.md
In this scenario, you learn how to create an Azure Cache for Redis. You then lea
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-cache-for-redis Create Manage Premium Cache Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/scripts/create-manage-premium-cache-cluster.md
In this scenario, you learn how to create a 6 GB Premium tier Azure Cache for Re
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Event Hub Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-event-hub-cosmos-db.md
To complete this tutorial, you must have the following installed:
- [Java Developer Kit](/azure/developer/java/fundamentals/java-support-on-azure), version 8 - [Apache Maven](https://maven.apache.org), version 3.0 or above - [Azure Functions Core Tools](https://www.npmjs.com/package/azure-functions-core-tools) version 2.6.666 or above > [!IMPORTANT] > The `JAVA_HOME` environment variable must be set to the install location of the JDK to complete this tutorial.
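For example, on Linux or macOS you might set the variable like this (the JDK path shown is a placeholder; use your own install location):

```bash
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
```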
azure-functions Functions Cli Create App Service Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-app-service-plan.md
This Azure Functions sample script creates a function app, which is a container
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Cli Create Function App Connect To Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-cosmos-db.md
This Azure Functions sample script creates a function app and connects the funct
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Cli Create Function App Connect To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-connect-to-storage-account.md
This Azure Functions sample script creates a function app and connects the funct
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Cli Create Function App Github Continuous https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-function-app-github-continuous.md
This Azure Functions sample script creates a function app using the [Consumption
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Cli Create Premium Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-premium-plan.md
This Azure Functions sample script creates a function app, which is a container
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Cli Create Serverless Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-serverless-python.md
This Azure Functions sample script creates a function app, which is a container
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Cli Create Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-create-serverless.md
This Azure Functions sample script creates a function app, which is a container
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-functions Functions Cli Mount Files Storage Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/scripts/functions-cli-mount-files-storage-linux.md
This Azure Functions sample script creates a function app using the [Consumption
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-government Compare Azure Government Global Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/compare-azure-government-global-azure.md
Application Insights (part of Azure Monitor) enables the same features in both A
**Visual Studio** - In Azure Government, you can enable monitoring on your ASP.NET, ASP.NET Core, Java, and Node.js based applications running on Azure App Service. For more information, see [Application monitoring for Azure App Service overview](../azure-monitor/app/azure-web-apps.md). In Visual Studio, go to Tools|Options|Accounts|Registered Azure Clouds|Add New Azure Cloud and select Azure US Government as the Discovery endpoint. After that, adding an account in File|Account Settings will prompt you for which cloud you want to add from.
-**SDK endpoint modifications** - In order to send data from Application Insights to an Azure Government region, you'll need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](../azure-monitor/app/create-new-resource.md#application-insights-overriding-default-endpoints).
+**SDK endpoint modifications** - In order to send data from Application Insights to an Azure Government region, you'll need to modify the default endpoint addresses that are used by the Application Insights SDKs. Each SDK requires slightly different modifications, as described in [Application Insights overriding default endpoints](../azure-monitor/app/create-new-resource.md#override-default-endpoints).
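As a sketch, with connection strings the sovereign-cloud endpoints are typically selected through an endpoint suffix rather than per-endpoint overrides; for Azure Government the suffix is `applicationinsights.us` (the instrumentation key below is a placeholder):

```json
{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=00000000-0000-0000-0000-000000000000;EndpointSuffix=applicationinsights.us"
  }
}
```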
**Firewall exceptions** - Application Insights uses several IP addresses. You might need to know these addresses if the app that you're monitoring is hosted behind a firewall. For more information, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md) from where you can download Azure Government IP addresses.
azure-monitor Agent Linux Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agent-linux-troubleshoot.md
A clean reinstall of the agent fixes most issues. This task might be the first s
| NOT_DEFINED | Because the necessary dependencies aren't installed, the auoms auditd plug-in won't be installed. Installation of auoms failed. Install package auditd. | | 2 | Invalid option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage. | | 3 | No option provided to the shell bundle. Run `sudo sh ./omsagent-*.universal*.sh --help` for usage. |
-| 4 | Invalid package type *or* invalid proxy settings. The omsagent-*rpm*.sh packages can only be installed on RPM-based systems. The omsagent-*deb*.sh packages can only be installed on Debian-based systems. We recommend that you use the universal installer from the [latest release](../vm/monitor-virtual-machine.md#agents). Also review to verify your proxy settings. |
+| 4 | Invalid package type *or* invalid proxy settings. The omsagent-*rpm*.sh packages can only be installed on RPM-based systems. The omsagent-*deb*.sh packages can only be installed on Debian-based systems. We recommend that you use the universal installer from the [latest release](agent-linux.md#agent-install-package). Also verify your proxy settings. |
| 5 | The shell bundle must be executed as root *or* there was a 403 error returned during onboarding. Run your command by using `sudo`. | | 6 | Invalid package architecture *or* there was a 200 error returned during onboarding. The omsagent-\*x64.sh packages can only be installed on 64-bit systems. The omsagent-\*x86.sh packages can only be installed on 32-bit systems. Download the correct package for your architecture from the [latest release](https://github.com/Microsoft/OMS-Agent-for-Linux/releases/latest). | | 17 | Installation of OMS package failed. Look through the command output for the root failure. |
Below the output plug-in, uncomment the following section by removing the `#` in
1. Review the section [Update proxy settings](agent-manage.md#update-proxy-settings) to verify you've properly configured the agent to communicate through a proxy server.
-1. Double-check that the endpoints outlined in the Azure Monitor [network firewall requirements](./log-analytics-agent.md#firewall-requirements) list are added to an allow list correctly. If you use Azure Automation, the necessary network configuration steps are also linked above.
+1. Double-check that the endpoints outlined in the Azure Monitor [network firewall requirements](./log-analytics-agent.md#firewall-requirements) list are added to an allowlist correctly. If you use Azure Automation, the necessary network configuration steps are also linked above.
## Issue: You receive a 403 error when trying to onboard
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
To create the data collection rule in the Azure portal:
> [!NOTE] > It can take up to 5 minutes for data to be sent to the destinations after you create the data collection rule. +
+### Sample log queries
+
+- **Count the IIS log entries by URL for the host www.contoso.com.**
+
+ ```kusto
+ W3CIISLog
+ | where csHost=="www.contoso.com"
+ | summarize count() by csUriStem
+ ```
+
+- **Review the total bytes received by each IIS machine.**
+
+ ```kusto
+ W3CIISLog
+ | summarize sum(csBytes) by Computer
+ ```
++
+## Sample alert rule
+
+- **Create an alert rule on any record with a return status of 500.**
+
+ ```kusto
+ W3CIISLog
+ | where scStatus==500
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ ```
++ ## Troubleshoot Use the following steps to troubleshoot collection of IIS logs.
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
To complete this procedure, you need:
This step will create a new custom table; a custom table is any table whose name ends in \_CL. Currently, a direct REST call to the table management endpoint is used to create a table. The script at the end of this section is the input to the REST call.
-The table created in the script has two columns TimeGenerated: datetime and RawData: string, which is the default schema for a custom text log. If you know your final schema, then you can add columns in the script before creating the table. If you do not, columns can always be added in the log analytics table UI.
+The table created in the script has two columns, `TimeGenerated` (datetime) and `RawData` (string), which is the default schema for a custom text log. If you know your final schema, you can add columns in the script before creating the table. If you don't, you can always add columns in the Log Analytics table UI.
-The easiest way to make the REST call is from an Azure Cloud PowerShell command line (CLI). To open the shell, go to the Azure Portal, press the Cloud Shell button, and select PowerShell. If this is your first-time using Azure Cloud PowerShell, you will need to walk through the one-time configuration wizard.
+The easiest way to make the REST call is from Azure Cloud Shell with PowerShell. To open the shell, go to the Azure portal, select the Cloud Shell button, and select PowerShell. If this is your first time using Azure Cloud Shell, you'll need to walk through the one-time configuration wizard.
Copy and paste the following script into PowerShell to create the table in your workspace. Make sure to replace the {subscription}, {resource group}, {workspace name}, and {table name} placeholders in the script. Make sure that there are no extra blanks at the beginning or end of the parameters.
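A sketch of what that REST call can look like with `Invoke-AzRestMethod`, assuming the default two-column schema and a placeholder table name `MyTable_CL` (the api-version shown is an assumption and may differ from the script in the article):

```powershell
# JSON payload describing the custom table's schema (default two-column schema).
$tableParams = @'
{
  "properties": {
    "schema": {
      "name": "MyTable_CL",
      "columns": [
        { "name": "TimeGenerated", "type": "datetime" },
        { "name": "RawData", "type": "string" }
      ]
    }
  }
}
'@

# PUT the table definition to the workspace's table management endpoint.
Invoke-AzRestMethod -Path "/subscriptions/{subscription}/resourceGroups/{resource group}/providers/Microsoft.OperationalInsights/workspaces/{workspace name}/tables/MyTable_CL?api-version=2021-12-01-preview" -Method PUT -Payload $tableParams
```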
To create the data collection rule in the Azure portal:
> [!NOTE] > It can take up to 5 minutes for data to be sent to the destinations after you create the data collection rule.+
+### Sample log queries
+The column names used here are for example only. The column names for your log will most likely be different.
+
+- **Count the number of events by code.**
+
+ ```kusto
+ MyApp_CL
+ | summarize count() by code
+ ```
+
+### Sample alert rule
+
+- **Create an alert rule on any error event.**
+
+ ```kusto
+ MyApp_CL
+ | where status == "Error"
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ ```
+++ ## Troubleshoot Use the following steps to troubleshoot collection of text logs.
azure-monitor Api Filtering Sampling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/api-filtering-sampling.md
ASP.NET **Core/Worker service apps**
> [!NOTE] > Adding a processor by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
-For apps written by using [ASP.NET Core](asp-net-core.md#adding-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class.
+For apps written by using [ASP.NET Core](asp-net-core.md#add-telemetry-processors) or [WorkerService](worker-service.md#add-telemetry-processors), adding a new telemetry processor is done by using the `AddApplicationInsightsTelemetryProcessor` extension method on `IServiceCollection`, as shown. This method is called in the `ConfigureServices` method of your `Startup.cs` class.
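A fuller registration sketch, where `MyTelemetryProcessor` is a hypothetical `ITelemetryProcessor` implementation and not a type from the article:

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Register a custom processor; MyTelemetryProcessor is a hypothetical example type.
    services.AddApplicationInsightsTelemetryProcessor<MyTelemetryProcessor>();
    services.AddApplicationInsightsTelemetry();
}
```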
```csharp public void ConfigureServices(IServiceCollection services)
ASP.NET **Core/Worker service apps: Load your initializer**
> [!NOTE] > Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications or if you're using the Microsoft.ApplicationInsights.WorkerService SDK.
-For apps written using [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method.
+For apps written using [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) or [WorkerService](worker-service.md#add-telemetry-initializers), adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown. Accomplish this step in the `Startup.ConfigureServices` method.
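A minimal registration sketch, where `MyCustomTelemetryInitializer` is a hypothetical `ITelemetryInitializer` implementation:

```csharp
using Microsoft.ApplicationInsights.Extensibility;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    // Add the initializer to the DI container; it runs for every telemetry item.
    services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();
}
```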
```csharp using Microsoft.ApplicationInsights.Extensibility;
azure-monitor App Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/app-map.md
An alternate method for ASP.NET Web apps is to instantiate the initializer in co
**ASP.NET Core apps: Load an initializer to TelemetryConfiguration**
-For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, to add a new `TelemetryInitializer` instance, you add it to the Dependency Injection container, as shown. You do this step in the `ConfigureServices` method of your `Startup.cs` class.
+For [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) applications, to add a new `TelemetryInitializer` instance, you add it to the Dependency Injection container, as shown. You do this step in the `ConfigureServices` method of your `Startup.cs` class.
```csharp using Microsoft.ApplicationInsights.Extensibility;
azure-monitor Asp Net Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md
Title: Azure Application Insights for ASP.NET Core applications | Microsoft Docs
+ Title: Application Insights for ASP.NET Core applications | Microsoft Docs
description: Monitor ASP.NET Core web applications for availability, performance, and usage. ms.devlang: csharp
Application Insights can collect the following telemetry from your ASP.NET Core
> * Heartbeats > * Logs
-We'll use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions from [here](./worker-service.md).
+We'll use an [MVC application](/aspnet/core/tutorials/first-mvc-app) example. If you're using the [Worker Service](/aspnet/core/fundamentals/host/hosted-services#worker-service-template), use the instructions in [Application Insights for Worker Service applications](./worker-service.md).
-> [!NOTE]
-> A preview [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. [Learn more](opentelemetry-overview.md).
+A preview [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. For more information, see [OpenTelemetry overview](opentelemetry-overview.md).
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-> [!NOTE]
-> You can also use the Microsoft.Extensions.Logging.ApplicationInsights package to capture logs. For more information, see [Application Insights logging with .NET](ilogger.md). For an example, see [Console application](ilogger.md#console-application).
+You can also use the Microsoft.Extensions.Logging.ApplicationInsights package to capture logs. For more information, see [Application Insights logging with .NET](ilogger.md). For an example, see [Console application](ilogger.md#console-application).
## Supported scenarios
The [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Micro
* **Operating system**: Windows, Linux, or Mac * **Hosting method**: In process or out of process * **Deployment method**: Framework dependent or self-contained
-* **Web server**: IIS (Internet Information Server) or Kestrel
-* **Hosting platform**: The Web Apps feature of Azure App Service, Azure VM, Docker, Azure Kubernetes Service (AKS), and so on
+* **Web server**: Internet Information Server (IIS) or Kestrel
+* **Hosting platform**: The Web Apps feature of Azure App Service, Azure Virtual Machines, Docker, and Azure Kubernetes Service (AKS)
* **.NET Core version**: All officially [supported .NET Core versions](https://dotnet.microsoft.com/download/dotnet-core) that aren't in preview * **IDE**: Visual Studio, Visual Studio Code, or command line > [!NOTE]
-> - ASP.NET Core 6.0 requires [Application Insights 2.19.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.19.0) or later
+> ASP.NET Core 6.0 requires [Application Insights 2.19.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.19.0) or later.
## Prerequisites
+You need:
+ - A functioning ASP.NET Core application. If you need to create an ASP.NET Core application, follow this [ASP.NET Core tutorial](/aspnet/core/getting-started/). - A valid Application Insights connection string. This string is required to send any telemetry to Application Insights. If you need to create a new Application Insights resource to get a connection string, see [Create an Application Insights resource](./create-new-resource.md).
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
1. Open your project in Visual Studio.
-2. Go to **Project** > **Add Application Insights Telemetry**.
+1. Go to **Project** > **Add Application Insights Telemetry**.
-3. Choose **Azure Application Insights**, then select **Next**.
+1. Select **Azure Application Insights** > **Next**.
-4. Choose your subscription and Application Insights instance (or create a new instance with **Create new**), then select **Next**.
+1. Choose your subscription and Application Insights instance. Or you can create a new instance with **Create new**. Select **Next**.
-5. Add or confirm your Application Insights connection string (this should be prepopulated based on your selection in the previous step), then select **Finish**.
+1. Add or confirm your Application Insights connection string. It should be prepopulated based on your selection in the previous step. Select **Finish**.
-6. After you add Application Insights to your project, check to confirm that you're using the latest stable release of the SDK. Go to **Project** > **Manage NuGet Packages...** > **Microsoft.ApplicationInsights.AspNetCore**. If you need to, select **Update**.
+1. After you add Application Insights to your project, check to confirm that you're using the latest stable release of the SDK. Go to **Project** > **Manage NuGet Packages** > **Microsoft.ApplicationInsights.AspNetCore**. If you need to, select **Update**.
- :::image type="content" source="./media/asp-net-core/update-nuget-package.png" alt-text="Screenshot showing where to select the Application Insights package for update.":::
+ :::image type="content" source="./media/asp-net-core/update-nuget-package.png" alt-text="Screenshot that shows where to select the Application Insights package for update.":::
## Enable Application Insights server-side telemetry (no Visual Studio)
-1. Install the [Application Insights SDK NuGet package for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore)
+1. Install the [Application Insights SDK NuGet package for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore).
We recommend that you always use the latest stable version. Find full release notes for the SDK on the [open-source GitHub repo](https://github.com/Microsoft/ApplicationInsights-dotnet/releases).
- The following code sample shows the changes to be added to your project's `.csproj` file.
+ The following code sample shows the changes to add to your project's `.csproj` file:
```xml <ItemGroup>
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
</ItemGroup> ```
-2. Add `AddApplicationInsightsTelemetry()` to your `startup.cs` or `program.cs` class (depending on your .NET Core version)
+1. Add `AddApplicationInsightsTelemetry()` to your `startup.cs` or `program.cs` class. The choice depends on your .NET Core version.
### [ASP.NET Core 6 and later](#tab/netcorenew)
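A minimal `Program.cs` sketch for this case:

```csharp
var builder = WebApplication.CreateBuilder(args);

// Registers Application Insights telemetry collection.
builder.Services.AddApplicationInsightsTelemetry();

var app = builder.Build();
app.MapGet("/", () => "Hello World!");
app.Run();
```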
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
-3. Set up the connection string
+1. Set up the connection string.
- Although you can provide a connection string as part of the `ApplicationInsightsServiceOptions` argument to AddApplicationInsightsTelemetry, we recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
+ Although you can provide a connection string as part of the `ApplicationInsightsServiceOptions` argument to `AddApplicationInsightsTelemetry`, we recommend that you specify the connection string in configuration. The following code sample shows how to specify a connection string in `appsettings.json`. Make sure `appsettings.json` is copied to the application root folder during publishing.
```json {
For Visual Studio for Mac, use the [manual guidance](#enable-application-insight
} ```
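A complete sketch of the relevant `appsettings.json` fragment (the connection string value is a placeholder):

```json
{
  "ApplicationInsights": {
    "ConnectionString": "Copy connection string from Application Insights Resource Overview"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
```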
- Alternatively, specify the connection string in the "APPLICATIONINSIGHTS_CONNECTION_STRING" environment variable or "ApplicationInsights:ConnectionString" in the JSON configuration file.
+ Alternatively, specify the connection string in the `APPLICATIONINSIGHTS_CONNECTION_STRING` environment variable or `ApplicationInsights:ConnectionString` in the JSON configuration file.
For example: * `SET ApplicationInsights:ConnectionString = <Copy connection string from Application Insights Resource Overview>`
-
* `SET APPLICATIONINSIGHTS_CONNECTION_STRING = <Copy connection string from Application Insights Resource Overview>`
-
- * Typically, `APPLICATIONINSIGHTS_CONNECTION_STRING` is used in [Azure Web Apps](./azure-web-apps.md?tabs=net), but it can also be used in all places where this SDK is supported.
+ * Typically, `APPLICATIONINSIGHTS_CONNECTION_STRING` is used in [Web Apps](./azure-web-apps.md?tabs=net). It can also be used in all places where this SDK is supported.
> [!NOTE]
- > An connection string specified in code wins over the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, which wins over other options.
+ > A connection string specified in code wins over the environment variable `APPLICATIONINSIGHTS_CONNECTION_STRING`, which wins over other options.
### User secrets and other configuration providers
-If you want to store the connection string in ASP.NET Core user secrets or retrieve it from another configuration provider, you can use the overload with a `Microsoft.Extensions.Configuration.IConfiguration` parameter. For example, `services.AddApplicationInsightsTelemetry(Configuration);`.
-In Microsoft.ApplicationInsights.AspNetCore version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) and later, calling `services.AddApplicationInsightsTelemetry()` automatically reads the connection string from `Microsoft.Extensions.Configuration.IConfiguration` of the application. There's no need to explicitly provide the `IConfiguration`.
+If you want to store the connection string in ASP.NET Core user secrets or retrieve it from another configuration provider, you can use the overload with a `Microsoft.Extensions.Configuration.IConfiguration` parameter. An example parameter is `services.AddApplicationInsightsTelemetry(Configuration);`.
-If `IConfiguration` has loaded configuration from multiple providers, then `services.AddApplicationInsightsTelemetry` prioritizes configuration from `appsettings.json`, irrespective of the order in which providers are added. Use the `services.AddApplicationInsightsTelemetry(IConfiguration)` method to read configuration from IConfiguration without this preferential treatment for `appsettings.json`.
+In `Microsoft.ApplicationInsights.AspNetCore` version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) and later, calling `services.AddApplicationInsightsTelemetry()` automatically reads the connection string from `Microsoft.Extensions.Configuration.IConfiguration` of the application. There's no need to explicitly provide `IConfiguration`.
+
+If `IConfiguration` has loaded configuration from multiple providers, then `services.AddApplicationInsightsTelemetry` prioritizes configuration from `appsettings.json`, irrespective of the order in which providers are added. Use the `services.AddApplicationInsightsTelemetry(IConfiguration)` method to read configuration from `IConfiguration` without this preferential treatment for `appsettings.json`.
## Run your application
Run your application and make requests to it. Telemetry should now flow to Appli
### Live Metrics
-[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. It might take a few minutes for telemetry to appear in the portal and analytics, but Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry like Requests, Dependencies, and Traces.
+[Live Metrics](./live-stream.md) can be used to quickly verify if Application Insights monitoring is configured correctly. It might take a few minutes for telemetry to appear in the portal and analytics, but Live Metrics shows CPU usage of the running process in near real time. It can also show other telemetry like requests, dependencies, and traces.
### ILogger logs
-The default configuration collects `ILogger` `Warning` logs and more severe logs. Review [How do I customize ILogger logs collection?](#how-do-i-customize-ilogger-logs-collection) for more information.
+The default configuration collects `ILogger` `Warning` logs and more severe logs. For more information, see [How do I customize ILogger logs collection?](#how-do-i-customize-ilogger-logs-collection).
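For example, a hypothetical controller like the following one shows which log levels flow to Application Insights under the default rules:

```csharp
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Logging;

public class ValuesController : ControllerBase
{
    private readonly ILogger<ValuesController> _logger;

    public ValuesController(ILogger<ValuesController> logger) => _logger = logger;

    [HttpGet("/values")]
    public IActionResult Get()
    {
        _logger.LogWarning("Collected by the default configuration.");
        _logger.LogInformation("Dropped unless you override the logging level.");
        return Ok();
    }
}
```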
### Dependencies
-Dependency collection is enabled by default. [This](asp-net-dependencies.md#automatically-tracked-dependencies) article explains the dependencies that are automatically collected, and also contain steps to do manual tracking.
+Dependency collection is enabled by default. [Dependency tracking in Application Insights](asp-net-dependencies.md#automatically-tracked-dependencies) explains the dependencies that are automatically collected and also contains steps to do manual tracking.
### Performance counters Support for [performance counters](./performance-counters.md) in ASP.NET Core is limited:
-* SDK versions 2.4.1 and later collect performance counters if the application is running in Azure Web Apps (Windows).
+* SDK versions 2.4.1 and later collect performance counters if the application is running in Web Apps (Windows).
* SDK versions 2.7.1 and later collect performance counters if the application is running in Windows and targets `NETSTANDARD2.0` or later.
-* For applications targeting the .NET Framework, all versions of the SDK support performance counters.
-* SDK Versions 2.8.0 and later support cpu/memory counter in Linux. No other counter is supported in Linux. The recommended way to get system counters in Linux (and other non-Windows environments) is by using [EventCounters](#eventcounter).
+* For applications that target the .NET Framework, all versions of the SDK support performance counters.
+* SDK versions 2.8.0 and later support CPU and memory counters in Linux. No other counters are supported in Linux. To get system counters in Linux and other non-Windows environments, use [EventCounters](#eventcounter), as shown in the sketch after this list.
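A minimal sketch of opting in to a specific counter (the `System.Runtime` source and `gen-0-size` counter are illustrative choices; `ConfigureTelemetryModule` is covered later in this article):

```csharp
using Microsoft.ApplicationInsights.Extensibility.EventCounterCollector;
using Microsoft.Extensions.DependencyInjection;

public void ConfigureServices(IServiceCollection services)
{
    services.AddApplicationInsightsTelemetry();

    services.ConfigureTelemetryModule<EventCounterCollectionModule>((module, options) =>
    {
        // Report the .NET runtime's gen-0 heap size as a custom metric.
        module.Counters.Add(new EventCounterCollectionRequest("System.Runtime", "gen-0-size"));
    });
}
```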
### EventCounter
The preceding steps are enough to help you start collecting server-side telemetr
@inject Microsoft.ApplicationInsights.AspNetCore.JavaScriptSnippet JavaScriptSnippet ```
-2. In `_Layout.cshtml`, insert `HtmlHelper` at the end of the `<head>` section but before any other script. If you want to report any custom JavaScript telemetry from the page, inject it after this snippet:
+1. In `_Layout.cshtml`, insert `HtmlHelper` at the end of the `<head>` section but before any other script. If you want to report any custom JavaScript telemetry from the page, inject it after this snippet:
```cshtml @Html.Raw(JavaScriptSnippet.FullScript) </head> ```
-As an alternative to using the `FullScript`, the `ScriptBody` is available starting in Application Insights SDK for ASP.NET Core version 2.14. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy:
+As an alternative to using `FullScript`, `ScriptBody` is available starting in Application Insights SDK for ASP.NET Core version 2.14. Use `ScriptBody` if you need to control the `<script>` tag to set a Content Security Policy:
```cshtml <script> // apply custom changes to this script tag.
The `.cshtml` file names referenced earlier are from a default MVC application t
If your project doesn't include `_Layout.cshtml`, you can still add [client-side monitoring](./website-monitoring.md) by adding the JavaScript snippet to an equivalent file that controls the `<head>` of all pages within your app. Alternatively, you can add the snippet to multiple pages, but we don't recommend it. > [!NOTE]
-> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you are required to remove auto-injection as described above and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk).
+> JavaScript injection provides a default configuration experience. If you require [configuration](./javascript.md#configuration) beyond setting the connection string, you're required to remove auto-injection as described and manually add the [JavaScript SDK](./javascript.md#add-the-javascript-sdk).
## Configure the Application Insights SDK
You can customize the Application Insights SDK for ASP.NET Core to change the de
> [!NOTE] > In ASP.NET Core applications, changing configuration by modifying `TelemetryConfiguration.Active` isn't supported.
-### Using ApplicationInsightsServiceOptions
+### Use ApplicationInsightsServiceOptions
You can modify a few common settings by passing `ApplicationInsightsServiceOptions` to `AddApplicationInsightsTelemetry`, as in this example:
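A minimal sketch of the pattern, assuming ASP.NET Core 6 hosting (the two settings shown are illustrative):

```csharp
using Microsoft.ApplicationInsights.AspNetCore.Extensions;

var builder = WebApplication.CreateBuilder(args);

var aiOptions = new ApplicationInsightsServiceOptions
{
    // Both settings default to true; they're disabled here as an example.
    EnableAdaptiveSampling = false,
    EnablePerformanceCounterCollectionModule = false
};
builder.Services.AddApplicationInsightsTelemetry(aiOptions);

var app = builder.Build();
app.Run();
```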
This table has the full list of `ApplicationInsightsServiceOptions` settings:
|Setting | Description | Default ||-|-
-|EnablePerformanceCounterCollectionModule | Enable/Disable `PerformanceCounterCollectionModule` | true
-|EnableRequestTrackingTelemetryModule | Enable/Disable `RequestTrackingTelemetryModule` | true
-|EnableEventCounterCollectionModule | Enable/Disable `EventCounterCollectionModule` | true
-|EnableDependencyTrackingTelemetryModule | Enable/Disable `DependencyTrackingTelemetryModule` | true
-|EnableAppServicesHeartbeatTelemetryModule | Enable/Disable `AppServicesHeartbeatTelemetryModule` | true
-|EnableAzureInstanceMetadataTelemetryModule | Enable/Disable `AzureInstanceMetadataTelemetryModule` | true
-|EnableQuickPulseMetricStream | Enable/Disable LiveMetrics feature | true
-|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling | true
-|EnableHeartbeat | Enable/Disable Heartbeats feature, which periodically (15-min default) sends a custom metric named 'HeartbeatState' with information about the runtime like .NET Version, Azure Environment information, if applicable, etc. | true
-|AddAutoCollectedMetricExtractor | Enable/Disable AutoCollectedMetrics extractor, which is a TelemetryProcessor that sends pre-aggregated metrics about Requests/Dependencies before sampling takes place. | true
-|RequestCollectionOptions.TrackExceptions | Enable/Disable reporting of unhandled Exception tracking by the Request collection module. | false in NETSTANDARD2.0 (because Exceptions are tracked with ApplicationInsightsLoggerProvider), true otherwise.
-|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling will cause the following settings to be ignored; `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, `EnableAppServicesHeartbeatTelemetryModule` | true
+|EnablePerformanceCounterCollectionModule | Enable/Disable `PerformanceCounterCollectionModule`. | True
+|EnableRequestTrackingTelemetryModule | Enable/Disable `RequestTrackingTelemetryModule`. | True
+|EnableEventCounterCollectionModule | Enable/Disable `EventCounterCollectionModule`. | True
+|EnableDependencyTrackingTelemetryModule | Enable/Disable `DependencyTrackingTelemetryModule`. | True
+|EnableAppServicesHeartbeatTelemetryModule | Enable/Disable `AppServicesHeartbeatTelemetryModule`. | True
+|EnableAzureInstanceMetadataTelemetryModule | Enable/Disable `AzureInstanceMetadataTelemetryModule`. | True
+|EnableQuickPulseMetricStream | Enable/Disable LiveMetrics feature. | True
+|EnableAdaptiveSampling | Enable/Disable Adaptive Sampling. | True
+|EnableHeartbeat | Enable/Disable the heartbeats feature. It periodically (15-min default) sends a custom metric named `HeartbeatState` with information about the runtime like .NET version and Azure environment information, if applicable. | True
+|AddAutoCollectedMetricExtractor | Enable/Disable the `AutoCollectedMetrics` extractor. This telemetry processor sends pre-aggregated metrics about requests/dependencies before sampling takes place. | True
+|RequestCollectionOptions.TrackExceptions | Enable/Disable reporting of unhandled exception tracking by the request collection module. | False in NETSTANDARD2.0 (because exceptions are tracked with `ApplicationInsightsLoggerProvider`). True otherwise.
+|EnableDiagnosticsTelemetryModule | Enable/Disable `DiagnosticsTelemetryModule`. Disabling will cause the following settings to be ignored: `EnableHeartbeat`, `EnableAzureInstanceMetadataTelemetryModule`, and `EnableAppServicesHeartbeatTelemetryModule`. | True
For the most current list, see the [configurable settings in `ApplicationInsightsServiceOptions`](https://github.com/microsoft/ApplicationInsights-dotnet/blob/develop/NETCORE/src/Shared/Extensions/ApplicationInsightsServiceOptions.cs). ### Configuration recommendation for Microsoft.ApplicationInsights.AspNetCore SDK 2.15.0 and later
-In Microsoft.ApplicationInsights.AspNetCore SDK version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later, we recommend configuring every setting available in `ApplicationInsightsServiceOptions`, including **ConnectionString** using the application's `IConfiguration` instance. The settings must be under the section "ApplicationInsights", as shown in the following example. The following section from appsettings.json configures the connection string and disables adaptive sampling and performance counter collection.
+In Microsoft.ApplicationInsights.AspNetCore SDK version [2.15.0](https://www.nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore/2.15.0) and later, configure every setting available in `ApplicationInsightsServiceOptions`, including `ConnectionString`. Use the application's `IConfiguration` instance. The settings must be under the section `ApplicationInsights`, as shown in the following example. This section from *appsettings.json* configures the connection string and disables adaptive sampling and performance counter collection.
```json {
The Application Insights SDK for ASP.NET Core supports both fixed-rate and adapt
For more information, see [Configure adaptive sampling for ASP.NET Core applications](./sampling.md#configuring-adaptive-sampling-for-aspnet-core-applications).
-### Adding TelemetryInitializers
+### Add TelemetryInitializers
-When you want to enrich telemetry with additional information, use [telemetry initializers](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer).
+When you want to enrich telemetry with more information, use [telemetry initializers](./api-filtering-sampling.md#addmodify-properties-itelemetryinitializer).
Add any new `TelemetryInitializer` to the `DependencyInjection` container as shown in the following code. The SDK automatically picks up any `TelemetryInitializer` that's added to the `DependencyInjection` container.
var app = builder.Build();
``` > [!NOTE]
-> `builder.Services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();` works for simple initializers. For others, the following is required: `builder.Services.AddSingleton(new MyCustomTelemetryInitializer() { fieldName = "myfieldName" });`
+> `builder.Services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();` works for simple initializers. For others, `builder.Services.AddSingleton(new MyCustomTelemetryInitializer() { fieldName = "myfieldName" });` is required.
### [ASP.NET Core 5 and earlier](#tab/netcoreold)
public void ConfigureServices(IServiceCollection services)
``` > [!NOTE]
-> `services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();` works for simple initializers. For others, the following is required: `services.AddSingleton(new MyCustomTelemetryInitializer() { fieldName = "myfieldName" });`
+> `services.AddSingleton<ITelemetryInitializer, MyCustomTelemetryInitializer>();` works for simple initializers. For others, `services.AddSingleton(new MyCustomTelemetryInitializer() { fieldName = "myfieldName" });` is required.
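The notes above reference a `MyCustomTelemetryInitializer` class. A minimal sketch of what such an initializer might look like (the `fieldName` member and the role-name stamping are illustrative):

```csharp
using Microsoft.ApplicationInsights.Channel;
using Microsoft.ApplicationInsights.Extensibility;

public class MyCustomTelemetryInitializer : ITelemetryInitializer
{
    // Public field so it can be set with an object initializer,
    // as in the registration examples above.
    public string fieldName = "my-role";

    public void Initialize(ITelemetry telemetry)
    {
        // Stamp every telemetry item before it's sent.
        telemetry.Context.Cloud.RoleName = fieldName;
    }
}
```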
-
-### Removing TelemetryInitializers
-By default, telemetry initializers are present. To remove all or specific telemetry initializers, use the following sample code *after* calling `AddApplicationInsightsTelemetry()`.
+### Remove TelemetryInitializers
+
+By default, telemetry initializers are present. To remove all or specific telemetry initializers, use the following sample code *after* you call `AddApplicationInsightsTelemetry()`.
### [ASP.NET Core 6 and later](#tab/netcorenew)
public void ConfigureServices(IServiceCollection services)
-### Adding telemetry processors
+### Add telemetry processors
-You can add custom telemetry processors to `TelemetryConfiguration` by using the extension method `AddApplicationInsightsTelemetryProcessor` on `IServiceCollection`. You use telemetry processors in [advanced filtering scenarios](./api-filtering-sampling.md#itelemetryprocessor-and-itelemetryinitializer). Use the following example.
+You can add custom telemetry processors to `TelemetryConfiguration` by using the extension method `AddApplicationInsightsTelemetryProcessor` on `IServiceCollection`. You use telemetry processors in [advanced filtering scenarios](./api-filtering-sampling.md#itelemetryprocessor-and-itelemetryinitializer). Use the following example:
### [ASP.NET Core 6 and later](#tab/netcorenew)
public void ConfigureServices(IServiceCollection services)
-### Configuring or removing default TelemetryModules
+### Configure or remove default TelemetryModules
Application Insights automatically collects telemetry about specific workloads without requiring manual tracking by the user. By default, the following automatic-collection modules are enabled. These modules are responsible for automatically collecting telemetry. You can disable or configure them to alter their default behavior.
-* `RequestTrackingTelemetryModule`: Collects RequestTelemetry from incoming web requests
-* `DependencyTrackingTelemetryModule`: Collects [DependencyTelemetry](./asp-net-dependencies.md) from outgoing http calls and sql calls
-* `PerformanceCollectorModule`: Collects Windows PerformanceCounters
-* `QuickPulseTelemetryModule`: Collects telemetry for showing in Live Metrics portal
-* `AppServicesHeartbeatTelemetryModule`: Collects heart beats (which are sent as custom metrics), about Azure App Service environment where application is hosted
-* `AzureInstanceMetadataTelemetryModule`: Collects heart beats (which are sent as custom metrics), about Azure VM environment where application is hosted
-* `EventCounterCollectionModule`: Collects [EventCounters](eventcounters.md); this module is a new feature and is available in SDK version 2.8.0 and later
+* `RequestTrackingTelemetryModule`: Collects RequestTelemetry from incoming web requests.
+* `DependencyTrackingTelemetryModule`: Collects [DependencyTelemetry](./asp-net-dependencies.md) from outgoing HTTP calls and SQL calls.
+* `PerformanceCollectorModule`: Collects Windows PerformanceCounters.
+* `QuickPulseTelemetryModule`: Collects telemetry to show in the Live Metrics portal.
+* `AppServicesHeartbeatTelemetryModule`: Collects heartbeats (which are sent as custom metrics) about the App Service environment where the application is hosted.
+* `AzureInstanceMetadataTelemetryModule`: Collects heartbeats (which are sent as custom metrics) about the Azure VM environment where the application is hosted.
+* `EventCounterCollectionModule`: Collects [EventCounters](eventcounters.md). This module is a new feature and is available in SDK version 2.8.0 and later.
-To configure any default `TelemetryModule`, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example.
+To configure any default `TelemetryModule`, use the extension method `ConfigureTelemetryModule<T>` on `IServiceCollection`, as shown in the following example:
### [ASP.NET Core 6 and later](#tab/netcorenew)
public void ConfigureServices(IServiceCollection services)
-In versions 2.12.2 and later, [`ApplicationInsightsServiceOptions`](#using-applicationinsightsserviceoptions) includes an easy option to disable any of the default modules.
+In versions 2.12.2 and later, [`ApplicationInsightsServiceOptions`](#use-applicationinsightsserviceoptions) includes an easy option to disable any of the default modules.
-### Configuring a telemetry channel
+### Configure a telemetry channel
The default [telemetry channel](./telemetry-channels.md) is `ServerTelemetryChannel`. The following example shows how to override it.
public void ConfigureServices(IServiceCollection services)
> [!NOTE]
-> See [Flushing data](api-custom-events-metrics.md#flushing-data) if you want to flush the buffer--for example, if you are using the SDK in an application that shuts down.
+> If you want to flush the buffer, see [Flushing data](api-custom-events-metrics.md#flushing-data). For example, you might need to flush the buffer if you're using the SDK in an application that shuts down.
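A sketch of that shutdown pattern, assuming a `TelemetryClient` instance is in scope (the five-second delay is a common allowance for in-flight transmissions, not a fixed requirement):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;

public static class TelemetryShutdown
{
    public static void FlushAndWait(TelemetryClient client)
    {
        client.Flush();
        // Flush() isn't guaranteed to be synchronous with ServerTelemetryChannel,
        // so give buffered items a moment to transmit before the process exits.
        Task.Delay(TimeSpan.FromSeconds(5)).Wait();
    }
}
```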
### Disable telemetry dynamically
public void Configure(IApplicationBuilder app, IHostingEnvironment env, Telemetr
-The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular auto collection module, see [Remove the telemetry module](#configuring-or-removing-default-telemetrymodules).
+The preceding code sample prevents the sending of telemetry to Application Insights. It doesn't prevent any automatic collection modules from collecting telemetry. If you want to remove a particular autocollection module, see [Remove the telemetry module](#configure-or-remove-default-telemetrymodules).
## Frequently asked questions
+This section provides answers to common questions.
+ ### Does Application Insights support ASP.NET Core 3.X? Yes. Update to [Application Insights SDK for ASP.NET Core](https://nuget.org/packages/Microsoft.ApplicationInsights.AspNetCore) version 2.8.0 or later. Earlier versions of the SDK don't support ASP.NET Core 3.X.
Also, if you're [enabling server-side telemetry based on Visual Studio](#enable-
### How can I track telemetry that's not automatically collected?
-Get an instance of `TelemetryClient` by using constructor injection, and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` or `TelemetryConfiguration` instances in an ASP.NET Core application. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with rest of the telemetry. Creating a new `TelemetryClient` instance is recommended only if it needs a configuration that's separate from the rest of the telemetry.
+Get an instance of `TelemetryClient` by using constructor injection and call the required `TrackXXX()` method on it. We don't recommend creating new `TelemetryClient` or `TelemetryConfiguration` instances in an ASP.NET Core application. A singleton instance of `TelemetryClient` is already registered in the `DependencyInjection` container, which shares `TelemetryConfiguration` with the rest of the telemetry. Create a new `TelemetryClient` instance only if it needs a configuration that's separate from the rest of the telemetry.
The following example shows how to track more telemetry from a controller.
public class HomeController : Controller
} ```
-For more information about custom data reporting in Application Insights, see [Application Insights custom metrics API reference](./api-custom-events-metrics.md). A similar approach can be used for sending custom metrics to Application Insights using the [GetMetric API](./get-metric.md).
+For more information about custom data reporting in Application Insights, see [Application Insights custom metrics API reference](./api-custom-events-metrics.md). A similar approach can be used for sending custom metrics to Application Insights by using the [GetMetric API](./get-metric.md).
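As a sketch, a hypothetical class that combines constructor injection with the GetMetric API might look like this:

```csharp
using Microsoft.ApplicationInsights;

public class WorkItemProcessor
{
    private readonly TelemetryClient _telemetryClient;

    public WorkItemProcessor(TelemetryClient telemetryClient) => _telemetryClient = telemetryClient;

    public void Process(int itemCount)
    {
        // GetMetric pre-aggregates locally and sends one aggregate per interval,
        // unlike TrackMetric, which sends every individual value.
        _telemetryClient.GetMetric("ItemsProcessed").TrackValue(itemCount);
    }
}
```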
### How do I customize ILogger logs collection?
-By default, only `Warning` logs and more severe logs are automatically captured. To change this behavior, explicitly override the logging configuration for the provider `ApplicationInsights` as shown in the following code.
-The following configuration allows Application Insights to capture all `Information` logs and more severe logs.
+By default, only `Warning` logs and more severe logs are automatically captured. To change this behavior, explicitly override the logging configuration for the provider `ApplicationInsights`, as shown in the following code. The following configuration allows Application Insights to capture all `Information` logs and more severe logs.
```json {
For more information, see [ILogger configuration](ilogger.md#logging-level).
### Some Visual Studio templates used the UseApplicationInsights() extension method on IWebHostBuilder to enable Application Insights. Is this usage still valid?
-The extension method `UseApplicationInsights()` is still supported, but it's marked as obsolete in Application Insights SDK version 2.8.0 and later. It will be removed in the next major version of the SDK. To enable Application Insights telemetry, we recommend using `AddApplicationInsightsTelemetry()` because it provides overloads to control some configuration. Also, in ASP.NET Core 3.X apps, `services.AddApplicationInsightsTelemetry()` is the only way to enable Application Insights.
+The extension method `UseApplicationInsights()` is still supported, but it's marked as obsolete in Application Insights SDK version 2.8.0 and later. It will be removed in the next major version of the SDK. To enable Application Insights telemetry, use `AddApplicationInsightsTelemetry()` because it provides overloads to control some configuration. Also, in ASP.NET Core 3.X apps, `services.AddApplicationInsightsTelemetry()` is the only way to enable Application Insights.
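A minimal before-and-after sketch of that migration for a `Startup`-based app:

```csharp
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Before: WebHost.CreateDefaultBuilder(args).UseApplicationInsights()
        // After, with the obsolete call removed from the host builder:
        services.AddApplicationInsightsTelemetry();

        services.AddControllersWithViews();
    }
}
```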
### I'm deploying my ASP.NET Core application to Web Apps. Should I still enable the Application Insights extension from Web Apps?
If the SDK is installed at build time as shown in this article, you don't need t
* All operating systems, including Windows, Linux, and Mac. * All publish modes, including self-contained or framework dependent. * All target frameworks, including the full .NET Framework.
- * All hosting options, including Web Apps, VMs, Linux, containers, Azure Kubernetes Service, and non-Azure hosting.
- * All .NET Core versions including preview versions.
+ * All hosting options, including Web Apps, VMs, Linux, containers, AKS, and non-Azure hosting.
+ * All .NET Core versions, including preview versions.
* You can see telemetry locally when you're debugging from Visual Studio. * You can track more custom telemetry by using the `TrackXXX()` API. * You have full control over the configuration.
If the SDK is installed at build time as shown in this article, you don't need t
Yes. Feature support for the SDK is the same in all platforms, with the following exceptions: * The SDK collects [event counters](./eventcounters.md) on Linux because [performance counters](./performance-counters.md) are only supported in Windows. Most metrics are the same.
-* Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel:
+* Although `ServerTelemetryChannel` is enabled by default, if the application is running in Linux or macOS, the channel doesn't automatically create a local storage folder to keep telemetry temporarily if there are network issues. Because of this limitation, telemetry is lost when there are temporary network or server issues. To work around this issue, configure a local folder for the channel.
### [ASP.NET Core 6.0](#tab/netcore6)
This limitation isn't applicable from version [2.15.0](https://www.nuget.org/pac
### Is this SDK supported for the new .NET Core 3.X Worker Service template applications?
-This SDK requires `HttpContext`. Therefore, it doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md).
+This SDK requires `HttpContext`. It doesn't work in any non-HTTP applications, including the .NET Core 3.X Worker Service applications. To enable Application Insights in such applications by using the newly released Microsoft.ApplicationInsights.WorkerService SDK, see [Application Insights for Worker Service applications (non-HTTP applications)](worker-service.md).
## Troubleshooting
This SDK requires `HttpContext`. Therefore, it doesn't work in any non-HTTP appl
## Open-source SDK
-* [Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet).
+[Read and contribute to the code](https://github.com/microsoft/ApplicationInsights-dotnet).
For the latest updates and bug fixes, see the [release notes](./release-notes.md). ## Next steps
-* [Explore user flows](./usage-flows.md) to understand how users navigate through your app.
+* [Explore user flows](./usage-flows.md) to understand how users move through your app.
* [Configure a snapshot collection](./snapshot-debugger.md) to see the state of source code and variables at the moment an exception is thrown. * [Use the API](./api-custom-events-metrics.md) to send your own events and metrics for a detailed view of your app's performance and usage. * Use [availability tests](./monitor-web-app-availability.md) to check your app constantly from around the world.
-* [Dependency Injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection)
+* Learn about [dependency injection in ASP.NET Core](/aspnet/core/fundamentals/dependency-injection).
azure-monitor Asp Net Dependencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md
To have this data displayed in the dependency charts in Application Insights, se
Alternatively, `TelemetryClient` provides the extension methods `StartOperation` and `StopOperation`, which can be used to manually track dependencies as shown in [Outgoing dependencies tracking](custom-operations-tracking.md#outgoing-dependencies-tracking).
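A minimal sketch of that pattern (the operation name and placeholder work are illustrative; disposing the operation holder calls `StopOperation` for you):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.DataContracts;

public static class BackendCaller
{
    public static async Task CallBackendAsync(TelemetryClient telemetryClient)
    {
        using (var operation = telemetryClient.StartOperation<DependencyTelemetry>("BackendCall"))
        {
            operation.Telemetry.Type = "HTTP";
            try
            {
                await Task.Delay(10); // placeholder for the real outgoing call
                operation.Telemetry.Success = true;
            }
            catch (Exception)
            {
                operation.Telemetry.Success = false;
                throw;
            }
        }
    }
}
```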
-If you want to switch off the standard dependency tracking module, remove the reference to `DependencyTrackingTelemetryModule` in [ApplicationInsights.config](../../azure-monitor/app/configuration-with-applicationinsights-config.md) for ASP.NET applications. For ASP.NET Core applications, follow the instructions in [Application Insights for ASP.NET Core applications](asp-net-core.md#configuring-or-removing-default-telemetrymodules).
+If you want to switch off the standard dependency tracking module, remove the reference to `DependencyTrackingTelemetryModule` in [ApplicationInsights.config](../../azure-monitor/app/configuration-with-applicationinsights-config.md) for ASP.NET applications. For ASP.NET Core applications, follow the instructions in [Application Insights for ASP.NET Core applications](asp-net-core.md#configure-or-remove-default-telemetrymodules).
## Track AJAX calls from webpages
A list of the latest [currently supported modules](https://github.com/microsoft/
* [Exceptions](./asp-net-exceptions.md) * [User and page data](./javascript.md) * [Availability](./monitor-web-app-availability.md)
-* Set up custom dependency tracking for [Java](java-in-process-agent.md#add-spans-using-the-opentelemetry-annotation).
+* Set up custom dependency tracking for [Java](java-in-process-agent.md#add-spans-by-using-the-opentelemetry-annotation).
* Set up custom dependency tracking for [OpenCensus Python](./opencensus-python-dependency.md). * [Write custom dependency telemetry](./api-custom-events-metrics.md#trackdependency) * See [data model](./data-model.md) for Application Insights types and data model.
azure-monitor Continuous Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/continuous-monitoring.md
Title: Continuous monitoring of your DevOps release pipeline with Azure Pipelines and Azure Application Insights | Microsoft Docs
-description: Provides instructions to quickly set up continuous monitoring with Application Insights
+ Title: Continuous monitoring of your Azure DevOps release pipeline | Microsoft Docs
+description: This article provides instructions to quickly set up continuous monitoring with Azure Pipelines and Application Insights.
Last updated 05/01/2020
# Add continuous monitoring to your release pipeline
-Azure Pipelines integrates with Azure Application Insights to allow continuous monitoring of your DevOps release pipeline throughout the software development lifecycle.
+Azure Pipelines integrates with Application Insights to allow continuous monitoring of your Azure DevOps release pipeline throughout the software development lifecycle.
-With continuous monitoring, release pipelines can incorporate monitoring data from Application Insights and other Azure resources. When the release pipeline detects an Application Insights alert, the pipeline can gate or roll back the deployment until the alert is resolved. If all checks pass, deployments can proceed automatically from test all the way to production, without the need for manual intervention.
+With continuous monitoring, release pipelines can incorporate monitoring data from Application Insights and other Azure resources. When the release pipeline detects an Application Insights alert, the pipeline can gate or roll back the deployment until the alert is resolved. If all checks pass, deployments can proceed automatically from test all the way to production, without the need for manual intervention.
## Configure continuous monitoring 1. In [Azure DevOps](https://dev.azure.com), select an organization and project.
-
-1. On the left menu of the project page, select **Pipelines** > **Releases**.
-
-1. Drop down the arrow next to **New** and select **New release pipeline**. Or, if you don't have a pipeline yet, select **New pipeline** on the page that appears.
-
-1. On the **Select a template** pane, search for and select **Azure App Service deployment with continuous monitoring**, and then select **Apply**.
- ![New Azure Pipelines release pipeline](media/continuous-monitoring/001.png)
+1. On the left menu of the project page, select **Pipelines** > **Releases**.
+
+1. Select the dropdown arrow next to **New** and select **New release pipeline**. Or, if you don't have a pipeline yet, select **New pipeline** on the page that appears.
+
+1. On the **Select a template** pane, search for and select **Azure App Service deployment with continuous monitoring**, and then select **Apply**.
+
+ ![Screenshot that shows a new Azure Pipelines release pipeline.](media/continuous-monitoring/001.png)
1. In the **Stage 1** box, select the hyperlink to **View stage tasks.**
- ![View stage tasks](media/continuous-monitoring/002.png)
+ ![Screenshot that shows View stage tasks.](media/continuous-monitoring/002.png)
-1. In the **Stage 1** configuration pane, complete the following fields:
+1. In the **Stage 1** configuration pane, fill in the following fields:
| Parameter | Value | | - |:--|
- | **Stage name** | Provide a stage name, or leave it at **Stage 1**. |
- | **Azure subscription** | Drop down and select the linked Azure subscription you want to use.|
- | **App type** | Drop down and select your app type. |
+ | **Stage name** | Provide a stage name or leave it at **Stage 1**. |
+ | **Azure subscription** | Select the dropdown arrow and select the linked Azure subscription you want to use.|
+ | **App type** | Select the dropdown arrow and select your app type. |
| **App Service name** | Enter the name of your Azure App Service. |
- | **Resource Group name for Application Insights** | Drop down and select the resource group you want to use. |
- | **Application Insights resource name** | Drop down and select the Application Insights resource for the resource group you selected.
+ | **Resource Group name for Application Insights** | Select the dropdown arrow and select the resource group you want to use. |
+ | **Application Insights resource name** | Select the dropdown arrow and select the Application Insights resource for the resource group you selected.
-1. To save the pipeline with default alert rule settings, select **Save** at upper right in the Azure DevOps window. Enter a descriptive comment, and then select **OK**.
+1. To save the pipeline with default alert rule settings, select **Save** in the upper-right corner of the Azure DevOps window. Enter a descriptive comment and select **OK**.
## Modify alert rules
-Out of box, the **Azure App Service deployment with continuous monitoring** template has four alert rules: **Availability**, **Failed requests**, **Server response time**, and **Server exceptions**. You can add more rules, or change the rule settings to meet your service level needs.
+Out of the box, the **Azure App Service deployment with continuous monitoring** template has four alert rules: **Availability**, **Failed requests**, **Server response time**, and **Server exceptions**. You can add more rules or change the rule settings to meet your service level needs.
To modify alert rule settings:
az monitor metrics alert create -n 'ServerResponseTime_$(Release.DefinitionName)
az monitor metrics alert create -n 'ServerExceptions_$(Release.DefinitionName)' -g $(Parameters.AppInsightsResourceGroupName) --scopes $resource --condition 'count exceptions/server > 5' --description "created from Azure DevOps"; ```
-You can modify the script and add additional alert rules, modify the alert conditions, or remove alert rules that don't make sense for your deployment purposes.
+You can modify the script to add more alert rules, modify the alert conditions, or remove alert rules that don't make sense for your deployment purposes.
## Add deployment conditions
-When you add deployment gates to your release pipeline, an alert that exceeds the thresholds you set prevents unwanted release promotion. Once you resolve the alert, the deployment can proceed automatically.
+When you add deployment gates to your release pipeline, an alert that exceeds the thresholds you set prevents unwanted release promotion. After you resolve the alert, the deployment can proceed automatically.
To add deployment gates: 1. On the main pipeline page, under **Stages**, select the **Pre-deployment conditions** or **Post-deployment conditions** symbol, depending on which stage needs a continuous monitoring gate.
-
- ![Pre-deployment conditions](media/continuous-monitoring/004.png)
-
+
+ ![Screenshot that shows Pre-deployment conditions.](media/continuous-monitoring/004.png)
+ 1. In the **Pre-deployment conditions** configuration pane, set **Gates** to **Enabled**.
-
+ 1. Next to **Deployment gates**, select **Add**.
-
+ 1. Select **Query Azure Monitor alerts** from the dropdown menu. This option lets you access both Azure Monitor and Application Insights alerts.
-
- ![Query Azure Monitor alerts](media/continuous-monitoring/005.png)
-
-1. Under **Evaluation options**, enter the values you want for settings like **The time between re-evaluation of gates** and **The timeout after which gates fail**.
+
+ ![Screenshot that shows Query Azure Monitor alerts.](media/continuous-monitoring/005.png)
+
+1. Under **Evaluation options**, enter the values you want for settings like **The time between re-evaluation of gates** and **The timeout after which gates fail**.
## View release logs You can see deployment gate behavior and other release steps in the release logs. To open the logs:
-1. Select **Releases** from the left menu of the pipeline page.
-
-1. Select any release.
-
-1. Under **Stages**, select any stage to view a release summary.
-
-1. To view logs, select **View logs** in the release summary, select the **Succeeded** or **Failed** hyperlink in any stage, or hover over any stage and select **Logs**.
-
- ![View release logs](media/continuous-monitoring/006.png)
+1. Select **Releases** from the left menu of the pipeline page.
+
+1. Select any release.
+
+1. Under **Stages**, select any stage to view a release summary.
+
+1. To view logs, select **View logs** in the release summary, select the **Succeeded** or **Failed** hyperlink in any stage, or hover over any stage and select **Logs**.
+
+ ![Screenshot that shows viewing release logs.](media/continuous-monitoring/006.png)
## Next steps
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
Title: Create a new Azure Application Insights resource | Microsoft Docs
+ Title: Create a new Application Insights resource | Microsoft Docs
description: Manually set up Application Insights monitoring for a new live application. Last updated 01/28/2023
# Create an Application Insights resource
-Azure Application Insights displays data about your application in a Microsoft Azure *resource*. Creating a new resource is therefore part of [setting up Application Insights to monitor a new application][start]. After you have created your new resource, you can get its instrumentation key and use that to configure the Application Insights SDK. The instrumentation key links your telemetry to the resource.
+Application Insights displays data about your application in an Azure resource. Creating a new resource is part of [setting up Application Insights to monitor a new application][start]. After you've created your new resource, you can get its instrumentation key and use it to configure the Application Insights SDK. The instrumentation key links your telemetry to the resource.
> [!IMPORTANT]
-> On **February 29th, 2024,** [support for classic Application Insights will end](https://azure.microsoft.com/updates/we-re-retiring-classic-application-insights-on-29-february-2024). [Transition to workspace-based Application Insights](convert-classic-resource.md) to take advantage of [new capabilities](create-workspace-resource.md#new-capabilities). Newer regions introduced after February 2021 do not support creating classic Application Insights resources.
+> On **February 29, 2024,** [support for classic Application Insights will end](https://azure.microsoft.com/updates/we-re-retiring-classic-application-insights-on-29-february-2024). [Transition to workspace-based Application Insights](convert-classic-resource.md) to take advantage of [new capabilities](create-workspace-resource.md#new-capabilities). Newer regions introduced after February 2021 don't support creating classic Application Insights resources.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]
-## Sign in to Microsoft Azure
+## Sign in to Azure
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. ## Create an Application Insights resource
-Sign in to the [Azure portal](https://portal.azure.com), and create an Application Insights resource:
+Sign in to the [Azure portal](https://portal.azure.com) and create an Application Insights resource.
-![Click the `+` sign in the upper left corner. Select Developer Tools followed by Application Insights](./media/create-new-resource/new-app-insights.png)
+![Screenshot that shows selecting the + sign in the upper-left corner, Developer Tools, and Application Insights.](./media/create-new-resource/new-app-insights.png)
| Settings | Value | Description | | - |:-|:--|
- | **Name** | `Unique value` | Name that identifies the app you are monitoring. |
- | **Resource Group** | `myResourceGroup` | Name for the new or existing resource group to host App Insights data. |
- | **Region** | `East US` | Choose a location near you, or near where your app is hosted. |
- | **Resource Mode** | `Classic` or `Workspace-based` | Workspace-based resources allow you to send your Application Insights telemetry to a common Log Analytics workspace. For more information, see the [article on workspace-based resources](create-workspace-resource.md).
+ | **Name** | `Unique value` | Name that identifies the app you're monitoring. |
+ | **Resource group** | `myResourceGroup` | Name for the new or existing resource group to host Application Insights data. |
+ | **Region** | `East US` | Select a location near you or near where your app is hosted. |
+ | **Resource mode** | `Classic` or `Workspace-based` | Workspace-based resources allow you to send your Application Insights telemetry to a common Log Analytics workspace. For more information, see [Workspace-based Application Insights resources](create-workspace-resource.md).
> [!NOTE]
-> While you can use the same resource name across different resource groups, it can be beneficial to use a globally unique name. This can be useful if you plan to [perform cross resource queries](../logs/cross-workspace-query.md#identifying-an-application) as it simplifies the required syntax.
+> You can use the same resource name across different resource groups, but it can be beneficial to use a globally unique name. If you plan to [perform cross-resource queries](../logs/cross-workspace-query.md#identifying-an-application), using a globally unique name simplifies the required syntax.
-Enter the appropriate values into the required fields, and then select **Review + create**.
+Enter the appropriate values in the required fields. Select **Review + create**.
> [!div class="mx-imgBorder"]
-> ![Enter values into required fields, and then select "review + create".](./media/create-new-resource/review-create.png)
+> ![Screenshot that shows entering values in required fields and the Review + create button.](./media/create-new-resource/review-create.png)
-When your app has been created, a new pane opens. This pane is where you see performance and usage data about your monitored application.
+After your app is created, a new pane displays performance and usage data about your monitored application.
## Copy the instrumentation key
-The instrumentation key identifies the resource that you want to associate your telemetry data with. You will need to copy the instrumentation key and add it to your application's code.
+The instrumentation key identifies the resource that you want to associate with your telemetry data. You'll need to copy the instrumentation key and add it to your application's code.
## Install the SDK in your app
Install the Application Insights SDK in your app. This step depends heavily on t
Use the instrumentation key to configure [the SDK that you install in your application][start].
-The SDK includes standard modules that send telemetry without you having to write any additional code. To track user actions or diagnose issues in more detail, [use the API][api] to send your own telemetry.
+The SDK includes standard modules that send telemetry, so you don't have to write any more code. To track user actions or diagnose issues in more detail, [use the API][api] to send your own telemetry.
-## Creating a resource automatically
+## Create a resource automatically
+
+Use PowerShell or the Azure CLI to create a resource automatically.
### PowerShell
-Create a new Application Insights resource
+Create a new Application Insights resource.
```powershell New-AzApplicationInsights [-ResourceGroupName] <String> [-Name] <String> [-Location] <String> [-Kind <String>]
New-AzApplicationInsights [-ResourceGroupName] <String> [-Name] <String> [-Locat
```powershell New-AzApplicationInsights -Kind java -ResourceGroupName testgroup -Name test1027 -location eastus ```+ #### Results ```powershell
SamplingPercentage :
TenantId : {subid} ```
-For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key consult the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights).
+For the full PowerShell documentation for this cmdlet, and to learn how to retrieve the instrumentation key, see the [Azure PowerShell documentation](/powershell/module/az.applicationinsights/new-azapplicationinsights).
### Azure CLI (preview)
To access the preview Application Insights Azure CLI commands, you first need to
az extension add -n application-insights ```
-If you don't run the `az extension add` command, you will see an error message that states: `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
+If you don't run the `az extension add` command, you'll see an error message that states: `az : ERROR: az monitor: 'app-insights' is not in the 'az monitor' command group. See 'az monitor --help'.`
-Now you can run the following to create your Application Insights resource:
+Run the following command to create your Application Insights resource:
```azurecli az monitor app-insights component create --app
az monitor app-insights component create --app demoApp --location eastus --kind
} ```
-For the full Azure CLI documentation for this command, and to learn how to retrieve the instrumentation key consult the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create).
+For the full Azure CLI documentation for this command, and to learn how to retrieve the instrumentation key, see the [Azure CLI documentation](/cli/azure/monitor/app-insights/component#az-monitor-app-insights-component-create).
-## Application Insights overriding default endpoints
+## Override default endpoints
> [!WARNING]
-> Endpoint modification is not recommended. [Transition to connection strings](migrate-from-instrumentation-keys-to-connection-strings.md#migrate-from-application-insights-instrumentation-keys-to-connection-strings) to simplify configuration and eliminate the need for endpoint modification.
+> Don't modify endpoints. [Transition to connection strings](migrate-from-instrumentation-keys-to-connection-strings.md#migrate-from-application-insights-instrumentation-keys-to-connection-strings) to simplify configuration and eliminate the need for endpoint modification.
-To send data from Application Insights to certain regions, you'll need to override the default endpoint addresses. Each SDK requires slightly different modifications, all of which are described in this article. These changes require adjusting the sample code and replacing the placeholder values for `QuickPulse_Endpoint_Address`, `TelemetryChannel_Endpoint_Address`, and `Profile_Query_Endpoint_address` with the actual endpoint addresses for your specific region. The end of this article contains links to the endpoint addresses for regions where this configuration is required.
+To send data from Application Insights to certain regions, you'll need to override the default endpoint addresses. Each SDK requires slightly different modifications, all of which are described in this article.
+These changes require you to adjust the sample code and replace the placeholder values for `QuickPulse_Endpoint_Address`, `TelemetryChannel_Endpoint_Address`, and `Profile_Query_Endpoint_address` with the actual endpoint addresses for your specific region. The end of this article contains links to the endpoint addresses for regions where this configuration is required.
To send data from Application Insights to certain regions, you'll need to overri
# [.NET](#tab/net) > [!NOTE]
-> The applicationinsights.config file is automatically overwritten anytime a SDK upgrade is performed. After performing an SDK upgrade be sure to re-enter the region specific endpoint values.
+> The *applicationinsights.config* file is automatically overwritten anytime an SDK upgrade is performed. After you perform an SDK upgrade, be sure to reenter the region-specific endpoint values.
```xml <ApplicationInsights>
To send data from Application Insights to certain regions, you'll need to overri
# [.NET Core](#tab/netcore)
-Modify the appsettings.json file in your project as follows to adjust the main endpoint:
+Modify the *appsettings.json* file in your project to adjust the main endpoint:
```json "ApplicationInsights": {
Modify the appsettings.json file in your project as follows to adjust the main e
} ```
-The values for Live Metrics and the Profile Query Endpoint can only be set via code. To override the default values for all endpoint values via code, make the following changes in the `ConfigureServices` method of the `Startup.cs` file:
+The values for Live Metrics and the Profile Query endpoint can only be set via code. To override the default values for all endpoint values via code, make the following changes in the `ConfigureServices` method of the `Startup.cs` file:
```csharp using Microsoft.ApplicationInsights.Extensibility.Implementation.ApplicationId;
using Microsoft.ApplicationInsights.Extensibility.PerfCounterCollector.QuickPuls
# [Azure Functions](#tab/functions)
-For Azure Functions it is now recommended to use [connection strings](./sdk-connection-string.md?tabs=net) set in the Function's Application settings. To access Application settings for your function from within the functions pane select **Settings** > **Configuration** > **Application settings**.
+For Azure Functions, we recommend that you use [connection strings](./sdk-connection-string.md?tabs=net) set in the function's Application settings. To access Application settings for your function from within the functions pane, select **Settings** > **Configuration** > **Application settings**.
-Name: `APPLICATIONINSIGHTS_CONNECTION_STRING`
-Value: `Connection String Value`
+**Name**: `APPLICATIONINSIGHTS_CONNECTION_STRING`<br>
+**Value**: `Connection String Value`
# [Java](#tab/java)
-Modify the applicationinsights.xml file to change the default endpoint address.
+Modify the *applicationinsights.xml* file to change the default endpoint address:
```xml <?xml version="1.0" encoding="utf-8"?>
appInsights.Configuration.start();
The endpoints can also be configured through environment variables: ```
-Instrumentation Key: "APPINSIGHTS_INSTRUMENTATIONKEY"
-Profile Endpoint: "Profile_Query_Endpoint_address"
-Live Metrics Endpoint: "QuickPulse_Endpoint_Address"
+*Instrumentation key*: "APPINSIGHTS_INSTRUMENTATIONKEY"
+*Profile endpoint*: "Profile_Query_Endpoint_address"
+*Live Metrics endpoint*: "QuickPulse_Endpoint_Address"
``` # [JavaScript](#tab/js)
-The current Snippet (listed below) is version "5", the version is encoded in the snippet as sv:"#" and the [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
+The current snippet listed here is version 5. The version is encoded in the snippet as `sv:"#"`. The [current version is also available on GitHub](https://go.microsoft.com/fwlink/?linkid=2156318).
```html <script type="text/javascript"> !function(T,l,y){var S=T.location,k="script",D="instrumentationKey",C="ingestionendpoint",I="disableExceptionTracking",E="ai.device.",b="toLowerCase",w="crossOrigin",N="POST",e="appInsightsSDK",t=y.name||"appInsights";(y.name||T[e])&&(T[e]=t);var n=T[t]||function(d){var g=!1,f=!1,m={initialize:!0,queue:[],sv:"5",version:2,config:d};function v(e,t){var n={},a="Browser";return n[E+"id"]=a[b](),n[E+"type"]=a,n["ai.operation.name"]=S&&S.pathname||"_unknown_",n["ai.internal.sdkVersion"]="javascript:snippet_"+(m.sv||m.version),{time:function(){var e=new Date;function t(e){var t=""+e;return 1===t.length&&(t="0"+t),t}return e.getUTCFullYear()+"-"+t(1+e.getUTCMonth())+"-"+t(e.getUTCDate())+"T"+t(e.getUTCHours())+":"+t(e.getUTCMinutes())+":"+t(e.getUTCSeconds())+"."+((e.getUTCMilliseconds()/1e3).toFixed(3)+"").slice(2,5)+"Z"}(),iKey:e,name:"Microsoft.ApplicationInsights."+e.replace(/-/g,"")+"."+t,sampleRate:100,tags:n,data:{baseData:{ver:2}}}}var h=d.url||y.src;if(h){function a(e){var t,n,a,i,r,o,s,c,u,p,l;g=!0,m.queue=[],f||(f=!0,t=h,s=function(){var e={},t=d.connectionString;if(t)for(var n=t.split(";"),a=0;a<n.length;a++){var i=n[a].split("=");2===i.length&&(e[i[0][b]()]=i[1])}if(!e[C]){var r=e.endpointsuffix,o=r?e.location:null;e[C]="https://"+(o?o+".":"")+"dc."+(r||"services.visualstudio.com")}return e}(),c=s[D]||d[D]||"",u=s[C],p=u?u+"/v2/track":d.endpointUrl,(l=[]).push((n="SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details)",a=t,i=p,(o=(r=v(c,"Exception")).data).baseType="ExceptionData",o.baseData.exceptions=[{typeName:"SDKLoadFailed",message:n.replace(/\./g,"-"),hasFullStack:!1,stack:n+"\nSnippet failed to load ["+a+"] -- Telemetry is disabled\nHelp Link: https://go.microsoft.com/fwlink/?linkid=2128109\nHost: "+(S&&S.pathname||"_unknown_")+"\nEndpoint: "+i,parsedStack:[]}],r)),l.push(function(e,t,n,a){var i=v(c,"Message"),r=i.data;r.baseType="MessageData";var o=r.baseData;return o.message='AI (Internal): 99 message:"'+("SDK LOAD Failure: Failed to load Application Insights SDK script (See stack for details) ("+n+")").replace(/\"/g,"")+'"',o.properties={endpoint:a},i}(0,0,t,p)),function(e,t){if(JSON){var n=T.fetch;if(n&&!y.useXhr)n(t,{method:N,body:JSON.stringify(e),mode:"cors"});else if(XMLHttpRequest){var a=new XMLHttpRequest;a.open(N,t),a.setRequestHeader("Content-type","application/json"),a.send(JSON.stringify(e))}}}(l,p))}function i(e,t){f||setTimeout(function(){!t&&m.core||a()},500)}var e=function(){var n=l.createElement(k);n.src=h;var e=y[w];return!e&&""!==e||"undefined"==n[w]||(n[w]=e),n.onload=i,n.onerror=a,n.onreadystatechange=function(e,t){"loaded"!==n.readyState&&"complete"!==n.readyState||i(0,t)},n}();y.ld<0?l.getElementsByTagName("head")[0].appendChild(e):setTimeout(function(){l.getElementsByTagName(k)[0].parentNode.appendChild(e)},y.ld||0)}try{m.cookie=l.cookie}catch(p){}function t(e){for(;e.length;)!function(t){m[t]=function(){var e=arguments;g||m.queue.push(function(){m[t].apply(m,e)})}}(e.pop())}var n="track",r="TrackPage",o="TrackEvent";t([n+"Event",n+"PageView",n+"Exception",n+"Trace",n+"DependencyData",n+"Metric",n+"PageViewPerformance","start"+r,"stop"+r,"start"+o,"stop"+o,"addTelemetryInitializer","setAuthenticatedUserContext","clearAuthenticatedUserContext","flush"]),m.SeverityLevel={Verbose:0,Information:1,Warning:2,Error:3,Critical:4};var s=(d.extensionConfig||{}).ApplicationInsightsAnalytics||{};if(!0!==d[I]&&!0!==s[I]){var c="onerror";t(["_"+c]);var u=T[c];T[c]=function(e,t,n,a,i){var 
r=u&&u(e,t,n,a,i);return!0!==r&&m["_"+c]({message:e,url:t,lineNumber:n,columnNumber:a,error:i}),r},d.autoExceptionInstrumented=!0}return m}(y.cfg);function a(){y.onInit&&y.onInit(n)}(T[t]=n).queue&&0===n.queue.length?(n.queue.push(a),n.trackPageView({})):a()}(window,document,{ src: "https://js.monitor.azure.com/scripts/b/ai.2.min.js", // The SDK URL Source
-// name: "appInsights", // Global SDK Instance name defaults to "appInsights" when not supplied
-// ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. (default) = 0ms load after timeout,
-// useXhr: 1, // Use XHR instead of fetch to report failures (if available),
-crossOrigin: "anonymous", // When supplied this will add the provided value as the cross origin attribute on the script tag
-// onInit: null, // Once the application insights instance has loaded and initialized this callback function will be called with 1 argument -- the sdk instance (DO NOT ADD anything to the sdk.queue -- As they won't get called)
+// name: "appInsights", // Global SDK Instance name defaults to "appInsights" when not supplied.
+// ld: 0, // Defines the load delay (in ms) before attempting to load the sdk. -1 = block page load and add to head. Default (0 ms) = load after timeout.
+// useXhr: 1, // Use XHR instead of fetch to report failures (if available).
+crossOrigin: "anonymous", // When supplied, this will add the provided value as the cross origin attribute on the script tag.
+// onInit: null, // Once the Application Insights instance has loaded and initialized, this callback function will be called with 1 argument -- the sdk instance. (DO NOT ADD anything to the sdk.queue -- as they won't get called.)
cfg: { // Application Insights Configuration instrumentationKey:"INSTRUMENTATION_KEY", endpointUrl: "TelemetryChannel_Endpoint_Address"
cfg: { // Application Insights Configuration
``` > [!NOTE]
-> For readability and to reduce possible JavaScript errors, all of the possible configuration options are listed on a new line in snippet code above, if you don't want to change the value of a commented line it can be removed.
+> For readability and to reduce possible JavaScript errors, all the possible configuration options are listed on a new line in the preceding code snippet. If you don't want to change the value of a commented line, you can remove it.
# [Python](#tab/python)
-For guidance on modifying the ingestion endpoint for the opencensus-python SDK consult the [opencensus-python repo.](https://github.com/census-instrumentation/opencensus-python/blob/af284a92b80bcbaf5db53e7e0813f96691b4c696/contrib/opencensus-ext-azure/opencensus/ext/azure/common/__init__.py)
+For guidance on modifying the ingestion endpoint for the opencensus-python SDK, consult the [opencensus-python repo](https://github.com/census-instrumentation/opencensus-python/blob/af284a92b80bcbaf5db53e7e0813f96691b4c696/contrib/opencensus-ext-azure/opencensus/ext/azure/common/__init__.py).
### Regions that require endpoint modification
-Currently the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
+Currently, the only regions that require endpoint modifications are [Azure Government](../../azure-government/compare-azure-government-global-azure.md#application-insights) and [Azure China](/azure/china/resources-developer-guide).
-|Region | Endpoint Name | Value |
+|Region | Endpoint name | Value |
|--|:--|:--|
| Azure China | Telemetry Channel | `https://dc.applicationinsights.azure.cn/v2/track` |
| Azure China | QuickPulse (Live Metrics) | `https://live.applicationinsights.azure.cn/QuickPulseService.svc` |
Currently the only regions that require endpoint modifications are [Azure Govern
| Azure Government | QuickPulse (Live Metrics) | `https://quickpulse.applicationinsights.us/QuickPulseService.svc` |
| Azure Government | Profile Query | `https://dc.applicationinsights.us/api/profiles/{0}/appId` |
-If you currently use the [Application Insights REST API](/rest/api/application-insights/) which is normally accessed via `api.applicationinsights.io' you will need to use an endpoint that is local to your region:
+If you currently use the [Application Insights REST API](/rest/api/application-insights/), which is normally accessed via `api.applicationinsights.io`, you'll need to use an endpoint that's local to your region.
-|Region | Endpoint Name | Value |
+|Region | Endpoint name | Value |
|--|:--|:--|
| Azure China | REST API | `api.applicationinsights.azure.cn` |
| Azure Government | REST API | `api.applicationinsights.us` |

## Next steps
-* [Diagnostic Search](./diagnostic-search.md)
-* [Explore metrics](../essentials/metrics-charts.md)
-* [Write Analytics queries](../logs/log-query-overview.md)
-* To learn more about the custom modifications for Azure Government, consult the detailed guidance for [Azure monitoring and management](../../azure-government/compare-azure-government-global-azure.md#application-insights).
-* To learn more about Azure China, consult the [Azure China Playbook](/azure/china/).
+* Use [Diagnostic Search](./diagnostic-search.md).
+* [Explore metrics](../essentials/metrics-charts.md).
+* [Write Log Analytics queries](../logs/log-query-overview.md).
+* To learn more about the custom modifications for Azure Government, see the detailed guidance for [Azure monitoring and management](../../azure-government/compare-azure-government-global-azure.md#application-insights).
+* To learn more about Azure China, see the [Azure China Playbook](/azure/china/).
<!--Link references-->
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Title: Azure Monitor Application Insights Java
-description: Application performance monitoring for Java applications running in any environment without requiring code modification. Distributed tracing and application map.
+description: Application performance monitoring for Java applications running in any environment without requiring code modification. The article also discusses distributed tracing and the application map.
Last updated 12/14/2022 ms.devlang: java
# Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications
-This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. It can be used for any environment, including on-premises. After you finish the instructions in this article, you'll be able to use Azure Monitor Application Insights to monitor your application.
+This article describes how to enable and configure the OpenTelemetry-based Azure Monitor Java offering. It can be used for any environment, including on-premises. After you finish the instructions in this article, you can use Azure Monitor Application Insights to monitor your application.
[!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)]

## Get started
-Java auto-instrumentation is enabled through configuration changes; no code changes are required.
+Java auto-instrumentation is enabled through configuration changes. No code changes are required.
### Prerequisites

-- Java application using Java 8+
-- Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/)
-- Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource)
+You need:
+
+- A Java application using Java 8+.
+- An Azure subscription: [Create an Azure subscription for free](https://azure.microsoft.com/free/).
+- An Application Insights resource: [Create an Application Insights resource](create-workspace-resource.md#create-a-workspace-based-resource).
### Enable Azure Monitor Application Insights
Download the [applicationinsights-agent-3.4.7.jar](https://github.com/microsoft/
> [!WARNING]
>
-> If you are upgrading from an earlier 3.x version,
+> If you're upgrading from an earlier 3.x version:
>
> Starting from 3.4.0:
>
-> - Rate-limited sampling is now the default (if you have not configured a fixed percentage previously). By default, it will capture at most around 5 requests per second (along with their dependencies, traces and custom events). See [fixed-percentage sampling](./java-standalone-config.md#fixed-percentage-sampling) if you wish to revert to the previous behavior of capturing 100% of requests.
+> - Rate-limited sampling is now the default, if you haven't configured a fixed percentage previously. By default, it will capture at most around five requests per second, along with their dependencies, traces, and custom events. See [fixed-percentage sampling](./java-standalone-config.md#fixed-percentage-sampling) if you want to revert to the previous behavior of capturing 100% of requests.
>
> Starting from 3.3.0:
>
-> - `LoggingLevel` is not captured by default as part of Traces' custom dimension since that data is already captured in the `SeverityLevel` field. For details on how to re-enable this if needed, please see the [config options](./java-standalone-config.md#logging-level-as-a-custom-dimension)
-> - Exception records are no longer recorded for failed dependencies, they are only recorded for failed requests.
+> - `LoggingLevel` isn't captured by default as part of Traces' custom dimension because that data is already captured in the `SeverityLevel` field. For information on how to reenable it, see the [config options](./java-standalone-config.md#logging-level-as-a-custom-dimension).
+> - Exception records are no longer recorded for failed dependencies. They're only recorded for failed requests.
>
> Starting from 3.2.0:
->
-> - Controller "InProc" dependencies are no longer captured by default. For details on how to re-enable these, please see the [config options](./java-standalone-config.md#autocollect-inproc-dependencies-preview).
+>
+> - Controller `InProc` dependencies are no longer captured by default. For information on how to reenable these dependencies, see the [config options](./java-standalone-config.md#autocollect-inproc-dependencies-preview).
> - Database dependency names are now more concise, with the full (sanitized) query still present in the `data` field. HTTP dependency names are now more descriptive.
>   This change can affect custom dashboards or alerts if they relied on the previous values.
-> For details, see the [3.2.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0).
->
+> For more information, see the [3.2.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.2.0).
+>
> Starting from 3.1.0:
->
+>
> - The operation names and request telemetry names are now prefixed by the HTTP method, such as `GET` and `POST`.
>   This change can affect custom dashboards or alerts if they relied on the previous values.
-> For details, see the [3.1.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0).
+> For more information, see the [3.1.0 release notes](https://github.com/microsoft/ApplicationInsights-Java/releases/tag/3.1.0).
>

#### Point the JVM to the jar file

Add `-javaagent:"path/to/applicationinsights-agent-3.4.7.jar"` to your application's JVM args.
Add `-javaagent:"path/to/applicationinsights-agent-3.4.7.jar"` to your applicati
> [!TIP]
> For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
-> [!TIP]
-> If you develop a Spring Boot application, you can replace the JVM argument by a programmatic configuration. More [here](./java-spring-boot.md).
+If you develop a Spring Boot application, you can replace the JVM argument by a programmatic configuration. For more information, see [Using Azure Monitor Application Insights with Spring Boot](./java-spring-boot.md).
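The linked article covers the details, but as a minimal sketch of the programmatic approach (assuming the `applicationinsights-runtime-attach` dependency is on the classpath; the class name is a placeholder):

```java
import com.microsoft.applicationinsights.attach.ApplicationInsights;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyWebApplication {

    public static void main(String[] args) {
        // Attach the Application Insights agent at runtime instead of
        // passing the -javaagent JVM argument. This call should be the
        // first statement in main.
        ApplicationInsights.attach();
        SpringApplication.run(MyWebApplication.class, args);
    }
}
```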
#### Set the Application Insights connection string

1. There are two ways you can point the jar file to your Application Insights resource:
- - You can set an environment variable:
+ - Set an environment variable:
     ```console
     APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview>
     ```
- - Or you can create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.7.jar` with the following content:
+ - Create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.7.jar` with the following content:
     ```json
     {
Add `-javaagent:"path/to/applicationinsights-agent-3.4.7.jar"` to your applicati
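For reference, a minimal *applicationinsights.json* typically contains only the connection string. This is a sketch; copy the actual value from your resource's **Overview** page:

```json
{
  "connectionString": "Copy connection string from Application Insights Resource Overview"
}
```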
1. Find the connection string on your Application Insights resource.
- :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot displaying Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
-
+ :::image type="content" source="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png" alt-text="Screenshot that shows Application Insights overview and connection string." lightbox="media/migrate-from-instrumentation-keys-to-connection-strings/migrate-from-instrumentation-keys-to-connection-strings.png":::
+

#### Confirm data is flowing

Run your application and open your **Application Insights Resource** tab in the Azure portal. It can take a few minutes for data to show up in the portal.
As part of using Application Insights instrumentation, we collect and send diagn
## Configuration options
-In the `applicationinsights.json` file, you can also configure these settings:
+In the *applicationinsights.json* file, you can also configure these settings:
* Cloud role name
* Cloud role instance
In the `applicationinsights.json` file, you can also configure these settings:
* Custom dimensions
* Telemetry processors (preview)
* Autocollected logging
-* Autocollected Micrometer metrics, which include Spring Boot Actuator metrics
+* Autocollected Micrometer metrics, including Spring Boot Actuator metrics
* Heartbeat
* HTTP proxy
* Self-diagnostics

For more information, see [Configuration options](./java-standalone-config.md).
-## Auto-Instrumentation
+## Auto-instrumentation
Java 3.x includes the following auto-instrumentation.
Java 3.x includes the following auto-instrumentation.
* Spring scheduling

> [!NOTE]
-> Servlet and Netty auto-instrumentation covers the majority of Java HTTP services
-> including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
+> Servlet and Netty auto-instrumentation covers the majority of Java HTTP services, including Java EE, Jakarta EE, Spring Boot, Quarkus, and Micronaut.
### Autocollected dependencies
Autocollected dependencies without downstream distributed trace propagation:
### Autocollected metrics
-* Micrometer, which includes Spring Boot Actuator metrics
+* Micrometer, including Spring Boot Actuator metrics
* JMX Metrics

### Azure SDKs
Telemetry emitted by these Azure SDKs is automatically collected by default:
This section explains how to modify telemetry.
-### Add spans using the OpenTelemetry annotation
+### Add spans by using the OpenTelemetry annotation
-The simplest way to add your own spans is using OpenTelemetry's `@WithSpan` annotation.
+The simplest way to add your own spans is by using OpenTelemetry's `@WithSpan` annotation.
Spans populate the `requests` and `dependencies` tables in Application Insights.
Spans populate the `requests` and `dependencies` tables in Application Insights.
}
```
-By default the span will end up in the dependencies table with dependency type `InProc`.
+By default, the span will end up in the `dependencies` table with dependency type `InProc`.
-If your method represents a background job that is not already captured by auto-instrumentation,
-it is recommended to apply the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation
+If your method represents a background job that isn't already captured by auto-instrumentation,
+we recommend that you apply the attribute `kind = SpanKind.SERVER` to the `@WithSpan` annotation
so that it will end up in the Application Insights `requests` table.
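As a sketch of both cases (the annotation package shown here is an assumption that depends on the OpenTelemetry annotations artifact version you pull in):

```java
import io.opentelemetry.api.trace.SpanKind;
import io.opentelemetry.extension.annotations.WithSpan;

public class BatchProcessor {

    // Captured as a span in the dependencies table with type InProc.
    @WithSpan
    void renderReport() {
        // ...
    }

    // A background job that isn't captured by auto-instrumentation;
    // SpanKind.SERVER routes it to the requests table instead.
    @WithSpan(kind = SpanKind.SERVER)
    void runNightlyCleanup() {
        // ...
    }
}
```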
-### Add spans using the OpenTelemetry API
+### Add spans by using the OpenTelemetry API
-If the OpenTelemetry `@WithSpan` annotation above doesn't meet your needs,
-then you can add your spans using the OpenTelemetry API.
+If the preceding OpenTelemetry `@WithSpan` annotation doesn't meet your needs, you can add your spans by using the OpenTelemetry API.
> [!NOTE]
> This feature is only in 3.2.0 and later.
then you can add your spans using the OpenTelemetry API.
   </dependency>
   ```
-1. Use the `GlobalOpenTelemetry` class to create a `Tracer`
+1. Use the `GlobalOpenTelemetry` class to create a `Tracer`:
   ```java
   import io.opentelemetry.api.GlobalOpenTelemetry;
then you can add your spans using the OpenTelemetry API.
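A minimal sketch of creating a span manually (the tracer and span names are placeholders):

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

Tracer tracer = GlobalOpenTelemetry.getTracer("com.example.myapp");

Span span = tracer.spanBuilder("load-config").startSpan();
try (Scope scope = span.makeCurrent()) {
    // Work done here is attributed to the span.
} finally {
    span.end();  // Ends the span so it's exported to Application Insights.
}
```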
### Add span events
-You can use `opentelemetry-api` to create span events, which populate the traces table in Application Insights. The string passed in to `addEvent()` is saved to the _message_ field within the trace.
+You can use `opentelemetry-api` to create span events, which populate the `traces` table in Application Insights. The string passed in to `addEvent()` is saved to the `message` field within the trace.
> [!NOTE]
> This feature is only in 3.2.0 and later.
You can use `opentelemetry-api` to create span events, which populate the traces
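For example, a minimal sketch (the event name is a placeholder):

```java
import io.opentelemetry.api.trace.Span;

// Adds an event to the current span; the string lands in the message
// field of a record in the traces table.
Span.current().addEvent("cache-miss");
```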
You can use `opentelemetry-api` to add attributes to spans. These attributes can include adding a custom business dimension to your telemetry. You can also use attributes to set optional fields in the Application Insights schema, such as User ID or Client IP.
-Adding one or more span attributes populates the _customDimensions_ field in the requests, dependencies, traces, or exceptions table.
+Adding one or more span attributes populates the `customDimensions` field in the `requests`, `dependencies`, `traces`, or `exceptions` table.
> [!NOTE]
> This feature is only in 3.2.0 and later.
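A minimal sketch (the attribute key and value are placeholders for your own business dimension):

```java
import io.opentelemetry.api.trace.Span;

// Each attribute becomes a key under customDimensions on the
// corresponding requests, dependencies, traces, or exceptions record.
Span.current().setAttribute("mycompany.tenant", "contoso");
```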
You can use `opentelemetry-api` to update the status of a span and record except
   </dependency>
   ```
-1. Set status to error and record an exception in your code:
+1. Set status to `error` and record an exception in your code:
   ```java
   import io.opentelemetry.api.trace.Span;
You can use `opentelemetry-api` to update the status of a span and record except
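A minimal sketch of that step:

```java
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.StatusCode;

Span span = Span.current();
try {
    // ...work that might fail...
} catch (Exception e) {
    // Marks the span as failed and attaches the exception details.
    span.setStatus(StatusCode.ERROR, "processing failed");
    span.recordException(e);
}
```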
#### Set the user ID
-Populate the _user ID_ field in the requests, dependencies, or exceptions table.
+Populate the `user ID` field in the `requests`, `dependencies`, or `exceptions` table.
-> [!IMPORTANT]
-> Consult applicable privacy laws before you set Authenticated User ID.
+Consult applicable privacy laws before you set the Authenticated User ID.
> [!NOTE]
> This feature is only in 3.2.0 and later.
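A minimal sketch, assuming the OpenTelemetry `enduser.id` attribute is the one mapped to the user ID field (the value is a placeholder):

```java
import io.opentelemetry.api.trace.Span;

// Sets the authenticated user ID on the current span.
Span.current().setAttribute("enduser.id", "user-123");
```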
We currently support Micrometer, popular logging frameworks, and the Application
The following table represents currently supported custom telemetry types that you can enable to supplement the Java 3.x agent. To summarize:

-- Custom metrics are supported through micrometer.
+- Custom metrics are supported through Micrometer.
- Custom exceptions and traces are supported through logging frameworks.
- Custom requests, dependencies, metrics, and exceptions are supported through the OpenTelemetry API.
- The remaining telemetry types are supported through the [Application Insights Classic SDK](#send-custom-telemetry-by-using-the-application-insights-classic-sdk).
The following table represents currently supported custom telemetry types that y
   </dependency>
   ```
-2. Use the Micrometer [global registry](https://micrometer.io/docs/concepts#_global_registry) to create a meter:
+1. Use the Micrometer [global registry](https://micrometer.io/docs/concepts#_global_registry) to create a meter:
   ```java
   static final Counter counter = Metrics.counter("test.counter");
   ```
-3. Use the counter to record metrics:
+1. Use the counter to record metrics:
   ```java
   counter.increment();
   ```
-4. The metrics will be ingested into the
+1. The metrics will be ingested into the
[customMetrics](/azure/azure-monitor/reference/tables/custommetrics) table, with tags captured in the `customDimensions` column. You can also view the metrics in the
- [Metrics explorer](../essentials/metrics-getting-started.md) under the "Log-based metrics" metric namespace.
+ [metrics explorer](../essentials/metrics-getting-started.md) under the `Log-based metrics` metric namespace.
> [!NOTE]
- > Application Insights Java replaces all non-alphanumeric characters (except dashes) in the Micrometer metric name
- > with underscores, so the `test.counter` metric above will show up as `test_counter`.
+ > Application Insights Java replaces all non-alphanumeric characters (except dashes) in the Micrometer metric name with underscores. As a result, the preceding `test.counter` metric will show up as `test_counter`.
### Send custom traces and exceptions by using your favorite logging framework

Logback, Log4j, and java.util.logging are auto-instrumented. Logging performed via these logging frameworks is autocollected as trace and exception telemetry.
-By default, logging is only collected when that logging is performed at the INFO level or above.
+By default, logging is only collected when that logging is performed at the INFO level or higher.
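For example, a sketch using SLF4J (which Logback implements; both are auto-instrumented):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

Logger logger = LoggerFactory.getLogger("my.app");

logger.debug("Below INFO: not collected at the default level.");
logger.info("INFO and above: collected as trace telemetry.");
logger.error("Logged exceptions also produce exception telemetry.",
        new RuntimeException("example failure"));
```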
To change this level, see the [configuration options](./java-standalone-config.md#auto-collected-logging).

Structured logging (attaching custom dimensions to your logs) can be accomplished in these ways:
Structured logging (attaching custom dimensions to your logs) can be accomplishe
   </dependency>
   ```
-1. Create a TelemetryClient:
+1. Create a `TelemetryClient` instance:
   ```java
   static final TelemetryClient telemetryClient = new TelemetryClient();
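   // For example (a sketch; "WinGame" is a placeholder event name), you can
   // then send a custom event with the classic API:
   telemetryClient.trackEvent("WinGame");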
To provide feedback:
- Fill out the OpenTelemetry community's [customer feedback survey](https://docs.google.com/forms/d/e/1FAIpQLScUt4reClurLi60xyHwGozgM9ZAz8pNAfBHhbTZ4gFWaaXIRQ/viewform).
- Tell Microsoft about yourself by joining our [OpenTelemetry Early Adopter Community](https://aka.ms/AzMonOTel/).
-- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).
+- Engage with other Azure Monitor users in the [Microsoft Tech Community](https://techcommunity.microsoft.com/t5/azure-monitor/bd-p/AzureMonitor).
- Make a feature request at the [Azure Feedback Forum](https://feedback.azure.com/d365community/forum/8849e04d-1325-ec11-b6e6-000d3a4f09d0).

## Next steps
azure-monitor Live Stream https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/live-stream.md
public void ConfigureServices(IServiceCollection services)
}
```
-For more information on how to configure ASP.NET Core applications, see [Configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configuring-or-removing-default-telemetrymodules).
+For more information on how to configure ASP.NET Core applications, see [Configuring telemetry modules in ASP.NET Core](./asp-net-core.md#configure-or-remove-default-telemetrymodules).
#### WorkerService
azure-monitor Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/powershell.md
Azure should set up the resources in strict order. To make sure one setup comple
See these other automation articles:
-* [Create an Application Insights resource](./create-new-resource.md#creating-a-resource-automatically) via a quick method without using a template.
+* [Create an Application Insights resource](./create-new-resource.md#create-a-resource-automatically) via a quick method without using a template.
* [Create web tests](../alerts/resource-manager-alerts-metric.md#availability-test-with-metric-alert).
* [Send Azure Diagnostics to Application Insights](../agents/diagnostics-extension-to-application-insights.md).
* [Create release annotations](annotations.md).
azure-monitor Usage Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-flows.md
Title: Application Insights User Flows analyzes navigation flows
-description: Analyze how users navigate between the pages and features of your web app.
+ Title: Application Insights User Flows analyzes navigation flows
+description: Analyze how users move between the pages and features of your web app.
Last updated 07/30/2021
# Analyze user navigation patterns with User Flows in Application Insights
-![Application Insights User Flows tool](./media/usage-flows/flows.png)
+![Screenshot that shows the Application Insights User Flows tool.](./media/usage-flows/flows.png)
-The User Flows tool visualizes how users navigate between the pages and features of your site. It's great for answering questions like:
+The User Flows tool visualizes how users move between the pages and features of your site. It's great for answering questions like:
-* How do users navigate away from a page on your site?
-* What do users click on a page on your site?
+* How do users move away from a page on your site?
+* What do users select on a page on your site?
* Where are the places that users churn most from your site?
* Are there places where users repeat the same action over and over?
-The User Flows tool starts from an initial page view, custom event, or exception that you specify. Given this initial event, User Flows shows the events that happened before and afterwards during user sessions. Lines of varying thickness show how many times each path was followed by users. Special **Session Started** nodes show where the subsequent nodes began a session. **Session Ended** nodes show how many users sent no page views or custom events after the preceding node, highlighting where users probably left your site.
+The User Flows tool starts from an initial page view, custom event, or exception that you specify. From this initial event, User Flows shows the events that happened before and after user sessions. Lines of varying thickness show how many times users followed each path. Special **Session Started** nodes show where the subsequent nodes began a session. **Session Ended** nodes show how many users sent no page views or custom events after the preceding node, highlighting where users probably left your site.
> [!NOTE]
> Your Application Insights resource must contain page views or custom events to use the User Flows tool. [Learn how to set up your app to collect page views automatically with the Application Insights JavaScript SDK](./javascript.md).
>
->
-## Start by choosing an initial event
+## Choose an initial event
-![Choose an initial event for User Flows](./media/usage-flows/initial-event.png)
+![Screenshot that shows choosing an initial event for User Flows.](./media/usage-flows/initial-event.png)
To begin answering questions with the User Flows tool, choose an initial page view, custom event, or exception to serve as the starting point for the visualization:
-1. Click the link in the **What do users do after...?** title, or click the **Edit** button.
-2. Select a page view, custom event, or exception from the **Initial event** dropdown.
-3. Click **Create graph**.
+1. Select the link in the **What do users do after?** title or select **Edit**.
+1. Select a page view, custom event, or exception from the **Initial event** dropdown list.
+1. Select **Create graph**.
-The "Step 1" column of the visualization shows what users did most frequently just after the initial event, ordered top to bottom from most to least frequent. The "Step 2" and subsequent columns show what users did thereafter, creating a picture of all the ways users have navigated through your site.
+The **Step 1** column of the visualization shows what users did most frequently after the initial event. The items are ordered from top to bottom and from most to least frequent. The **Step 2** and subsequent columns show what users did next. The information creates a picture of all the ways that users moved through your site.
-By default, the User Flows tool randomly samples only the last 24 hours of page views and custom event from your site. You can increase the time range and change the balance of performance and accuracy for random sampling in the Edit menu.
+By default, the User Flows tool randomly samples only the last 24 hours of page views and custom events from your site. You can increase the time range and change the balance of performance and accuracy for random sampling on the **Edit** menu.
-If some of the page views, custom events, and exceptions arenΓÇÖt relevant to you, click the **X** on the nodes you want to hide. Once you've selected the nodes you want to hide, click the **Create graph** button below the visualization. To see all of the nodes you've hidden, click the **Edit** button, then look at the **Excluded events** section.
-If some of the page views, custom events, and exceptions aren't relevant to you, click the **X** on the nodes you want to hide. Once you've selected the nodes you want to hide, click the **Create graph** button below the visualization. To see all of the nodes you've hidden, click the **Edit** button, then look at the **Excluded events** section.
-If page views or custom events are missing that you expect to see on the visualization:
+If page views or custom events are missing that you expect to see in the visualization:
-* Check the **Excluded events** section in the **Edit** menu.
+* Check the **Excluded events** section on the **Edit** menu.
* Use the plus buttons on **Others** nodes to include less-frequent events in the visualization.
-* If the page view or custom event you expect is sent infrequently by users, try increasing the time range of the visualization in the **Edit** menu.
-* Make sure the page view, custom event, or exception you expect is set up to be collected by the Application Insights SDK in the source code of your site. [Learn more about collecting custom events.](./api-custom-events-metrics.md)
+* If the page view or custom event you expect is sent infrequently by users, increase the time range of the visualization on the **Edit** menu.
+* Make sure the page view, custom event, or exception you expect is set up to be collected by the Application Insights SDK in the source code of your site. Learn more about [collecting custom events](./api-custom-events-metrics.md).
+
+If you want to see more steps in the visualization, use the **Previous steps** and **Next steps** dropdown lists above the visualization.
+
+## After users visit a page or feature, where do they go and what do they select?
-If you want to see more steps in the visualization, use the **Previous steps** and **Next steps** dropdowns above the visualization.
+![Screenshot that shows using User Flows to understand where users select.](./media/usage-flows/one-step.png)
-## After visiting a page or feature, where do users go and what do they click?
+If your initial event is a page view, the first column (**Step 1**) of the visualization is a quick way to understand what users did immediately after they visited the page.
-![Use User Flows to understand where users click](./media/usage-flows/one-step.png)
+Open your site in a window next to the User Flows visualization. Compare your expectations of how users interact with the page to the list of events in the **Step 1** column. Often, a UI element on the page that seems insignificant to your team can be among the most used on the page. It can be a great starting point for design improvements to your site.
-If your initial event is a page view, the first column ("Step 1") of the visualization is a quick way to understand what users did immediately after visiting the page. Try opening your site in a window next to the User Flows visualization. Compare your expectations of how users interact with the page to the list of events in the "Step 1" column. Often, a UI element on the page that seems insignificant to your team can be among the most-used on the page. It can be a great starting point for design improvements to your site.
+If your initial event is a custom event, the first column shows what users did after they performed that action. As with page views, consider if the observed behavior of your users matches your team's goals and expectations.
-If your initial event is a custom event, the first column shows what users did just after performing that action. As with page views, consider if the observed behavior of your users matches your team's goals and expectations. If your selected initial event is "Added Item to Shopping Cart", for example, look to see if "Go to Checkout" and "Completed Purchase" appear in the visualization shortly thereafter. If user behavior is different from your expectations, use the visualization to understand how users are getting "trapped" by your site's current design.
+If your selected initial event is **Added Item to Shopping Cart**, for example, look to see if **Go to Checkout** and **Completed Purchase** appear in the visualization shortly thereafter. If user behavior is different from your expectations, use the visualization to understand how users are getting "trapped" by your site's current design.
## Where are the places that users churn most from your site?
-Watch for **Session Ended** nodes that appear high-up in a column in the visualization, especially early in a flow. This means many users probably churned from your site after following the preceding path of pages and UI interactions. Sometimes churn is expected - after completing a purchase on an eCommerce site, for example - but usually churn is a sign of design problems, poor performance, or other issues with your site that can be improved.
+Watch for **Session Ended** nodes that appear high up in a column in the visualization, especially early in a flow. This positioning means many users probably churned from your site after they followed the preceding path of pages and UI interactions.
-Keep in mind, that **Session Ended** nodes are based only on telemetry collected by this Application Insights resource. If Application Insights doesn't receive telemetry for certain user interactions, users could still have interacted with your site in those ways after the User Flows tool says the session ended.
+Sometimes churn is expected. For example, it's expected after a user makes a purchase on an e-commerce site. But usually churn is a sign of design problems, poor performance, or other issues with your site that can be improved.
+
+Keep in mind that **Session Ended** nodes are based only on telemetry collected by this Application Insights resource. If Application Insights doesn't receive telemetry for certain user interactions, users might have interacted with your site in those ways after the User Flows tool says the session ended.
## Are there places where users repeat the same action over and over?
-Look for a page view or custom event that is repeated by many users across subsequent steps in the visualization. This usually means that users are performing repetitive actions on your site. If you find repetition, think about changing the design of your site or adding new functionality to reduce repetition. For example, adding bulk edit functionality if you find users performing repetitive actions on each row of a table element.
+Look for a page view or custom event that's repeated by many users across subsequent steps in the visualization. This activity usually means that users are performing repetitive actions on your site. If you find repetition, think about changing the design of your site or adding new functionality to reduce repetition. For example, you might add bulk edit functionality if you find users performing repetitive actions on each row of a table element.
## Frequently asked questions
-### Does the initial event represent the first time the event appears in a session, or any time it appears in a session?
+This section provides answers to common questions.
+
+### Does the initial event represent the first time the event appears in a session or any time it appears in a session?
-The initial event on the visualization only represents the first time a user sent that page view or custom event during a session. If users can send the initial event multiple times in a session, then the "Step 1" column only shows how users behave after the *first* instance of initial event, not all instances.
+The initial event on the visualization only represents the first time a user sent that page view or custom event during a session. If users can send the initial event multiple times in a session, then the **Step 1** column only shows how users behave after the *first* instance of an initial event, not all instances.
-### Some of the nodes in my visualization are too high-level. For example, a node that just says "Button Clicked." How can I break it down into more detailed nodes?
+### Some of the nodes in my visualization have a level that's too high. How can I get more detailed nodes?
-Use the **Split by** options in the **Edit** menu:
+Use the **Split by** options on the **Edit** menu:
-1. Choose the event you want to break down in the **Event** menu.
-2. Choose a dimension in the **Dimension** menu. For example, if you have an event called "Button Clicked," try a custom property called "Button Name."
+1. Select the event you want to break down on the **Event** menu.
+1. Select a dimension on the **Dimension** menu. For example, if you have an event called **Button Clicked**, try a custom property called **Button Name**.
## Next steps

* [Usage overview](usage-overview.md)
-* [Users, Sessions, and Events](usage-segmentation.md)
+* [Users, sessions, and events](usage-segmentation.md)
* [Retention](usage-retention.md)
-* [Adding custom events to your app](./api-custom-events-metrics.md)
-
+* [Adding custom events to your app](./api-custom-events-metrics.md)
azure-monitor Usage Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/usage-overview.md
In the web app initializer, such as Global.asax.cs:
> [!NOTE]
> Adding an initializer by using `ApplicationInsights.config` or `TelemetryConfiguration.Active` isn't valid for ASP.NET Core applications.
-For [ASP.NET Core](asp-net-core.md#adding-telemetryinitializers) applications, adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown here. This step is done in the `ConfigureServices` method of your `Startup.cs` class.
+For [ASP.NET Core](asp-net-core.md#add-telemetryinitializers) applications, adding a new telemetry initializer is done by adding it to the Dependency Injection container, as shown here. This step is done in the `ConfigureServices` method of your `Startup.cs` class.
```csharp
using Microsoft.ApplicationInsights.Extensibility;
azure-monitor Azure Cli Metrics Alert Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/azure-cli-metrics-alert-sample.md
These samples create metric alert monitors in Azure Monitor by using Azure CLI commands. The first sample creates an alert for a virtual machine. The second command creates an alert that includes a dimension for an App Service Plan.

## Create an alert
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Since Azure Monitor charges for the collection of data, your goal should be to c
| Recommendation | Description |
|:--|:--|
-| Configure VM agents to collect only critical events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-workloads.md#controlling-costs) for guidance on data to collect and strategies for using XPath queries and transformations to limit it.|
+| Configure VM agents to collect only critical events. | Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. See [Monitor virtual machines with Azure Monitor: Workloads](vm/monitor-virtual-machine-data-collection.md#controlling-costs) for guidance on data to collect and strategies for using XPath queries and transformations to limit it.|
| Ensure that VMs aren't sending duplicate data. | Any configuration that uses multiple agents on a single machine or where you multi-home agents to send data to multiple workspaces may incur charges for the same data multiple times. If you do multi-home agents, make sure you're sending unique data to each workspace. See [Analyze usage in Log Analytics workspace](logs/analyze-usage.md) for guidance on analyzing your collected data to make sure you aren't collecting duplicate data. If you're migrating between agents, continue to use the Log Analytics agent until you [migrate to the Azure Monitor agent](./agents/azure-monitor-agent-migration.md) rather than using both together unless you can ensure that each is collecting unique data. |

#### Container insights
azure-monitor Container Insights Log Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-log-alerts.md
You might also decide not to split when you want a condition on multiple resourc
You might want to see a list of the alerts by affected computer. You can use a custom workbook that uses a custom [resource graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook.

## Create a log query alert rule
-To create a log query alert rule by using the portal, see [this example of a log query alert](../vm/monitor-virtual-machine-alerts.md#example-log-query-alert), which provides a complete walkthrough. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article.
+To create a log query alert rule by using the portal, see [this example of a log query alert](../alerts/tutorial-log-alert.md), which provides a complete walkthrough. You can use these same processes to create alert rules for AKS clusters by using queries similar to the ones in this article.
To create a query alert rule by using an Azure Resource Manager (ARM) template, see [Resource Manager template samples for log alert rules in Azure Monitor](../alerts/resource-manager-alerts-log.md). You can use these same processes to create ARM templates for the log queries in this article.
azure-monitor Azure Cli Log Analytics Workspace Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/azure-cli-log-analytics-workspace-sample.md
Use the Azure CLI commands described here to manage your log analytics workspace
[!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)]

## Create a workspace for Monitor Logs
azure-monitor Vmext Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/vmext-troubleshoot.md
If the *Microsoft Monitoring Agent* VM extension is not installing or reporting,
1. Check if the Azure VM agent is installed and working correctly by using the steps in [KB 2965986](https://support.microsoft.com/kb/2965986#mt1).
   * You can also review the VM agent log file `C:\WindowsAzure\logs\WaAppAgent.log`.
   * If the log does not exist, the VM agent is not installed.
- * [Install the Azure VM Agent](../vm/monitor-virtual-machine.md#agents)
+ * [Install the Azure VM Agent](../../virtual-machines/extensions/agent-windows.md#install-the-vm-agent)
2. Review the Microsoft Monitoring Agent VM extension log files in `C:\Packages\Plugins\Microsoft.EnterpriseCloud.Monitoring.MicrosoftMonitoringAgent`.
3. Ensure the virtual machine can run PowerShell scripts.
4. Ensure permissions on `C:\Windows\temp` haven't been changed.
azure-monitor Monitor Virtual Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-agent.md
+
+ Title: 'Monitor virtual machines with Azure Monitor: Deploy agent'
+description: Learn how to deploy the Azure Monitor agent to your virtual machines for monitoring in Azure Monitor. Monitor virtual machines and their workloads with an Azure Monitor guide.
++++ Last updated : 01/05/2023++++
+# Monitor virtual machines with Azure Monitor: Deploy agent
+This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to deploy the Azure Monitor agent to your Azure and hybrid virtual machines in Azure Monitor.
+
+> [!NOTE]
+> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
+
+Any monitoring tool, such as Azure Monitor, requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor uses the [Azure Monitor agent](../agents/agents-overview.md), which supports virtual machines in Azure, other cloud environments, and on-premises.
+
+## Legacy agents
+The Azure Monitor agent replaces the legacy agents, which are still available but should be used only if you require particular functionality that's not yet available in the Azure Monitor agent. Most users can use Azure Monitor without the legacy agents.
+
+The legacy agents include the following:
+
+- [Log Analytics agent](../agents/log-analytics-agent.md): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. This agent is the same agent used for System Center Operations Manager.
+- [Azure Diagnostic extension](../agents/diagnostics-extension-overview.md): Supports Azure virtual machines only. Sends data to Azure Monitor Metrics, Azure Event Hubs, and Azure Storage.
+
+See [Supported services and features](../agents/agents-overview.md#supported-services-and-features) for the current features supported by Azure Monitor agent. See [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md) for details on migrating to the Azure Monitor agent if you already have the Log Analytics agent deployed.
+
+## Prerequisites
+
+### Create a Log Analytics workspace
+You don't need a Log Analytics workspace to deploy the Azure Monitor agent, but you will need one to collect the data that it sends. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data.
+
+Many environments use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're getting started with Azure Monitor, start with a single workspace and consider creating more workspaces as your requirements evolve. VM insights will create a default workspace, which you can use to get started quickly.
+
+For complete details on logic that you should consider for designing a workspace configuration, see [Design a Log Analytics workspace configuration](../logs/workspace-design.md).
+
+### Workspace permissions
+The access mode of the workspace defines which users can access different sets of data. For details on how to define your access mode and configure permissions, see [Manage access to log data and workspaces in Azure Monitor](../logs/manage-access.md). If you're just getting started with Azure Monitor, consider accepting the defaults when you create your workspace and configure its permissions later.
+
+## Multihoming agents
+Multihoming refers to a virtual machine that connects to multiple workspaces. There's typically little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces most likely creates duplicate data in each workspace, which increases your overall cost. You can combine data from multiple workspaces by using [cross-workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md).
+
+One reason you might consider multihoming, though, is if you have an environment with Microsoft Defender for Cloud or Microsoft Sentinel stored in a workspace that's separate from Azure Monitor. A machine being monitored by each service needs to send data to each workspace.
+
+## Prepare hybrid machines
+A hybrid machine is any machine not running in Azure. It might be a virtual machine running in another cloud or with a hosting provider, or a virtual or physical machine running on-premises in your datacenter. Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) on hybrid machines so you can manage them similarly to your Azure virtual machines. You can use VM insights in Azure Monitor to use the same process to enable monitoring for Azure Arc-enabled servers as you do for Azure virtual machines. For a complete guide on preparing your hybrid machines for Azure, see [Plan and deploy Azure Arc-enabled servers](../../azure-arc/servers/plan-at-scale-deployment.md). This task includes enabling individual machines and using [Azure Policy](../../governance/policy/overview.md) to enable your entire hybrid environment at scale.
+
+There's no additional cost for Azure Arc-enabled servers, but there might be some cost for different options that you enable. For details, see [Azure Arc pricing](https://azure.microsoft.com/pricing/details/azure-arc/). There is a cost for the data collected in the workspace after your hybrid machines are onboarded, but this is the same as for an Azure virtual machine.
+
+### Network requirements
+The Azure Monitor agent for both Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. The Dependency agent uses the Azure Monitor agent for all communication, so it doesn't require any other ports. For details on how to configure your firewall and proxy, see [Network requirements](../agents/log-analytics-agent.md#network-requirements).
++
+### Log Analytics gateway
+With the Log Analytics gateway, you can channel communications from your on-premises machines through a single gateway. Azure Arc doesn't use the gateway, but its Connected Machine agent is required to install Azure Monitor agent. For details on how to configure and use the Log Analytics gateway, see [Log Analytics gateway](../agents/gateway.md).
+
+### Azure Private Link
+By using Azure Private Link, you can create a private endpoint for your Log Analytics workspace. After it's configured, any connections to the workspace must be made through this private endpoint. Private Link works by using DNS overrides, so there's no configuration requirement on individual agents. For details on Private Link, see [Use Azure Private Link to securely connect networks to Azure Monitor](../logs/private-link-security.md). For specific guidance on configuring private link for your virtual machines, see [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md).
++
+## Agent deployment options
+The Azure Monitor agent is implemented as a [virtual machine extension](../../virtual-machines/extensions/overview.md), so you can install it by using a variety of standard methods, including PowerShell, the Azure CLI, and Resource Manager templates. See [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md) for details on each. Other notable methods for installation are described in the following sections.
+
+### Azure Policy
+If you have a significant number of virtual machines, you should deploy the agent using Azure Policy as described in [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md?tabs=azure-portal#use-azure-policy). This will ensure that the agent is automatically added to existing virtual machines and any new ones that you deploy. See [Enable VM insights by using Azure Policy](vminsights-enable-policy.md) for deploying the agent with VM insights.
+
+### Data collection rule in the Azure portal
+When you create a data collection rule in the Azure portal as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md), you have the option of specifying virtual machines to receive it. The Azure Monitor agent will be automatically installed on any machines that don't already have it.
+
+### VM insights
+VM insights provides simplified onboarding of agents in the Azure portal. With a single click for a particular machine, it installs the Azure Monitor agent, connects to a workspace, and starts collecting performance data. You can optionally have it install the dependency agent and collect processes and dependency data to enable the map feature of VM insights.
+
+You can enable VM insights on individual machines by using the same methods for Azure virtual machines and Azure Arc-enabled servers. These methods include onboarding individual machines with the Azure portal or Azure Resource Manager templates or enabling machines at scale by using Azure Policy. For different options to enable VM insights for your machines, see [Enable VM insights overview](vminsights-enable-overview.md). To create a policy that automatically enables VM insights on any new machines as they're created, see [Enable VM insights by using Azure Policy](vminsights-enable-policy.md).
++
+### Windows client installer
+Use the [Windows client installer](../agents/azure-monitor-agent-windows-client.md) to install the agent on Windows clients such as Windows 11. For different options for deploying the agent on a single machine or as part of a script, see [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md?tabs=azure-portal#install).
+
+## Next steps
+
+* [Configure data collection for machines with the Azure Monitor agent](monitor-virtual-machine-data-collection.md).
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Previously updated : 06/28/2022 Last updated : 01/11/2023

# Monitor virtual machines with Azure Monitor: Alerts
-This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It provides guidance on creating alert rules for your virtual machines and their guest operating systems. [Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. There are no preconfigured alert rules for virtual machines, but you can create your own based on data collected by VM insights.
+This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). [Alerts in Azure Monitor](../alerts/alerts-overview.md) proactively notify you of interesting data and patterns in your monitoring data. There are no preconfigured alert rules for virtual machines, but you can create your own based on data you collect from the Azure Monitor agent. This article presents alerting concepts specific to virtual machines and common alert rules used by other Azure Monitor customers.
> [!NOTE]
-> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md), [Tutorial: Create a metric alert for an Azure resource](../alerts/tutorial-metric-alert.md), or [Tutorial: Create alert when Azure virtual machine is unavailable](tutorial-monitor-vm-alert.md).
+> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md). To quickly enable a recommended set of alerts, see [Enable recommended alert rules for Azure virtual machine](tutorial-monitor-vm-alert-recommended.md).
> [!IMPORTANT]
> Most alert rules have a cost that's dependent on the type of rule, how many dimensions it includes, and how frequently it's run. Before you create any alert rules, refer to **Alert rules** in [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-## Choose the alert type
-The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md).
-The type of alert rule that you create for a particular scenario depends on where the data is located that you're alerting on. You might have cases where data for a particular alerting scenario is available in both Metrics and Logs, and you'll need to determine which rule type to use. You might also have flexibility in how you collect certain data and let your decision of alert rule type drive your decision for data collection method.
+## Data collection
+Alert rules inspect data that's already been collected in Azure Monitor. You need to ensure that data is being collected for a particular scenario before you can create an alert rule. See [Monitor virtual machines with Azure Monitor: Collect data](monitor-virtual-machine-data-collection.md) for guidance on configuring data collection for a variety of scenarios including all of the alert rules in this article.
-Typically, the best strategy is to use metric alerts instead of log alerts when possible because they're more responsive and stateful. To use metric alerts, the data you're alerting on must be available in Metrics. VM insights currently sends all of its data to Logs, so you must install the Azure Monitor agent to use metric alerts with data from the guest operating system. Use Log query alerts with metric data when it's unavailable in Metrics or if you require logic beyond the relatively simple logic for a metric alert rule.
+## Recommended alert rules
+Azure Monitor provides a set of [recommended alert rules](tutorial-monitor-vm-alert-availability.md) that you can quickly enable for any Azure virtual machine. These are a great starting point for basic monitoring but alone will not provide sufficient alerting for most enterprise implementations for the following reasons:
-### Metric alerts
-[Metric alert rules](../alerts/alerts-metric.md) are useful for alerting when a particular metric exceeds a threshold. An example is when the CPU of a machine is running high. The target of a metric alert rule can be a specific machine, a resource group, or a subscription. In this instance, you can create a single rule that applies to a group of machines.
+- Recommended alerts only apply to Azure virtual machines and not hybrid machines.
+- Recommended alerts only include host metrics and not guest metrics or logs. These are useful to monitor the health of the machine itself but give you minimal visibility into the workloads and applications running on the machine.
+- Recommended alerts are associated with individual machines, which creates an excessive number of alert rules. Instead of relying on this method for each machine, see [Scaling alert rules](#scaling-alert-rules) for strategies on using a minimal number of alert rules for multiple machines.
-Metric rules for virtual machines can use the following data:
+## Alert types
-- Host metrics for Azure virtual machines, which are collected automatically. -- Metrics that are collected by the Azure Monitor agent from the guest operating system.
+The most common types of alert rules in Azure Monitor are [metric alerts](../alerts/alerts-metric.md) and [log query alerts](../alerts/alerts-log-query.md).
+The type of alert rule that you create for a particular scenario depends on where the data that you're alerting on is located. You might have cases where data for a particular alerting scenario is available in both Metrics and Logs, and you'll need to determine which rule type to use. You might also have flexibility in how you [collect certain data](monitor-virtual-machine-data-collection.md) and let your decision of alert rule type drive your decision for data collection method.
-> [!NOTE]
-> When VM insights supports the Azure Monitor agent, which is currently in public preview, it sends performance data from the guest operating system to Metrics so that you can use metric alerts.
+
+### Metric alerts
+Common uses for metric alerts include:
+- Alert when a particular metric exceeds a threshold. An example is when the CPU of a machine is running high.
+
+Data sources for metric alerts include:
+- Host metrics for Azure virtual machines, which are collected automatically.
+- Metrics collected by the Azure Monitor agent from the guest operating system.
### Log alerts
-[Log alerts](../alerts/alerts-unified-log.md) can measure two different things which can be used to monitor virtual machines in different scenarios:
+Common uses for log alerts include:
+- Alert when a particular event or pattern of events from the Windows event log or Syslog is found. These alert rules typically measure table rows returned from the query.
+- Alert based on a calculation of numeric data across multiple machines. These alert rules will typically measure the calculation of a numeric column in the query results.
+
+Data sources for log alerts include:
+- All data collected in a Log Analytics workspace.
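For example, a row-count style log alert might use a query like the following sketch. The `Event` table and the `EventLevelName` filter assume that you're collecting Windows events with a data collection rule; adjust them to match your environment.

```kusto
// Count error events per computer in 15-minute bins (illustrative).
Event
| where EventLevelName == "Error"
| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
```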
+## Scaling alert rules
+Since you may have many virtual machines that require the same monitoring, you don't want to create individual alert rules for each one. You also want to ensure that new machines are covered automatically as they're created. There are different strategies to limit the number of alert rules you need to manage, depending on the type of rule. Each of these strategies depends on understanding the target resource of the alert rule.
+
+### Metric alert rules
+Virtual machines support multiple resource metric alert rules as described in [Monitor multiple resources](../alerts/alerts-types.md#metric-alerts). This allows you to create a single metric alert rule that applies to all virtual machines in a resource group or subscription within the same region. Start with the [recommended alerts](#recommended-alert-rules) and [create a corresponding rule](../alerts/alerts-create-new-alert-rule.md) for each, using your subscription or a resource group as the target resource. You'll need to create duplicate rules for each region if you have machines in multiple regions.
-- [Result count](../alerts/alerts-unified-log.md#result-count): Counts the number of rows returned by the query, and can be used to work with events such as Windows event logs, syslog, application exceptions.-- [Calculation of a value](../alerts/alerts-unified-log.md#calculation-of-a-value): Makes a calculation based on a numeric column, and can be used to include any number of resources. For example, CPU percentage.
+As you identify requirements for additional metric alert rules, use this same strategy of using a subscription or resource group as the target resource to minimize the number of alert rules you need to manage and ensure that they're automatically applied to any new machines.
-### Targeting resources and dimensions
+### Log alert rules
-You can monitor multiple instances' values with one rule using dimensions. You would use dimensions if, for example, you want to monitor CPU usage on multiple instances running your web site or app for CPU usage over 80%.
+If you set the target resource of a log alert rule to a specific machine, queries are limited to data associated with that machine, giving you individual alerts for it. This approach requires a separate alert rule for each machine.
-To create resource-centric alerts at scale for a subscription or resource group, you can **Split by dimensions**. When you want to monitor the same condition on multiple Azure resources, splitting by dimensions splits the alerts into separate alerts by grouping unique combinations using numerical or string columns. Splitting on Azure resource ID column makes the specified resource into the alert target.
+If you set the target resource of a log alert rule to a Log Analytics workspace, you have access to all data in that workspace, which allows you to alert on data from all machines in the workspace with a single rule. You can then use dimensions to create a separate alert for each machine.
-You may also decide not to split when you want a condition on multiple resources in the scope, for example, if you want to alert if at least five machines in the resource group scope have CPU usage over 80%.
+For example, you may want to alert when an error event is created in the Windows event log by any machine. You would first need to create a data collection rule as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) to send these events to the `Event` table in the Log Analytics workspace. You could then create an alert rule that queries this table using the workspace as the target resource and the condition shown below.
+
+The query will return a record for any error messages on any machine. Use the **Split by dimensions** option and specify **_ResourceId** to instruct the rule to create an alert for each machine if multiple machines are returned in the results.
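A sketch of such a rule's query follows. The `EventLevelName` filter and projected columns are illustrative; match them to the events your data collection rule sends to the `Event` table.

```kusto
// Return one row per error event; the rule splits alerts by _ResourceId.
Event
| where EventLevelName == "Error"
| project TimeGenerated, Computer, _ResourceId, EventLog, EventID, RenderedDescription
```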
:::image type="content" source="media/monitor-virtual-machines/log-alert-rule.png" alt-text="Screenshot of new log alert rule with split by dimensions.":::
-You might want to see a list of the alerts with the affected computers. You can use a custom workbook that uses a custom [Resource Graph](../../governance/resource-graph/overview.md) to provide this view. Use the following query to display alerts, and use the data source **Azure Resource Graph** in the workbook.
+#### Dimensions
+
+Depending on the information you would like to include in the alert, you might need to split using different dimensions. In this case, make sure the necessary dimensions are projected in the query by using the [project](/azure/data-explorer/kusto/query/projectoperator) or [extend](/azure/data-explorer/kusto/query/extendoperator) operator. Set the **Resource ID column** field to **Don't split** and include all the meaningful dimensions in the list. Make sure that **Include all future values** is selected so that any value returned from the query is included.
++
+#### Dynamic thresholds
+An additional benefit of using log alert rules is the ability to include complex logic in the query for determining the threshold value. The threshold can be hardcoded, applied to all resources, or calculated dynamically based on some field or calculated value. This allows the threshold to be applied only to resources that meet specific conditions. For example, you might create an alert based on available memory but only for machines with a particular amount of total memory.
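A sketch of that last example follows, assuming VM insights data in the `InsightsMetrics` table. The `vm.azm.ms/memorySizeMB` tag and the 8,192-MB cutoff are illustrative assumptions; verify the tag name against your own data.

```kusto
// Average available memory, evaluated only for machines with at least 8 GB of total memory.
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Memory" and Name == "AvailableMB"
| extend TotalMemoryMB = toreal(todynamic(Tags)["vm.azm.ms/memorySizeMB"])
| where TotalMemoryMB >= 8192
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
```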
-```kusto
-alertsmanagementresources
-| extend dimension = properties.context.context.condition.allOf
-| mv-expand dimension
-| extend dimension = dimension.dimensions
-| mv-expand dimension
-| extend Computer = dimension.value
-| extend AlertStatus = properties.essentials.alertState
-| summarize count() by Alert=name, tostring(AlertStatus), tostring(Computer)
-| project Alert, AlertStatus, Computer
-```
## Common alert rules
-The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log metric measurement alerts are provided for each. For guidance on which type of alert to use, see [Choose the alert type](#choose-the-alert-type).
-If you're unfamiliar with the process for creating alert rules in Azure Monitor, see the [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md).
+The following section lists common alert rules for virtual machines in Azure Monitor. Details for metric alerts and log alerts are provided for each. For guidance on which type of alert to use, see [Alert types](#alert-types). If you're unfamiliar with the process for creating alert rules in Azure Monitor, see [instructions to create a new alert rule](../alerts/alerts-create-new-alert-rule.md).
+
+> [!NOTE]
+> The details for the log alerts provided below use data collected with [VM Insights](vminsights-overview.md), which provides a set of common performance counters for the guest operating system. The counter names are independent of the operating system type.
### Machine unavailable
-The most basic requirement is to send an alert when a machine is unavailable. It could be stopped, the guest operating system could be unresponsive, or the agent could be unresponsive. There are various ways to configure this alerting, but the most common is to use the heartbeat sent from the Log Analytics agent.
+One of the most common monitoring requirements for a virtual machine is to create an alert if it stops running. The best method for this is to create a metric alert rule in Azure Monitor using the VM availability metric, which is currently in public preview. See [Create availability alert rule for Azure virtual machine](tutorial-monitor-vm-alert-availability.md) for a complete walk-through on this metric.
-#### Log query alert rules
-Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
+As described in [Scaling alert rules](#scaling-alert-rules), create an availability alert rule using a subscription or resource group as the target resource so that the rule applies to multiple virtual machines, including new machines that you create after the alert rule.
+
+### Agent heartbeat
+The agent heartbeat is slightly different from the machine unavailable alert because it relies on the Azure Monitor agent to send a heartbeat. This can alert you when the machine is running but the agent is unresponsive.
+
+#### Metric alert rules
+
+A metric called *Heartbeat* is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**.
++
+#### Log alert rules
+
+Log query alerts use the [Heartbeat table](/azure/azure-monitor/reference/tables/heartbeat), which should have a heartbeat record every minute from each machine.
Use a rule with the following query.

```kusto
Heartbeat
| extend Duration = datetime_diff('minute',now(),TimeGenerated)
| summarize AggregatedValue = min(Duration) by Computer, bin(TimeGenerated,5m), _ResourceId
```
-#### Metric alert rules
-A metric called *Heartbeat* is included in each Log Analytics workspace. Each virtual machine connected to that workspace sends a heartbeat metric value each minute. Because the computer is a dimension on the metric, you can fire an alert when any computer fails to send a heartbeat. Set the **Aggregation type** to **Count** and the **Threshold** value to match the **Evaluation granularity**.
### CPU alerts

#### Metric alert rules

| Target | Metric |
|:|:|
-| Host | Percentage CPU |
+| Host | Percentage CPU (included in recommended alerts) |
| Windows guest | \Processor Information(_Total)\% Processor Time |
| Linux guest | cpu/usage_active |
#### Log alert rules

```kusto
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
```
### Memory alerts

#### Metric alert rules

| Target | Metric |
|:|:|
+| Host | Available Memory Bytes (preview) (included in recommended alerts) |
| Windows guest | \Memory\% Committed Bytes in Use<br>\Memory\Available Bytes |
| Linux guest | mem/available<br>mem/available_percent |
```kusto
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "LogicalDisk" and Name == "FreeSpacePercentage"
| extend Disk = tostring(todynamic(Tags)["vm.azm.ms/mountId"])
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, Disk
```
-## Network alerts
+### Network alerts
#### Metric alert rules

| Target | Metric |
|:|:|
+| Host | Network In Total, Network Out Total (included in recommended alerts) |
| Windows guest | \Network Interface\Bytes Sent/sec<br>\Network Interface\Bytes Received/sec |
| Linux guest | net/bytes_sent<br>net/bytes_recv |
-### Log query alert rules
+#### Log query alert rules
**Network interfaces bytes received - all interfaces**
```kusto
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Network" and Name == "ReadBytesPerSecond"
| extend NetworkInterface = tostring(todynamic(Tags)["vm.azm.ms/networkDeviceId"])
| summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId, NetworkInterface
```
-## Example log query alert
-Here's a walk-through of creating a log alert for when the CPU of a virtual machine exceeds 80 percent. The data you need is in the [InsightsMetrics table](/azure/azure-monitor/reference/tables/insightsmetrics). The following query returns the records that need to be evaluated for the alert. Each type of alert rule uses a variant of this query.
-### Create the log alert rule
- 1. In the portal, select the relevant resource. We recommend scaling resources by using subscriptions or resource groups.
- 1. In the Resource menu, select **Logs**.
- 1. Use this query to monitor for virtual machines CPU usage:
-
+### Windows and Linux events
+The following sample creates an alert when a specific Windows event is created. It uses a metric measurement alert rule to create a separate alert for each computer.
+
+- **Create an alert rule on a specific Windows event.**
+
+ This example shows an event in the Application log. Specify a threshold of 0 and consecutive breaches greater than 0.
+ ```kusto
- InsightsMetrics
- | where Origin == "vm.azm.ms"
- | where Namespace == "Processor" and Name == "UtilizationPercentage"
- | summarize AggregatedValue = avg(Val) by bin(TimeGenerated, 15m), Computer, _ResourceId
+ Event
+ | where EventLog == "Application"
+ | where EventID == 123
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
```
- 1. Run the query to make sure you get the results you were expecting.
- 1. From the top command bar, Select **+ New alert rule** to create a rule using the current query.
- 1. The **Create an alert rule** page opens with your query. We try to detect summarized data from the query results automatically. If detected, the appropriate values are automatically selected.
- :::image type="content" source="media/monitor-virtual-machines/log-alert-rule-query.png" alt-text="Screenshot of new log alert rule query.":::
- 1. In the **Measurement** section, select the values for these fields if they are not already automatically selected.
-
- |Field |Description |Value for this scenario |
- ||||
- |Measure| The number of table rows or a numeric column to aggregate |AggregatedValue|
- |Aggregation type|The type of aggregation to apply to the data points in aggregation granularity|Average|
- |Aggregation granularity|The interval over which data points are grouped by the aggregation type|15 minutes|
-
- :::image type="content" source="media/monitor-virtual-machines/log-alert-rule-measurement.png" alt-text="Screenshot of new log alert rule measurement. ":::
- 1. In the **Split by dimensions** section, select the values for these fields if they are not already automatically selected.
-
- |Field|Description |Value for this scenario |
- ||||
- |Resource ID column|An Azure Resource ID column that will split the alerts and set the fired alert target scope.|_Resourceid|
- |Dimension name|Dimensions monitor specific time series and provide context to the fired alert. Dimensions can be either number or string columns. If you select more than one dimension value, each time series that results from the combination will trigger its own alert and will be charged separately. The displayed dimension values are based on data from the last 48 hours. Custom dimension values can be added by clicking 'Add custom value'.|Computer|
- |Operator|The operator to compare the dimension value|=|
- |Dimension value| The list of dimension column values |All current and future values|
+
+- **Create an alert rule on Syslog events with a particular severity.**
+
+ The following example shows error authorization events. Specify a threshold of 0 and consecutive breaches greater than 0.
+
+ ```kusto
+ Syslog
+ | where Facility == "auth"
+ | where SeverityLevel == "err"
+ | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+ ```
+
+### Custom performance counters
+
+- **Create an alert on the maximum value of a counter.**
- :::image type="content" source="media/monitor-virtual-machines/log-alert-rule-dimensions.png" alt-text="Screenshot of new log alert rule with dimensions. ":::
- 1. In the **Alert Logic** section, select the values for these fields if they are not already automatically selected.
-
- |Field |Description |Value for this scenario |
- ||||
- |Operator |The operator to compare the metric value against the threshold|Greater than|
- |Threshold value| The value that the result is measured against.|80|
- |Frequency of evaluation|How often the alert rule should run. A frequency smaller than the aggregation granularity results in a sliding window evaluation.|15 minutes|
- 1. (Optional) In the **Advanced options** section, set the **Number of violations to trigger alert**.
- :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-preview-advanced-options.png" alt-text="Screenshot of alerts rule preview advanced options.":::
-
- 1. The **Preview** chart shows query evaluations results over time. You can change the chart period or select different time series that resulted from unique alert splitting by dimensions.
- :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-create-alert-rule-preview.png" alt-text="Screenshot of alerts rule preview.":::
-
- 1. From this point on, you can select the **Review + create** button at any time.
- 1. In the **Actions** tab, select or create the required [action groups](../alerts/action-groups.md).
- :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot of alerts rule preview actions tab.":::
-
- 1. In the **Details** tab, define the **Project details** and the **Alert rule details**.
- 1. (Optional) In the **Advanced options** section, you can set several options, including whether to **Enable upon creation**, or to **mute actions** for a period after the alert rule fires.
- :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-log-rule-details-tab.png" alt-text="Screenshot of alerts rule preview details tab.":::
- > [!NOTE]
- > If you or your administrator assigned the Azure Policy **Azure Log Search Alerts over Log Analytics workspaces should use customer-managed keys**, you must select **Check workspace linked storage** option in **Advanced options**, or the rule creation will fail as it will not meet the policy requirements.
-
-1. In the **Tags** tab, set any required tags on the alert rule resource.
- :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-tags-tab.png" alt-text="Screenshot of alerts rule preview tags tab.":::
-
-1. In the **Review + create** tab, a validation will run and inform you of any issues.
-1. When validation passes and you have reviewed the settings, click the **Create** button.
- :::image type="content" source="../alerts/media/alerts-create-new-alert-rule/alerts-rule-review-create.png" alt-text="Screenshot of alerts rule preview review and create tab.":::
+ ```kusto
+ Perf
+ | where CounterName == "My Counter"
+ | summarize AggregatedValue = max(CounterValue) by Computer
+ ```
+
+- **Create an alert on the average value of a counter.**
+
+ ```kusto
+ Perf
+ | where CounterName == "My Counter"
+ | summarize AggregatedValue = avg(CounterValue) by Computer
+ ```
## Next steps
-* [Monitor workloads running on virtual machines.](monitor-virtual-machine-workloads.md)
* [Analyze monitoring data collected for virtual machines.](monitor-virtual-machine-analyze.md)
azure-monitor Monitor Virtual Machine Analyze https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-analyze.md
Previously updated : 06/21/2021 Last updated : 01/10/2023
# Monitor virtual machines with Azure Monitor: Analyze monitoring data
-This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to analyze monitoring data for your virtual machines after you've completed their configuration.
+This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to analyze monitoring data for your virtual machines after you've completed their configuration.
> [!NOTE]
> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md) or [Tutorial: Collect guest logs and metrics from Azure virtual machine](tutorial-monitor-vm-guest.md).
-After you've enabled VM insights on your virtual machines, data will be available for analysis. This article describes the different features of Azure Monitor that you can use to analyze the health and performance of your virtual machines. Several of these features provide a different experience depending on whether you're analyzing a single machine or multiple. Each experience is described here with any unique behavior of each feature depending on which experience is being used.
+After you've [configured data collection](monitor-virtual-machine-data-collection.md) for your virtual machines, data will be available for analysis. This article describes the different features of Azure Monitor that you can use to analyze the health and performance of your virtual machines. Several of these features provide a different experience depending on whether you're analyzing a single machine or multiple. Each experience is described here with any unique behavior of each feature depending on which experience is being used.
-> [!NOTE]
-> This article includes guidance on analyzing data that's collected by Azure Monitor and VM insights. For data that you configure to monitor workloads running on virtual machines, see [Monitor workloads](monitor-virtual-machine-workloads.md).
## Single machine experience
Access the single machine analysis experience from the **Monitoring** section of the menu in the Azure portal for each Azure virtual machine and Azure Arc-enabled server. These options either limit the data that you're viewing to that machine or at least set an initial filter for it. In this way, you can focus on a particular machine, view its current performance and its trending over time, and help to identify any issues it might be experiencing.

:::image type="content" source="media/monitor-virtual-machines/vm-menu.png" alt-text="Screenshot that shows analyzing a VM in the Azure portal.":::

-- **Overview page**: Select the **Monitoring** tab to display alerts, [platform metrics](../essentials/data-platform-metrics.md), and other monitoring information for the virtual machine host. You can see the number of active alerts on the tab. In the **Monitoring** tab, you get a quick view of:
- - **Alerts:** the alerts fired in the last 24 hours, with some important statistics about those alerts. If you do not have any alerts set up for this VM, there is a link to help you quickly create new alerts for your VM.
- - **Key metrics:** the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md) where you can perform different aggregations, and add more counters for analysis.
-- **Activity log**: See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started.
-- **Insights**: Open [VM insights](../vm/vminsights-overview.md) with the map for the current virtual machine selected. The map shows you running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).
- Select the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. For details on how to use the Map view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).
-- **Alerts**: View [alerts](../alerts/alerts-overview.md) for the current virtual machine. These alerts only use the machine as the target resource, so there might be other alerts associated with it. You might need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. For details, see [Monitor virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md).
-- **Metrics**: Open metrics explorer with the scope set to the machine. This option is the same as selecting one of the performance charts from the **Overview** page except that the metric isn't already added.
-- **Diagnostic settings**: Enable and configure the [diagnostics extension](../agents/diagnostics-extension-overview.md) for the current virtual machine. This option is different than the **Diagnostic settings** option for other Azure resources. Only enable the diagnostic extension if you need to send data to Azure Event Hubs or Azure Storage.
-- **Advisor recommendations**: See recommendations for the current virtual machine from [Azure Advisor](../../advisor/index.yml).
-- **Logs**: Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the current virtual machine. You can select from a variety of existing queries to drill into log and performance data for only this machine.
-- **Connection monitor**: Open [Network Watcher Connection Monitor](../../network-watcher/connection-monitor-overview.md) to monitor connections between the current virtual machine and other virtual machines.
-- **Workbooks**: Open the workbook gallery with the VM insights workbooks for single machines. For a list of the VM insights workbooks designed for individual machines, see [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks).
+| Option | Description |
+|:|:|
+| Overview page | Select the **Monitoring** tab to display alerts, [platform metrics](../essentials/data-platform-metrics.md), and other monitoring information for the virtual machine host. You can see the number of active alerts on the tab. In the **Monitoring** tab, you get a quick view of:<br><br>**Alerts:** the alerts fired in the last 24 hours, with some important statistics about those alerts. If you do not have any alerts set up for this VM, there is a link to help you quickly create new alerts for your VM.<br><br>**Key metrics:** the trend over different time periods for important metrics, such as CPU, network, and disk. Because these are host metrics though, counters from the guest operating system such as memory aren't included. Select a graph to work with this data in [metrics explorer](../essentials/metrics-getting-started.md) where you can perform different aggregations, and add more counters for analysis. |
+| Activity log | See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for the current virtual machine. Use this log to view the recent activity of the machine, such as any configuration changes and when it was stopped and started. |
+| Insights | Displays VM insights views if the VM is enabled for [VM insights](../vm/vminsights-overview.md).<br><br>Select the **Performance** tab to view trends of critical performance counters over different periods of time. When you open VM insights from the virtual machine menu, you also have a table with detailed metrics for each disk. For details on how to use the Performance view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).<br><br>If *processes and dependencies* is enabled for the VM, select the **Map** tab to view the running processes on the machine, dependencies on other machines, and external processes. For details on how to use the Map view for a single machine, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-a-vm).<br><br>If the VM is not enabled for VM insights, it offers the option to enable VM insights. |
+| Alerts | View [alerts](../alerts/alerts-overview.md) for the current virtual machine. These alerts only use the machine as the target resource, so there might be other alerts associated with it. You might need to use the **Alerts** option in the Azure Monitor menu to view alerts for all resources. For details, see [Monitor virtual machines with Azure Monitor - Alerts](monitor-virtual-machine-alerts.md). |
+| Metrics | Open metrics explorer with the scope set to the machine. This option is the same as selecting one of the performance charts from the **Overview** page except that the metric isn't already added. |
+| Diagnostic settings | Enable and configure the [diagnostics extension](../agents/diagnostics-extension-overview.md) for the current virtual machine. This option is different than the **Diagnostic settings** option for other Azure resources. This is a [legacy agent](monitor-virtual-machine-agent.md#legacy-agents) that has been replaced by the [Azure Monitor agent](monitor-virtual-machine-agent.md). |
+| Advisor recommendations | See recommendations for the current virtual machine from [Azure Advisor](../../advisor/index.yml). |
+| Logs | Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the current virtual machine. You can select from a variety of existing queries to drill into log and performance data for only this machine. |
+| Connection monitor | Open [Network Watcher Connection Monitor](../../network-watcher/connection-monitor-overview.md) to monitor connections between the current virtual machine and other virtual machines. |
+| Workbooks | Open the workbook gallery with the VM insights workbooks for single machines. For a list of the VM insights workbooks designed for individual machines, see [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks). |
## Multiple machine experience
-Access the multiple machine analysis experience from the **Monitor** menu in the Azure portal for each Azure virtual machine and Azure Arc-enabled server. These options provide access to all data so that you can select the virtual machines that you're interested in comparing.
+Access the multiple machine analysis experience from the **Monitor** menu in the Azure portal for each Azure virtual machine and Azure Arc-enabled server. This will include only VMs that are enabled for VM insights. These options provide access to all data so that you can select the virtual machines that you're interested in comparing.
:::image type="content" source="media/monitor-virtual-machines/monitor-menu.png" alt-text="Screenshot that shows analyzing multiple VMs in the Azure portal." lightbox="media/monitor-virtual-machines/monitor-menu.png"::: -- **Activity log**: See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of virtual machines or virtual machine scale sets to view events for all your machines.-- **Alerts**: View [alerts](../alerts/alerts-overview.md) for all resources, which includes alerts related to virtual machines but that are associated with the workspace. Create a filter for a **Resource Type** of virtual machines or virtual machine scale sets to view alerts for all your machines. -- **Metrics**: Open [metrics explorer](../essentials/metrics-getting-started.md) with no scope selected. This feature is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together.-- **Logs**: Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the workspace. You can select from a variety of existing queries to drill into log and performance data for all machines. Or you can create a custom query to perform additional analysis.-- **Workbooks**: Open the workbook gallery with the VM insights workbooks for multiple machines. For a list of the VM insights workbooks designed for multiple machines, see [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks). -- **Virtual Machines**: Open [VM insights](../vm/vminsights-overview.md) with the **Get Started** tab open. This action displays all machines in your Azure subscription and identifies which are being monitored. Use this view to onboard individual machines that aren't already being monitored.-
- Select the **Performance** tab to compare trends of critical performance counters for multiple machines over different periods of time. Select all machines in a subscription or resource group to include in the view. For details on how to use the Map view for a single machine, see [Chart performance with VM insights](vminsights-performance.md#view-performance-directly-from-an-azure-vm).
-
- Select the **Map** tab to view running processes on machines, dependencies between machines, and external processes. Select all machines in a subscription or resource group, or inspect the data for a single machine. For details on how to use the Map view for multiple machines, see [Use the Map feature of VM insights to understand application components](vminsights-maps.md#view-a-map-from-azure-monitor).
-
-## Compare Metrics and Logs
-For many features of Azure Monitor, you don't need to understand the different types of data it uses and where it's stored. You can use VM insights, for example, without any understanding of what data is being used to populate the Performance view, Map view, and workbooks. You just focus on the logic that you're analyzing. As you dig deeper, you'll need to understand the difference between [Metrics](../essentials/data-platform-metrics.md) and [Logs](../logs/data-platform-logs.md). Different features of Azure Monitor use different kinds of data. The type of alerting that you use for a particular scenario depends on having that data available in a particular location.
-
-This level of detail can be confusing if you're new to Azure Monitor. The following information helps you understand the differences between the types of data:
+| Option | Description |
+|:|:|
+| Activity log | See [activity log](../essentials/activity-log.md#view-the-activity-log) entries filtered for all resources. Create a filter for a **Resource Type** of virtual machines or Virtual Machine Scale Sets to view events for all your machines. |
+| Alerts | View [alerts](../alerts/alerts-overview.md) for all resources. This includes alerts related to all virtual machines in the workspace. Create a filter for a **Resource Type** of virtual machines or Virtual Machine Scale Sets to view alerts for all your machines. |
+| Metrics | Open [metrics explorer](../essentials/metrics-getting-started.md) with no scope selected. This feature is particularly useful when you want to compare trends across multiple machines. Select a subscription or a resource group to quickly add a group of machines to analyze together. |
+| Logs | Open [Log Analytics](../logs/log-analytics-overview.md) with the [scope](../logs/scope.md) set to the workspace. You can select from a variety of existing queries to drill into log and performance data for all machines. Or you can create a custom query to perform additional analysis. |
+| Workbooks | Open the workbook gallery with the VM insights workbooks for multiple machines. For a list of the VM insights workbooks designed for multiple machines, see [VM insights workbooks](vminsights-workbooks.md#vm-insights-workbooks). |
-- Any non-numeric data, such as events, is stored in Logs. Metrics can only include numeric data that's sampled at regular intervals.
-- Numeric data can be stored in both Metrics and Logs so that it can be analyzed in different ways and support different types of alerts.
-- Performance data from the guest operating system is sent to Logs by VM insights by using the Log Analytics agent.
-- Performance data from the guest operating system is sent to Metrics by the Azure Monitor agent.
-> [!NOTE]
-> The Azure Monitor agent sends data to both Metrics and Logs. In this scenario, it's only used for Metrics because the Log Analytics agent sends data to Logs as currently required for VM insights. When VM insights uses the Azure Monitor agent, this scenario will be updated to remove the Log Analytics agent.
-## Analyze data with VM insights
+## VM insights experience
VM insights includes multiple performance charts that help you quickly get a status of the operation of your monitored machines, their trending performance over time, and dependencies between machines and processes. It also offers a consolidated view of different aspects of any monitored machine, such as its properties and events collected in the Log Analytics workspace. The **Get Started** tab displays all machines in your Azure subscription and identifies which ones are being monitored. Use this view to quickly identify which machines aren't being monitored and to onboard individual machines that aren't already being monitored.
Use the **Map** view to see running processes on machines and their dependencies
:::image type="content" source="media/monitor-virtual-machines/vminsights-map.png" alt-text="Screenshot that shows VM insights map." lightbox="media/monitor-virtual-machines/vminsights-map.png"::: ++
+## Compare Metrics and Logs
+For many features of Azure Monitor, you don't need to understand the different types of data it uses and where it's stored. You can use VM insights, for example, without any understanding of what data is being used to populate the Performance view, Map view, and workbooks. You just focus on the logic that you're analyzing. As you dig deeper, you'll need to understand the difference between [Azure Monitor Metrics](../essentials/data-platform-metrics.md) and [Azure Monitor Logs](../logs/data-platform-logs.md). Different features of Azure Monitor use different kinds of data. The type of alerting that you use for a particular scenario depends on having that data available in a particular location.
+
+This level of detail can be confusing if you're new to Azure Monitor. The following information helps you understand the differences between the types of data:
+
+- Any non-numeric data, such as events, is stored in Logs. Metrics can only include numeric data that's sampled at regular intervals.
+- Numeric data can be stored in both Metrics and Logs so that it can be analyzed in different ways and support different types of alerts.
+- Performance data from the guest operating system is sent to either Metrics or Logs or both by the Azure Monitor agent.
+- Performance data from the guest operating system is sent to Logs by VM insights.
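For example, the same guest CPU counter that can appear in Metrics can also be queried from Logs. A sketch, assuming VM insights data in the `InsightsMetrics` table:

```kusto
// Average guest CPU utilization per computer from Logs.
InsightsMetrics
| where Origin == "vm.azm.ms"
| where Namespace == "Processor" and Name == "UtilizationPercentage"
| summarize AvgCpuPercentage = avg(Val) by Computer, bin(TimeGenerated, 5m)
```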
## Analyze metric data with metrics explorer
By using metrics explorer, you can plot charts, visually correlate trends, and investigate spikes and dips in metrics' values. For details on how to use this tool, see [Getting started with Azure Metrics Explorer](../essentials/metrics-getting-started.md).
-Three namespaces are used by virtual machines.
+The following namespaces are used by virtual machines.
| Namespace | Description | Requirement |
|:|:|:|
| Virtual Machine Host | Host metrics automatically collected for all Azure virtual machines. Detailed list of metrics at [Microsoft.Compute/virtualMachines](../essentials/metrics-supported.md#microsoftcomputevirtualmachines). | Collected automatically with no configuration required. |
-| Guest (classic) | Limited set of guest operating system and application performance data. Available in metrics explorer but not other Azure Monitor features, such as metric alerts. | [Diagnostic extension](../agents/diagnostics-extension-overview.md) installed. Data is read from Azure Storage. |
-| Virtual Machine Guest | Guest operating system and application performance data available to all Azure Monitor features using metrics. | [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) installed with a [Data Collection Rule](../essentials/data-collection-rule-overview.md). |
+| Virtual Machine Guest | Guest operating system and application performance data on Windows machines. | Azure Monitor agent installed with a [Data Collection Rule](monitor-virtual-machine-data-collection.md#collect-performance-counters). |
+| azure.vm.linux.guestmetrics | Guest operating system and application performance data on Linux machines. | Azure Monitor agent installed with a [Data Collection Rule](monitor-virtual-machine-data-collection.md#collect-performance-counters). |
## Analyze log data with Log Analytics
-By using Log Analytics, you can perform custom analysis of your log data. Use Log Analytics when you want to dig deeper into the data used to create the views in VM insights. You might want to analyze different logic and aggregations of that data, correlate security data collected by Microsoft Defender for Cloud and Microsoft Sentinel with your health and availability data, or work with data collected for your [workloads](monitor-virtual-machine-workloads.md).
+Use Log Analytics to perform custom analysis of your log data and when you want to dig deeper into the data used to create the views in workbooks and VM insights. You might want to analyze different logic and aggregations of that data or correlate security data collected by Microsoft Defender for Cloud and Microsoft Sentinel with your [health and availability data](monitor-virtual-machine-data-collection.md).
You don't necessarily need to understand how to write a log query to use Log Analytics. There are multiple prebuilt queries that you can select and either run without modification or use as a start to a custom query. Select **Queries** at the top of the Log Analytics screen, and view queries with a **Resource type** of **Virtual machines** or **Virtual machine scale sets**. For information on how to use these queries, see [Using queries in Azure Monitor Log Analytics](../logs/queries.md). For a tutorial on how to use Log Analytics to run queries and work with their results, see [Log Analytics tutorial](../logs/log-analytics-tutorial.md).

:::image type="content" source="media/monitor-virtual-machines/vm-queries.png" alt-text="Screenshot that shows virtual machine queries." lightbox="media/monitor-virtual-machines/vm-queries.png":::
-When you start Log Analytics from VM insights by using the properties pane in either the **Performance** or **Map** view, it lists the tables that have data for the selected computer. Select a table to open Log Analytics with a simple query that returns all records in that table for the selected computer. Work with these results or modify the query for more complex analysis. The [scope](../log/../logs/scope.md) set to the workspace means that you have access data for all computers using that workspace.
+When you start Log Analytics from the **Logs** menu for a machine, its [scope](../logs/scope.md) is set to that computer. Any queries will only return records associated with that computer. For a simple query that returns all records in a table, double-click a table in the left pane. Work with these results or modify the query for more complex analysis. To set the scope to all records in a workspace, change the scope or select **Logs** from the **Monitor** menu.
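For example, with the scope set to a single machine, a quick check like the following sketch returns its most recent heartbeat (the one-hour window is an arbitrary choice):

```kusto
// Most recent heartbeat for each computer in the current scope.
Heartbeat
| where TimeGenerated > ago(1h)
| summarize LastHeartbeat = max(TimeGenerated) by Computer
```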
:::image type="content" source="media/monitor-virtual-machines/table-query.png" alt-text="Screenshot that shows a Table query." lightbox="media/monitor-virtual-machines/table-query.png"::: ++ ## Visualize data with workbooks [Workbooks](../visualize/workbooks-overview.MD) provide interactive reports in the Azure portal and combine different kinds of data into a single view. Workbooks combine text,ΓÇ»[log queries](/azure/data-explorer/kusto/query/), metrics, and parameters into rich interactive reports. Workbooks are editable by any other team members who have access to the same Azure resources.
VM insights includes the following workbooks. You can use these workbooks or use
| Security and Audit | Provides an analysis of your TCP/IP traffic that reports on overall connections, malicious connections, and where the IP endpoints reside globally. To enable all features, you'll need to enable Security Detection. | | TCP Traffic | Provides a ranked report for your monitored machines and their sent, received, and total network traffic in a grid and displayed as a trend line. | | Traffic Comparison | Compares network traffic trends for a single machine or a group of machines. |
-| Log Analytics agent | Analyzes the health of your agents, including the number of agents connecting to a workspace that are unhealthy, and the effect of the agent on the performance of the machine. This workbook isn't available from VM insights like the other workbooks. On the Azure Monitor menu, go to **Workbooks** and select **Public Templates**. |
+| AMA Migration Helper | Helps you discover what to migrate and track progress as you move from the Log Analytics agent to the Azure Monitor agent. This workbook isn't available from VM insights like the other workbooks. On the Azure Monitor menu, go to **Workbooks** and select **Public Templates**. See [Tools for migrating from Log Analytics Agent to Azure Monitor Agent](../agents/azure-monitor-agent-migration-tools.md#using-ama-migration-helper). |
For instructions on how to create your own custom workbooks, see [Create interactive reports VM insights with workbooks](vminsights-workbooks.md).

:::image type="content" source="media/monitor-virtual-machines/workbook-example.png" alt-text="Screenshot that shows virtual machine workbooks." lightbox="media/monitor-virtual-machines/workbook-example.png":::
+## VM availability information in Azure Resource Graph
+[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service that allows you to use the same KQL query language used in log queries to query your Azure resources at scale, with complex filtering, grouping, and sorting by resource properties. You can use [VM health annotations](../../service-health/resource-health-vm-annotation.md) in Azure Resource Graph (ARG) for detailed failure attribution and downtime analysis, including the following:
+
+- Query the latest snapshot of VM availability together across all your Azure subscriptions.
+- Assess the impact to business SLAs and trigger decisive mitigation actions in response to disruptions, based on the type of failure signature.
+- Set up custom dashboards to supervise the comprehensive health of applications by [joining](../../governance/resource-graph/concepts/work-with-data.md) VM availability information with additional [resource metadata](../../governance/resource-graph/samples/samples-by-table.md?tabs=azure-cli) in Resource Graph.
+- Track relevant changes in VM availability across a rolling 14 days window, by using the [change tracking](../../governance/resource-graph/how-to/get-resource-changes.md) mechanism for conducting detailed investigations.
+
+To get started with Resource Graph, open **Resource Graph Explorer** in the Azure portal. Select the **Table** tab and have a look at the [microsoft.resourcehealth/availabilitystatuses](#microsoftresourcehealthavailabilitystatuses) and [microsoft.resourcehealth/resourceannotations](#microsoftresourcehealthresourceannotations) tables, which are described below. Select **healthresources** to create a simple query, and then select **Run** to return the records.
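As a sketch, a Resource Graph query like the following returns the latest availability state for each VM. The property names follow the tables below, but treat the projection as illustrative:

```kusto
// Latest availability state per VM (run in Resource Graph Explorer).
healthresources
| where type == "microsoft.resourcehealth/availabilitystatuses"
| extend availabilityState = tostring(properties.availabilityState), occurredTime = todatetime(properties.occurredTime)
| project id, availabilityState, occurredTime
```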
++
+To view the details for a record, scroll to the right and select **See details**.
++
+There will be two types of events populated in the HealthResources table:
+
+### microsoft.resourcehealth/availabilitystatuses
+This event denotes the latest availability status of a VM, based on the [health checks](../../service-health/resource-health-checks-resource-types.md#microsoftcomputevirtualmachines) performed by the underlying Azure platform. The [availability states](../../service-health/resource-health-overview.md#health-status) currently emitted for VMs are as follows:
+
+- **Available**: The VM is up and running as expected.
+- **Unavailable**: A disruption to the normal functioning of the VM has been detected.
+- **Unknown**: The platform is unable to accurately detect the health of the VM. Check back in a few minutes.
+
+The availability state is in the `properties` field of the record, which includes the following properties:
+
+| Field | Description |
+|:|:|
+| targetResourceType | Type of resource for which health data is flowing |
+| targetResourceId | Resource ID |
+| occurredTime | Timestamp when the latest availability state is emitted by the platform |
+| previousAvailabilityState | Previous availability state of the VM |
+| availabilityState | Current availability state of the VM |
+
+A sample `properties` value looks similar to the following:
+
+```json
+{
+ "targetResourceType": "Microsoft.Compute/virtualMachines",
+ "previousAvailabilityState": "Available",
+"targetResourceId": "/subscriptions/<subscriptionId>/resourceGroups/<ResourceGroupName>/providers/Microsoft.Compute/virtualMachines/<VMName>",
+ "occurredTime": "2022-10-11T11:13:59.9570000Z",
+ "availabilityState": "Unavailable"
+}
+
+```
+
+### microsoft.resourcehealth/resourceannotations
+This event contextualizes any changes to VM availability by detailing necessary failure attributes to help you investigate and mitigate the disruption as needed. The full list of VM health annotations is available at [Resource Health virtual machine health annotations](../../service-health/resource-health-vm-annotation.md).
+
+These annotations can be broadly classified into the following:
+
+- **Downtime Annotations**: Emitted when the platform detects VM availability transitioning to Unavailable. Examples include host crashes or reboot operations.
+- **Informational Annotations**: Emitted during control plane activities with no impact to VM availability. Examples include VM allocation, stop, delete, start. Usually, no additional customer action is required in response.
+- **Degraded Annotations**: Emitted when VM availability is detected to be at risk. Examples include when failure prediction models predict a degraded hardware component that can cause the VM to reboot at any given time. You should redeploy by the deadline specified in the annotation message to avoid any unanticipated loss of data or downtime.
+
+| Field | Description |
+|:|:|
+| targetResourceType | Type of resource for which health data is flowing |
+| targetResourceId | Resource ID |
+| occurredTime | Timestamp when the latest availability state is emitted by the platform |
+| annotationName | Name of the Annotation emitted |
+| reason | Brief overview of the availability impact observed by the customer |
+| category | Denotes whether the platform activity triggering the annotation was either planned maintenance or unplanned repair. This field is not applicable to customer/VM-initiated events.<br><br>Possible values: Planned \| Unplanned \| Not Applicable \| Null |
+| context | Denotes whether the activity triggering the annotation was due to an authorized user or process (customer initiated), due to the Azure platform (platform initiated), or due to activity in the guest OS that has resulted in availability impact (VM initiated).<br><br>Possible values: Platform-Initiated \| User-initiated \| VM-initiated \| Not Applicable \| Null |
+| summary | Statement detailing the cause for annotation emission, along with remediation steps that can be taken by users |
+
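A similar sketch queries the annotations directly; again, treat the projected property names as illustrative:

```kusto
// Recent VM health annotations with their failure attribution fields.
healthresources
| where type == "microsoft.resourcehealth/resourceannotations"
| extend annotationName = tostring(properties.annotationName), reason = tostring(properties.reason), category = tostring(properties.category)
| project id, annotationName, reason, category
```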
+See [Azure Resource Graph sample queries by table](../../governance/resource-graph/samples/samples-by-table.md?tabs=azure-cli#healthresources) for sample queries using this data.
+ ## Next steps * [Create alerts from collected data](monitor-virtual-machine-alerts.md)
-* [Monitor workloads running on virtual machines](monitor-virtual-machine-workloads.md)
+
azure-monitor Monitor Virtual Machine Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-configure.md
- Title: 'Monitor virtual machines with Azure Monitor: Configure monitoring'
-description: Learn how to configure virtual machines for monitoring in Azure Monitor. Monitor virtual machines and their workloads with an Azure Monitor scenario.
---- Previously updated : 06/21/2021----
-# Monitor virtual machines with Azure Monitor: Configure monitoring
-This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to configure monitoring of your Azure and hybrid virtual machines in Azure Monitor.
-
-> [!NOTE]
-> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
-
-This article discusses the most common Azure Monitor features to monitor the virtual machine host and its guest operating system. Depending on your particular environment and business requirements, you might not want to implement all features enabled by this configuration. Each section describes what features are enabled by that configuration and whether it potentially results in additional cost. This information will help you assess whether to perform each step of the configuration. For detailed pricing information, see [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/).
-
-A general description of each feature enabled by this configuration is provided in the [overview for the scenario](monitor-virtual-machine.md). That article also includes links to content that provides a detailed description of each feature to further help you assess your requirements.
-
-> [!NOTE]
-> The features enabled by the configuration support monitoring workloads running on your virtual machine. But depending on your particular workloads, you'll typically require additional configuration. For details on this additional configuration, see [Workload monitoring](monitor-virtual-machine-workloads.md).
-
-## Configuration overview
-The following table lists the steps that must be performed for this configuration. Each one links to the section with the detailed description of that configuration step.
-
-| Step | Description |
-|:|:|
-| [No configuration](#no-configuration) | Activity log and platform metrics for the Azure virtual machine hosts are automatically collected with no configuration. |
-| [Create and prepare Log Analytics workspace](#create-and-prepare-a-log-analytics-workspace) | Create a Log Analytics workspace and configure it for VM insights. Depending on your particular requirements, you might configure multiple workspaces. |
-| [Send Activity log to Log Analytics workspace](#send-an-activity-log-to-a-log-analytics-workspace) | Send the Activity log to the workspace to analyze it with other log data. |
-| [Prepare hybrid machines](#prepare-hybrid-machines) | Hybrid machines either need the server agents enabled by Azure Arc installed so they can be managed like Azure virtual machines or must have their agents installed manually. |
-| [Enable VM insights on machines](#enable-vm-insights-on-machines) | Onboard machines to VM insights, which deploys required agents and begins collecting data from the guest operating system. |
-| [Send guest performance data to Metrics](#send-guest-performance-data-to-metrics) |Install the Azure Monitor agent to send performance data to Azure Monitor Metrics. |
-
-## No configuration
-Azure Monitor provides a basic level of monitoring for Azure virtual machines at no cost and with no configuration. Platform metrics for Azure virtual machines include important metrics such as CPU, network, and disk utilization. They can be viewed on the [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience) for the machine in the Azure portal. The Activity log is also collected automatically and includes the recent activity of the machine, such as any configuration changes and when it was stopped and started.
-
-## Create and prepare a Log Analytics workspace
-You require at least one Log Analytics workspace to support VM insights and to collect telemetry from the Log Analytics agent. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data. For more information, see [Azure Monitor Logs pricing details](../logs/cost-logs.md).
-
-Many environments use a single workspace for all their virtual machines and other Azure resources they monitor. You can even share a workspace used by [Microsoft Defender for Cloud and Microsoft Sentinel](monitor-virtual-machine-security.md), although many customers choose to segregate their availability and performance telemetry from security data. If you're getting started with Azure Monitor, start with a single workspace and consider creating more workspaces as your requirements evolve.
-
-For complete details on logic that you should consider for designing a workspace configuration, see [Design a Log Analytics workspace configuration](../logs/workspace-design.md).
-
-### Multihoming agents
-Multihoming refers to a virtual machine that connects to multiple workspaces. Typically, there's little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces most likely creates duplicate data in each workspace, which increases your overall cost. You can combine data from multiple workspaces by using [cross-workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md).
-
-One reason you might consider multihoming, though, is if you have an environment with Microsoft Defender for Cloud or Microsoft Sentinel stored in a workspace that's separate from Azure Monitor. A machine being monitored by each service needs to send data to each workspace. The Windows agent supports this scenario because it can send to up to four workspaces. The Linux agent can currently send to only a single workspace. If you want to have Azure Monitor and Microsoft Defender for Cloud or Microsoft Sentinel monitor a common set of Linux machines, the services need to share the same workspace.
-
-Another reason you might multihome your agents is if you're using a [hybrid monitoring model](/azure/cloud-adoption-framework/manage/monitor/cloud-models-monitor-overview#hybrid-cloud-monitoring). In this model, you use Azure Monitor and Operations Manager together to monitor the same machines. The Log Analytics agent and the Microsoft Management Agent for Operations Manager are the same agent. Sometimes they're referred to with different names.
-
-### Workspace permissions
-The access mode of the workspace defines which users can access different sets of data. For details on how to define your access mode and configure permissions, see [Manage access to log data and workspaces in Azure Monitor](../logs/manage-access.md). If you're just getting started with Azure Monitor, consider accepting the defaults when you create your workspace and configure its permissions later.
-
-### Prepare the workspace for VM insights
-Prepare each workspace for VM insights before you enable monitoring for any virtual machines. This step installs required solutions that support data collection from the Log Analytics agent. You complete this configuration only once for each workspace. For details on this configuration by using the Azure portal in addition to other methods, see [Enable VM insights overview](vminsights-enable-overview.md).
-
-## Send an Activity log to a Log Analytics workspace
-You can view the platform metrics and Activity log collected for each virtual machine host in the Azure portal. Send this data into the same Log Analytics workspace as VM insights to analyze it with the other monitoring data collected for the virtual machine. You might have already done this task when you configured monitoring for other Azure resources because there's a single Activity log for all resources in an Azure subscription.
-
-There's no cost for ingestion or retention of Activity log data. For details on how to create a diagnostic setting to send the Activity log to your Log Analytics workspace, see [Create diagnostic settings](../essentials/diagnostic-settings.md).
-
-### Network requirements
-The Log Analytics agent for both Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. The Dependency agent uses the Log Analytics agent for all communication, so it doesn't require any other ports. For details on how to configure your firewall and proxy, see [Network requirements](../agents/log-analytics-agent.md#network-requirements).
--
-### Gateway
-With the Log Analytics gateway, you can channel communications from your on-premises machines through a single gateway. You can't use the Azure Arc-enabled server agents with the Log Analytics gateway though. If your security policy requires a gateway, you'll need to manually install the agents for your on-premises machines. For details on how to configure and use the Log Analytics gateway, see [Log Analytics gateway](../agents/gateway.md).
-
-### Azure Private Link
-By using Azure Private Link, you can create a private endpoint for your Log Analytics workspace. After it's configured, any connections to the workspace must be made through this private endpoint. Private Link works by using DNS overrides, so there's no configuration requirement on individual agents. For details on Private Link, see [Use Azure Private Link to securely connect networks to Azure Monitor](../logs/private-link-security.md).
-
-## Prepare hybrid machines
-A hybrid machine is any machine not running in Azure. It's a virtual machine running in another cloud or with a hosting provider, or a virtual or physical machine running on-premises in your datacenter. Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) on hybrid machines so you can manage them similarly to your Azure virtual machines. You can use VM insights in Azure Monitor to use the same process to enable monitoring for Azure Arc-enabled servers as you do for Azure virtual machines. For a complete guide on preparing your hybrid machines for Azure, see [Plan and deploy Azure Arc-enabled servers](../../azure-arc/servers/plan-at-scale-deployment.md). This task includes enabling individual machines and using [Azure Policy](../../governance/policy/overview.md) to enable your entire hybrid environment at scale.
-
-There's no additional cost for Azure Arc-enabled servers, but there might be some cost for different options that you enable. For details, see [Azure Arc pricing](https://azure.microsoft.com/pricing/details/azure-arc/). There is a cost for the data collected in the workspace after the hybrid machines are enabled for VM insights.
-
-### Machines that can't use Azure Arc-enabled servers
-If you have any hybrid machines that match the following criteria, they won't be able to use Azure Arc-enabled servers:
-
-- The operating system of the machine isn't supported by the server agents enabled by Azure Arc. For more information, see [Supported operating systems](../../azure-arc/servers/prerequisites.md#supported-operating-systems).
-- Your security policy doesn't allow machines to connect directly to Azure. The Log Analytics agent can use the [Log Analytics gateway](../agents/gateway.md) whether or not Azure Arc-enabled servers are installed. The server agents enabled by Azure Arc must connect directly to Azure.
-
-You still can monitor these machines with Azure Monitor, but you need to manually install their agents. To manually install the Log Analytics agent and Dependency agent on those hybrid machines, see [Enable VM insights for a hybrid virtual machine](vminsights-enable-hybrid.md).
-
-> [!NOTE]
-> The private endpoint for Azure Arc-enabled servers is currently in public preview. The endpoint allows your hybrid machines to securely connect to Azure by using a private IP address from your virtual network.
-
-## Enable VM insights on machines
-After you enable VM insights on a machine, it installs the Log Analytics agent and Dependency agent, connects to a workspace, and starts collecting performance data. You can start using performance views and workbooks to analyze trends for a variety of guest operating system metrics, enable the map feature of VM insights for analyzing running processes and dependencies between machines, and collect the data required for you to create a variety of alert rules.
-
-You can enable VM insights on individual machines by using the same methods for Azure virtual machines and Azure Arc-enabled servers. These methods include onboarding individual machines with the Azure portal or Azure Resource Manager templates or enabling machines at scale by using Azure Policy. There's no direct cost for VM insights, but there is a cost for the ingestion and retention of data collected in the Log Analytics workspace.
-
-For different options to enable VM insights for your machines, see [Enable VM insights overview](vminsights-enable-overview.md). To create a policy that automatically enables VM insights on any new machines as they're created, see [Enable VM insights by using Azure Policy](vminsights-enable-policy.md).
--
-## Send guest performance data to Metrics
-The [Azure Monitor agent](../agents/azure-monitor-agent-overview.md) will replace the Log Analytics agent once it fully supports Azure Monitor, Microsoft Defender for Cloud, and Microsoft Sentinel. Until that time, it can be installed alongside the Log Analytics agent to send performance data from the guest operating system of machines to Azure Monitor Metrics. This configuration allows you to evaluate this data with metrics explorer and use metric alerts.
-
-The Azure Monitor agent requires at least one data collection rule (DCR) that defines which data it should collect and where it should send that data. A single DCR can be used by any machines in the same resource group.
-
-Create a single DCR for each resource group with machines to monitor by using the following data source:
-
-- **Data source type**: Performance Counters
-- **Destination**: Azure Monitor Metrics
-
-Be careful to not send data to Logs because it would be redundant with the data already being collected by the Log Analytics agent.
-
-You can install an Azure Monitor agent on individual machines by using the same methods for Azure virtual machines and Azure Arc-enabled servers. These methods include onboarding individual machines with the Azure portal or Resource Manager templates or enabling machines at scale by using Azure Policy. For hybrid machines that can't use Azure Arc-enabled servers, install the agent manually.
-
-To create a DCR and deploy the Azure Monitor agent to one or more agents by using the Azure portal, see [Create rule and association in the Azure portal](../agents/data-collection-rule-azure-monitor-agent.md). Other installation methods are described at [Install the Azure Monitor agent](../agents/azure-monitor-agent-manage.md). To create a policy that automatically deploys the agent and DCR to any new machines as they're created, see [Deploy Azure Monitor at scale using Azure Policy](../best-practices.md).
-
-## Next steps
-
-* [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
-* [Create alerts from collected data](monitor-virtual-machine-alerts.md)
-* [Monitor workloads running on virtual machines](monitor-virtual-machine-workloads.md)
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
+
+ Title: 'Monitor virtual machines with Azure Monitor: Collect data'
+description: Learn how to configure data collection for virtual machines for monitoring in Azure Monitor. Monitor virtual machines and their workloads with an Azure Monitor guide.
++++ Last updated : 01/05/2023++++
+# Monitor virtual machines with Azure Monitor: Collect data
+This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to configure data collection after you've deployed the Azure Monitor agent to your Azure and hybrid virtual machines.
+
+This article provides guidance on collecting the most common types of telemetry from virtual machines. The exact configuration that you choose will depend on the workloads that you run on your machines. Included in each section are sample log query alerts that you can use with that data.
+
+- See [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md) for more information about analyzing telemetry collected from your virtual machines.
+- See [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md) for more information about using telemetry collected from your virtual machines to create alerts in Azure Monitor.
+
+> [!NOTE]
+> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
++
+## Data collection rules
+Data collection from the Azure Monitor agent is defined by one or more [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) that are stored in your Azure subscription and associated with your virtual machines.
+
+For virtual machines, DCRs define the data to collect, such as events and performance counters, and specify the Log Analytics workspaces that the data should be sent to. The DCR can also use [transformations](../essentials/data-collection-transformations.md) to filter out unwanted data and to add calculated columns. A single machine can be associated with multiple DCRs, and a single DCR can be associated with multiple machines. DCRs are delivered to any machines they're associated with, where they're processed by the Azure Monitor agent.
+
+### View data collection rules
+You can view the DCRs in your Azure subscription from **Data Collection Rules** in the **Monitor** menu in the Azure portal. DCRs support other data collection scenarios in Azure Monitor, so not all of your DCRs will necessarily be for virtual machines.
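+
+If you prefer to work programmatically, you can also list the DCRs in a subscription with an Azure Resource Graph query. The following is a minimal sketch, assuming you run it from Azure Resource Graph Explorer:
+
+```kusto
+// List data collection rules with their resource group and location.
+resources
+| where type == "microsoft.insights/datacollectionrules"
+| project name, resourceGroup, location
+```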
+++
+### Create data collection rules
+There are multiple methods to create data collection rules depending on the data collection scenario. In some cases, the Azure portal will walk you through the configuration while other scenarios will require you to edit the DCR directly. When you configure VM insights, it will create a preconfigured DCR for you automatically. The sections below identify common data to collect and how to configure data collection.
+
+In some cases, you may need to [edit an existing DCR](../essentials/data-collection-rule-edit.md) to add functionality. For example, you may use the Azure portal to create a DCR that collects Windows or Syslog events. You then want to add a transformation to that DCR to filter out columns in the events that you don't want to collect.
+
+As your environment matures and grows in complexity, you should implement a strategy for organizing your DCRs to assist in their management. See [Best practices for data collection rule creation and management in Azure Monitor](../essentials/data-collection-rule-best-practices.md) for guidance on different strategies.
+
+## Controlling costs
+Since your Azure Monitor cost is dependent on how much data you collect, you should ensure that you're not collecting any more than you need to meet your monitoring requirements. Your configuration will be a balance between your budget and how much insight you want into the operation of your virtual machines.
++
+A typical virtual machine will generate between 1 GB and 3 GB of data per month, but this data size is highly dependent on the configuration of the machine itself, the workloads running on it, and the configuration of your data collection rules. Before you configure data collection across your entire virtual machine environment, you should begin collection on some representative machines to better predict your expected costs when deployed across your environment. Use log queries in [Data volume by computer](../logs/analyze-usage.md#data-volume-by-computer) to determine the amount of billable data collected for each machine and adjust accordingly.
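+
+For example, the following query is a sketch based on the standard usage queries linked above; it returns the billable data volume per computer over the last day so you can identify your most expensive machines:
+
+```kusto
+// Billable data volume by computer over the last day.
+find where TimeGenerated > ago(1d) project _BilledSize, _IsBillable, Computer
+| where _IsBillable == true
+| summarize BillableDataBytes = sum(_BilledSize) by Computer
+| sort by BillableDataBytes desc
+```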
+
+Each data source that you collect may have a different method for filtering out unwanted data. You can also use [transformations](../essentials/data-collection-transformations.md) to implement more granular filtering and to remove data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
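+
+As a minimal sketch, a transformation is a KQL statement that runs against the incoming stream, referenced as `source`. The column names below are assumptions based on the Windows [Event](/azure/azure-monitor/reference/tables/event) table schema; adjust them for your own stream:
+
+```kusto
+// Keep only the event levels needed for alerting and drop large columns.
+source
+| where EventLevelName in ("Error", "Warning")
+| project-away ParameterXml, EventData
+```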
+++
+## Default data collection
+Azure Monitor will automatically perform the following data collection without requiring any additional configuration.
+
+### Platform metrics
+Platform metrics for Azure virtual machines include important host metrics such as CPU, network, and disk utilization. They can be viewed on the machine's [Overview page](monitor-virtual-machine-analyze.md#single-machine-experience) in the Azure portal, analyzed with [metrics explorer](../essentials/tutorial-metrics.md), and used for [metric alerts](tutorial-monitor-vm-alert-recommended.md).
+
+### Activity log
+The [Activity log](../essentials/activity-log.md) is collected automatically and includes the recent activity of the machine, such as any configuration changes and when it was stopped and started. You can view the platform metrics and Activity log collected for each virtual machine host in the Azure portal.
+
+You can [view the Activity log](../essentials/activity-log.md#view-the-activity-log) for an individual machine or for all resources in a subscription. You should [create a diagnostic setting](../essentials/diagnostic-settings.md) to send this data into the same Log Analytics workspace used by your Azure Monitor agent to analyze it with the other monitoring data collected for the virtual machine. There's no cost for ingestion or retention of Activity log data.
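+
+After the diagnostic setting is in place, Activity log entries land in the [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity) table. As a simple example, the following query sketch lists recent start and deallocate operations for virtual machines:
+
+```kusto
+// Recent VM start and deallocate operations from the Activity log.
+AzureActivity
+| where OperationNameValue =~ "Microsoft.Compute/virtualMachines/start/action"
+    or OperationNameValue =~ "Microsoft.Compute/virtualMachines/deallocate/action"
+| project TimeGenerated, OperationNameValue, ActivityStatusValue, Caller, _ResourceId
+```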
+
+### VM availability information in Azure Resource Graph
+[Azure Resource Graph](../../governance/resource-graph/overview.md) is an Azure service that allows you to use the same KQL query language used in log queries to query your Azure resources at scale, with complex filtering, grouping, and sorting by resource properties. You can query [VM health annotations](../../service-health/resource-health-vm-annotation.md) in Azure Resource Graph (ARG) for detailed failure attribution and downtime analysis.
+
+See [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monitor-virtual-machine-analyze.md) for details on what data is collected and how to view it.
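+
+For example, the following Resource Graph query is a sketch that surfaces VM availability annotations; the property names (`annotationName`, `context`, `summary`) are assumptions based on the annotation schema linked above:
+
+```kusto
+// VM availability annotations with their context and summary.
+HealthResources
+| where type =~ "microsoft.resourcehealth/resourceannotations"
+| extend annotationName = tostring(properties.annotationName),
+         context = tostring(properties.context),
+         summary = tostring(properties.summary)
+| project resourceGroup, name, annotationName, context, summary
+```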
+
+### VM insights
+When you enable VM insights, it creates a data collection rule with the **_MSVMI-_** prefix that collects the following information. You can use this same DCR with other machines instead of creating a new one for each VM.
+
+- Common performance counters for the client operating system are sent to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table in the Log Analytics workspace. Counter names will be normalized to use the same common name regardless of the operating system type.
+- If you specified processes and dependencies to be collected, then the following tables are populated:
+
+ - [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) - Traffic for open server ports on the machine
+ - [VMComputer](/azure/azure-monitor/reference/tables/vmcomputer) - Inventory data for the machine
+ - [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) - Traffic for inbound and outbound connections to and from the machine
+ - [VMProcess](/azure/azure-monitor/reference/tables/vmprocess) - Processes running on the machine
+
+By default, [VM insights](../vm/vminsights-overview.md) doesn't enable collection of processes and dependencies, to save data ingestion costs. This data is required for the map feature, and collecting it also deploys the Dependency agent to the machine. [Enable this collection](vminsights-enable-portal.md#enable-vm-insights-for-azure-monitor-agent) if you want to use the map feature.
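+
+Once VM insights is collecting data, a query like the following sketch returns hourly average processor utilization per machine from the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table:
+
+```kusto
+// Hourly average CPU utilization per machine from VM insights data.
+InsightsMetrics
+| where TimeGenerated > ago(1d)
+| where Namespace == "Processor" and Name == "UtilizationPercentage"
+| summarize AvgCPU = avg(Val) by Computer, bin(TimeGenerated, 1h)
+```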
++++
+## Collect Windows and Syslog events
+The operating system and applications in virtual machines will often write to the Windows Event Log or Syslog. You may create an alert as soon as a single event is found or wait for a series of matching events within a particular time window. You may also collect events for later analysis such as identifying particular trends over time, or for performing troubleshooting after a problem occurs.
+
+See [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) for guidance on creating a DCR to collect Windows and Syslog events. This allows you to quickly create a DCR using the most common Windows event logs and Syslog facilities, filtered by event level. For more granular filtering by criteria such as event ID, you can create a custom filter using [XPath queries](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries). You can further filter the collected data by [editing the DCR](../essentials/data-collection-rule-edit.md) to add a [transformation](../essentials/data-collection-transformations.md).
+
+Use the following guidance as a recommended starting point for event collection. Modify the DCR settings to filter unneeded events and add additional events depending on your requirements.
++
+| Source | Strategy |
+|:|:|
+| Windows events | Collect at least **Critical**, **Error**, and **Warning** events for the **System** and **Application** logs to support alerting. Add **Information** events to analyze trends and support troubleshooting. **Verbose** events will rarely be useful and typically shouldn't be collected. |
+| Syslog events | Collect at least **LOG_WARNING** events for each facility to support alerting. Add **LOG_INFO** events to analyze trends and support troubleshooting. **LOG_DEBUG** events will rarely be useful and typically shouldn't be collected. |
++
+### Sample log queries - Windows events
+
+| Query | Description |
+|:|:|
+| `Event` | All Windows events. |
+| `Event | where EventLevelName == "Error"` |All Windows events with severity of error. |
+| `Event | summarize count() by Source` |Count of Windows events by source. |
+| `Event | where EventLevelName == "Error" | summarize count() by Source` |Count of Windows error events by source. |
+
+### Sample log queries - Syslog events
+
+| Query | Description |
+|: |: |
+| `Syslog` |All Syslogs |
+| `Syslog | where SeverityLevel == "error"` |All Syslog records with severity of error |
+| `Syslog | summarize AggregatedValue = count() by Computer` |Count of Syslog records by computer |
+| `Syslog | summarize AggregatedValue = count() by Facility` |Count of Syslog records by facility |
++
+## Collect performance counters
+Performance data from the client can be sent to [Azure Monitor Metrics](../essentials/data-platform-metrics.md), [Azure Monitor Logs](../logs/data-platform-logs.md), or both; you'll typically send it to both destinations. If you enabled VM insights, then a common set of performance counters is collected in Logs to support its performance charts. You can't modify this set of counters, but you can create additional DCRs to collect additional counters and send them to different destinations.
+
+There are multiple reasons why you would want to create a DCR to collect guest performance:
+
+- You aren't using VM insights, so client performance data isn't already being collected.
+- You want to collect additional performance counters that VM insights doesn't collect.
+- You want to collect performance counters from other workloads running on your client.
+- You want to send performance data to [Azure Monitor Metrics](../essentials/data-platform-metrics.md), where you can use it with metrics explorer and metrics alerts.
+
+See [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md) for guidance on creating a DCR to collect performance counters. This allows you to quickly create a DCR using the most common counters, and you can add any other counters that your workloads expose.
+
+> [!NOTE]
+> You may choose to combine performance and event collection in the same data collection rule.
+++
+| Destination | Description |
+|:|:|
+| Metrics | Host metrics are automatically sent to Azure Monitor Metrics, and you can use a DCR to collect client metrics so they can be analyzed together with [metrics explorer](../essentials/metrics-getting-started.md) or used with [metrics alerts](../alerts/alerts-create-new-alert-rule.md?tabs=metric). This data is stored for 93 days. |
+| Logs | Performance data stored in Azure Monitor Logs can be retained for extended periods and can be analyzed along with your event data using [log queries](../logs/log-query-overview.md) with [Log Analytics](../logs/log-analytics-overview.md) or [log query alerts](../alerts/alerts-create-new-alert-rule.md?tabs=log). You can also correlate data using complex logic across multiple machines, regions, and subscriptions.<br><br>Performance data is sent to the following tables:<br>VM insights - [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics)<br>Other performance data - [Perf](/azure/azure-monitor/reference/tables/perf) |
+
+### Sample log queries
+The following samples use the `Perf` table with custom performance data. For details on performance data collected by VM insights, see [How to query logs from VM insights](../vm/vminsights-log-query.md#performance-records).
+
+| Query | Description |
+|: |:|
+| `Perf` | All Performance data |
+| `Perf | where Computer == "MyComputer"` |All Performance data from a particular computer |
+| `Perf | where CounterName == "Current Disk Queue Length"` |All Performance data for a particular counter |
+| `Perf | where ObjectName == "Processor" and CounterName == "% Processor Time" and InstanceName == "_Total" | summarize AVGCPU = avg(CounterValue) by Computer` |Average CPU Utilization across all computers |
+| `Perf | where CounterName == "% Processor Time" | summarize AggregatedValue = max(CounterValue) by Computer` |Maximum CPU Utilization across all computers |
+| `Perf | where ObjectName == "LogicalDisk" and CounterName == "Current Disk Queue Length" and Computer == "MyComputerName" | summarize AggregatedValue = avg(CounterValue) by InstanceName` |Average Current Disk Queue length across all the instances of a given computer |
+| `Perf | where CounterName == "Disk Transfers/sec" | summarize AggregatedValue = percentile(CounterValue, 95) by Computer` |95th Percentile of Disk Transfers/Sec across all computers |
+| `Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 1h), Computer` |Hourly average of CPU usage across all computers |
+| `Perf | where Computer == "MyComputer" and CounterName startswith_cs "%" and InstanceName == "_Total" | summarize AggregatedValue = percentile(CounterValue, 70) by bin(TimeGenerated, 1h), CounterName` | Hourly 70 percentile of every % percent counter for a particular computer |
+| `Perf | where CounterName == "% Processor Time" and InstanceName == "_Total" and Computer == "MyComputer" | summarize ["min(CounterValue)"] = min(CounterValue), ["avg(CounterValue)"] = avg(CounterValue), ["percentile75(CounterValue)"] = percentile(CounterValue, 75), ["max(CounterValue)"] = max(CounterValue) by bin(TimeGenerated, 1h), Computer` |Hourly average, minimum, maximum, and 75-percentile CPU usage for a specific computer |
+| | |
+| `Perf | where ObjectName == "MSSQL$INST2:Databases" and InstanceName == "master"` | All Performance data from the Database performance object for the master database from the named SQL Server instance INST2. |
+
+## Collect text logs
+Some applications write events to a text log stored on the virtual machine. Create a [custom table and DCR](../agents/data-collection-text-log.md) to collect this data. You define the location of the text log, its detailed configuration, and the schema of the custom table. There's a cost for the ingestion and retention of this data in the workspace.
+
+### Sample log queries
+The column names used here are for example only. The column names for your log will most likely be different.
+
+| Query | Description |
+|: |: |
+| `MyApp_CL | summarize count() by code` | Count the number of events by code. |
+| `MyApp_CL | where status == "Error" | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)` | Create an alert rule on any error event. |
+
+++
+## Collect IIS logs
+IIS running on Windows machines writes logs to a text file. Configure IIS log collection using [Collect IIS logs with Azure Monitor Agent](../agents/data-collection-iis.md). Records from the IIS log are stored in the [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) table in the Log Analytics workspace. There's a cost for the ingestion and retention of this data in the workspace.
+
+### Sample log queries
++
+| Query | Description |
+|: |: |
+| `W3CIISLog | where csHost=="www.contoso.com" | summarize count() by csUriStem` | Count the IIS log entries by URL for the host www.contoso.com. |
+| `W3CIISLog | summarize sum(csBytes) by Computer` | Review the total bytes received by each IIS machine. |
++
+## Monitor a service or daemon
+Azure Monitor on its own has no ability to monitor the status of a Windows service or Linux daemon. There are some possible workarounds, such as looking for events in the Windows event log, but they're unreliable. You can also look for the process associated with the service in the [VMProcess](/azure/azure-monitor/reference/tables/vmprocess) table populated by VM insights, but this table only updates every hour, which typically isn't sufficient for alerting. For reliable monitoring, enable the [Change Tracking and Inventory](../../automation/change-tracking/overview.md) solution in [Azure Automation](../../automation/automation-intro.md).
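+
+If hourly granularity is acceptable, the following sketch flags machines where a given process hasn't been reported recently. The process name is a placeholder; substitute the executable behind your service or daemon:
+
+```kusto
+// Flag machines where the process hasn't been reported in the last two hours.
+VMProcess
+| where ExecutableName =~ "w3wp" // placeholder process name
+| summarize LastReported = max(TimeGenerated) by Computer
+| where LastReported < ago(2h)
+```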
+
+> [!NOTE]
+> The Change Tracking and Inventory solution is different from the [Change Analysis](vminsights-change-analysis.md) feature in VM insights. Change Analysis is in public preview and isn't yet included in this guide.
+
+For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](../../automation/quickstarts/create-azure-automation-account-portal.md) to support the solution.
+
+When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for log queries and log query alert rules.
+
+| Table | Description |
+|:|:|
+| [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) | Changes to in-guest configuration data |
+| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Last reported state for in-guest configuration data |
++
+### Sample log queries
+
+- **List all services and daemons that have recently started.**
+
+ ```kusto
+ ConfigurationChange
+ | where ConfigChangeType == "Daemons" or ConfigChangeType == "WindowsServices"
+ | where SvcState == "Running"
+ | sort by Computer, SvcName
+ ```
+
+- **Alert when a specific service stops.**
+Use this query in a log alert rule.
+
+ ```kusto
+ ConfigurationData
+ | where SvcName == "W3SVC"
+ | where SvcState == "Stopped"
+ | where ConfigDataType == "WindowsServices"
+ | where SvcStartupType == "Auto"
+ | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
+ ```
+
+- **Alert when one of a set of services stops.**
+Use this query in a log alert rule.
+
+ ```kusto
+ let services = dynamic(["omskd","cshost","schedule","wuauserv","healthservice","efs","wsusservice","SrmSvc","CertSvc","wmsvc","vpxd","winmgmt","netman","smsexec","w3svc","sms_site_vss_writer","ccmexe","spooler","eventsystem","netlogon","kdc","ntds","lsmserv","gpsvc","dns","dfsr","dfs","dhcp","DNSCache","dmserver","messenger","w32time","plugplay","rpcss","lanmanserver","lmhosts","eventlog","lanmanworkstation","wnirm","mpssvc","dhcpserver","VSS","ClusSvc","MSExchangeTransport","MSExchangeIS"]);
+ ConfigurationData
+ | where ConfigDataType == "WindowsServices"
+ | where SvcStartupType == "Auto"
+ | where SvcName in (services)
+ | where SvcState == "Stopped"
+ | project TimeGenerated, Computer, SvcName, SvcDisplayName, SvcState
+ | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
+ ```
+
+## Monitor a port
+Port monitoring verifies that a machine is listening on a particular port. Two potential strategies for port monitoring are described here.
+
+### Dependency agent tables
+If you're using VM insights with Processes and dependencies collection enabled, you can use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The VMBoundPort table is updated every minute with each process running on the computer and the port it's listening on. You can create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
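+
+For example, the following sketch could back a log query alert that fires when a machine stops reporting a listener on a given port. The port is a placeholder; because the table updates every minute, a 10-minute window gives some tolerance for late data:
+
+```kusto
+// Fire when no listener on the port has been reported in the last 10 minutes.
+VMBoundPort
+| where Port == 443 // placeholder port
+| summarize LastReported = max(TimeGenerated) by Computer
+| where LastReported < ago(10m)
+```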
++
+- **Review the count of ports open on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**
+
+ ```kusto
+ VMBoundPort
+ | where Ip != "127.0.0.1"
+ | summarize by Computer, Machine, Port, Protocol
+ | summarize OpenPorts=count() by Computer, Machine
+ | order by OpenPorts desc
+ ```
+
+- **List the bound ports on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**
+
+ ```kusto
+ VMBoundPort
+ | distinct Computer, Port, ProcessName
+ ```
++
+- **Analyze network activity by port to determine how your application or service is configured.**
+
+ ```kusto
+ VMBoundPort
+ | where Ip != "127.0.0.1"
+ | summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
+ | project-away TimeGenerated
+ | order by Machine, Computer, Port, Ip, ProcessName
+ ```
+
+- **Review bytes sent and received trends for your VMs.**
+
+ ```kusto
+ VMConnection
+ | summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated,1hr), Computer
+ | order by Computer desc
+ | render timechart
+ ```
+
+- **Use connection failures over time to determine if the failure rate is stable or changing.**
+
+ ```kusto
+ VMConnection
+ | where Computer == "acme-demo" // replace with a computer name in your environment
+ | extend bythehour = datetime_part("hour", TimeGenerated)
+ | project bythehour, LinksFailed
+ | summarize failCount = count() by bythehour
+ | sort by bythehour asc
+ | render timechart
+ ```
+
+- **Link status trends to analyze the behavior and connection status of a machine.**
+
+ ```kusto
+ VMConnection
+ | where Computer == "acme-demo" // replace with a computer name in your environment
+ | summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
+ | render timechart
+ ```
+
+### Connection Monitor
+The [Connection Monitor](../../network-watcher/connection-monitor-overview.md) feature of [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is used to test connections to a port on a virtual machine. A test verifies that the machine is listening on the port and that it's accessible on the network.
+Connection Monitor requires the Network Watcher extension on the client machine initiating the test. It doesn't need to be installed on the machine being tested. For details, see [Tutorial - Monitor network communication using the Azure portal](../../network-watcher/connection-monitor.md).
+
+There's an extra cost for Connection Monitor. For details, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
++
+## Run a process on a local machine
+Monitoring of some workloads requires a local process. An example is a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There's no direct charge for Hybrid Runbook Worker, but there is a cost for the runbook jobs that it runs.
+
+The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log and then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry.
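+
+For example, if the runbook writes its results to a text log that you collect into a hypothetical custom table named `MyRunbookResults_CL`, a log query alert rule could use a sketch like the following. The table name and the `ERROR` marker are assumptions; match them to whatever your runbook actually writes:
+
+```kusto
+// Fire on any runbook result entry flagged as an error.
+MyRunbookResults_CL
+| where RawData has "ERROR" // assumes the runbook writes a recognizable error marker
+| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
+```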
+++
+## Next steps
+
+* [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
+* [Create alerts from collected data](monitor-virtual-machine-alerts.md)
+
azure-monitor Monitor Virtual Machine Management Packs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-management-packs.md
+
+ Title: 'Monitor virtual machines with Azure Monitor: Migrate management pack logic'
+description: Includes a general approach that existing customers of System Center Operations Manager (SCOM) might take to translate critical logic in their management packs to Azure Monitor.
++++ Last updated : 01/10/2023++++
+# Monitor virtual machines with Azure Monitor: Migrate management pack logic
+This article is part of the guide [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It discusses a general approach that existing customers of System Center Operations Manager (SCOM) might take to translate critical logic in their management packs to Azure Monitor.
+
+> [!NOTE]
+> [Azure Monitor SCOM Managed Instance (preview)](scom-managed-instance-overview.md) is now in public preview. It allows you to move your existing SCOM environment into Azure and manage it with Azure Monitor in the Azure portal while continuing to use the same management packs. The rest of the recommendations in this article still apply as you migrate your monitoring logic into Azure Monitor.
++
+## Translating logic
+You may currently use SCOM to monitor your virtual machines and their workloads and are starting to consider which monitoring you can move to Azure Monitor. As described in [Azure Monitor for existing Operations Manager customers](../azure-monitor-operations-manager.md), you may continue using SCOM for some period of time until you no longer require the extensive monitoring that SCOM provides. See [Cloud monitoring guide: Monitoring platforms overview](/azure/cloud-adoption-framework/manage/monitor/platform-overview) for a complete comparison of Azure Monitor and SCOM.
+
+There are no migration tools to convert SCOM management packs to Azure Monitor because the platforms are fundamentally different. Your migration instead constitutes a standard Azure Monitor implementation while you continue to use SCOM. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, then you can start to retire different management packs and agents in Operations Manager.
+
+Management packs in SCOM contain rules and monitors that combine collection of data and the resulting alert into a single end-to-end workflow. Data that's already been collected by SCOM is rarely used for alerting. Azure Monitor separates data collection and alerts into separate processes. Alert rules access data from Azure Monitor Logs and Azure Monitor Metrics that has already been collected from agents. Also, rules and monitors are typically narrowly focused on very specific data, such as a particular event or performance counter. Data collection rules in Azure Monitor are typically broader, collecting multiple sets of events and performance counters in a single DCR.
+++
+As you migrate management pack logic, focus on the following two areas in Azure Monitor:
+
+- Data that you need to collect to support alerting, analysis, and visualization. See [Monitor virtual machines with Azure Monitor: Data collection](monitor-virtual-machine-data-collection.md).
+- Alert rules that analyze the collected data to proactively notify you of issues. See [Monitor virtual machines with Azure Monitor: Alerts](monitor-virtual-machine-alerts.md).
++
+## Identify critical management pack logic
+
+Instead of attempting to replicate the entire functionality of a management pack, analyze the critical monitoring provided by the management pack. Decide whether you can replicate those monitoring requirements by using the methods described in the previous sections. In many cases, you can configure data collection and alert rules in Azure Monitor that replicate enough functionality that you can retire a particular management pack. Management packs can often include hundreds and even thousands of rules and monitors.
+
+In most scenarios, Operations Manager combines data collection and alerting conditions in the same rule or monitor. In Azure Monitor, you must configure data collection and an alert rule for any alerting scenarios.
+
+One strategy is to focus on those monitors and rules that triggered alerts in your environment. Refer to [existing reports available in Operations Manager](/system-center/scom/manage-reports-installed-during-setup), such as **Alerts** and **Most Common Alerts**, which can help you identify alerts over time. You can also run the following query on the Operations Database to evaluate the most common recent alerts.
+
+```sql
+select AlertName, COUNT(AlertName) as 'Total Alerts' from
+Alert.vAlertResolutionState ars
+inner join Alert.vAlertDetail adt on ars.AlertGuid = adt.AlertGuid
+inner join Alert.vAlert alt on ars.AlertGuid = alt.AlertGuid
+group by AlertName
+order by 'Total Alerts' DESC
+```
+
+Evaluate the output to identify specific alerts for migration. Ignore any alerts that were tuned out or are known to be problematic. Review your management packs to identify any critical alerts of interest that never fired.
+++
+## Synthetic transactions
+Management packs often make use of synthetic transactions that connect to an application or service running on a machine to simulate a user connection or actual user traffic. If the application is available, you can assume that the machine is running properly. [Application Insights](../app/app-insights-overview.md) in Azure Monitor provides this functionality. It only works for applications that are accessible from the internet. For internal applications, you must open a firewall to allow access from specific Microsoft URLs performing the test, or you can use an alternate monitoring solution, such as System Center Operations Manager.
+
+|Method | Description |
+|:|:|
+| [URL test](../app/monitor-web-app-availability.md) | Ensures that HTTP is available and returning a web page |
+| [Multistep test](../app/availability-multistep.md) | Simulates a user session |
+
+## Next steps
+
+* [Learn how to analyze data in Azure Monitor logs using log queries](../logs/get-started-queries.md)
+* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md)
azure-monitor Monitor Virtual Machine Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-security.md
- Title: 'Monitor virtual machines with Azure Monitor: Security'
-description: Learn about services for monitoring security of virtual machines and how they relate to Azure Monitor.
---- Previously updated : 06/28/2022----
-# Monitor virtual machines with Azure Monitor: Security monitoring
-This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes the Azure services for monitoring security for your virtual machines and how they relate to Azure Monitor. Azure Monitor was designed to monitor the availability and performance of your virtual machines and other cloud resources. While the operational data stored in Azure Monitor might be useful for investigating security incidents, other services in Azure were designed to monitor security.
-
-> [!NOTE]
-> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
-
-> [!IMPORTANT]
-> The security services have their own cost independent of Azure Monitor. Before you configure these services, refer to their pricing information to determine your appropriate investment in their usage.
-
-## Azure services for security monitoring
-Azure Monitor focuses on operational data like Activity logs, Metrics, and Log Analytics supported sources, including Windows Events (excluding security events), performance counters, logs, and Syslog. Security monitoring in Azure is performed by Microsoft Defender for Cloud and Microsoft Sentinel. These services each have additional cost, so you should determine their value in your environment before you implement them.
--
-## Integration with Azure Monitor
-The following table lists the integration points for Azure Monitor with the security services. All the services use the same Log Analytics agent, which reduces complexity because there are no other components being deployed to your virtual machines. Defender for Cloud and Microsoft Sentinel store their data in a Log Analytics workspace so that you can use log queries to correlate data collected by the different services. Or you can create a custom workbook that combines security data and availability and performance data in a single view.
-
-| Integration point | Azure Monitor | Microsoft Defender for Cloud | Microsoft Sentinel | Defender for Endpoint |
-|:|:|:|:|:|
-| Collects security events | | X | X | X |
-| Stores data in Log Analytics workspace | X | X | X | |
-| Uses Log Analytics agent | X | X | X | X |
---
-## Agent deployment
-You can configure Defender for Cloud to automatically deploy the Log Analytics agent to Azure virtual machines. While this might seem redundant with Azure Monitor deploying the same agent, you'll most likely want to enable both because they'll each perform their own configuration. For example, if Defender for Cloud attempts to provision a machine that's already being monitored by Azure Monitor, it will use the agent that's already installed and add the configuration for the Defender for Cloud workspace.
-
-## Next steps
-
-* [Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
-* [Create alerts from collected data](monitor-virtual-machine-alerts.md)
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
- Title: 'Monitor virtual machines with Azure Monitor: Workloads'
-description: Learn how to monitor the guest workloads of virtual machines in Azure Monitor.
---- Previously updated : 06/28/2022----
-# Monitor virtual machines with Azure Monitor: Workloads
-This article is part of the scenario [Monitor virtual machines and their workloads in Azure Monitor](monitor-virtual-machine.md). It describes how to monitor workloads that are running on the guest operating systems of your virtual machines. This article includes details on analyzing and alerting on different sources of data on your virtual machines.
-
-> [!NOTE]
-> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md) or [Tutorial: Collect guest logs and metrics from Azure virtual machine](tutorial-monitor-vm-guest.md).
-
-## Configure additional data collection
-VM insights collects only performance data from the guest operating system of enabled machines. You can enable the collection of additional performance data, events, and other monitoring data from the agent by configuring the Log Analytics workspace. It's configured only once because any agent that connects to the workspace automatically downloads the configuration and immediately starts collecting the defined data.
-
-For a list of the data sources available and details on how to configure them, see [Agent data sources in Azure Monitor](../agents/agent-data-sources.md).
-
-> [!NOTE]
-> You can't selectively configure data collection for different machines. All machines connected to the workspace use the configuration for that workspace.
-
-> [!IMPORTANT]
-> Be careful to collect only the data that you require. Costs are associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
-
-## Controlling costs
-Be careful to collect only the data that you require. Costs are associated with any data collected in your workspace. The data that you collect should only support particular analysis and alerting scenarios.
---
-Virtual machines can vary significantly in the amount of data they collect, depending on the amount of telemetry generated by the applications and services they have installed. Since your Azure Monitor cost is dependent on how much data you collect, you want to ensure that you're not collecting any more data than you require to meet your monitoring requirements.
-
-Each data source that you collect may have a different method for filtering out unwanted data. You can also use transformations to implement more granular filtering and to remove data from columns that provide little value. For example, you might have a Windows event that's valuable for alerting, but it includes columns with redundant or excessive data. You can create a transformation that allows the event to be collected but removes this excessive data.
-
-The following table lists the different data sources on a VM and how to filter the data they collect.
-
-> [!NOTE]
-> Azure tables here refers to tables that are created and maintained by Microsoft and documented in the [Azure Monitor reference](/azure/azure-monitor/reference/). Custom tables are created by custom applications and have a suffix of _CL in their name.
-
-| Target | Description | Filtering method |
-|:|:|:|
-| Azure tables | [Collect data from standard sources](../agents/data-collection-rule-azure-monitor-agent.md) such as Windows events, Syslog, and performance data and send to Azure tables in Log Analytics workspace. | Use [XPath in the data collection rule (DCR)](../agents/data-collection-rule-azure-monitor-agent.md#filter-events-using-xpath-queries) to collect specific data from client machines.<br><br>Use transformations to further filter specific events or remove unnecessary columns. |
-| Custom tables | [Create a data collection rule](../agents/data-collection-text-log.md) to collect file-base text logs from the agent. | Add a [transformation](../essentials/data-collection-transformations.md) to the data collection rule. |
--
-## Convert management pack logic
-A significant number of customers who implement Azure Monitor currently monitor their virtual machine workloads by using management packs in System Center Operations Manager. There are no migration tools to convert assets from Operations Manager to Azure Monitor because the platforms are fundamentally different. Your migration instead constitutes a standard Azure Monitor implementation while you continue to use Operations Manager. As you customize Azure Monitor to meet your requirements for different applications and components and as it gains more features, then you can start to retire different management packs and agents in Operations Manager.
-
-Instead of attempting to replicate the entire functionality of a management pack, analyze the critical monitoring provided by the management pack. Decide whether you can replicate those monitoring requirements by using the methods described in the previous sections. In many cases, you can configure data collection and alert rules in Azure Monitor that replicate enough functionality that you can retire a particular management pack. Management packs can often include hundreds and even thousands of rules and monitors.
-
-In most scenarios, Operations Manager combines data collection and alerting conditions in the same rule or monitor. In Azure Monitor, you must configure data collection and an alert rule for any alerting scenarios.
-
-One strategy is to focus on those monitors and rules that triggered alerts in your environment. Refer to [existing reports available in Operations Manager](/system-center/scom/manage-reports-installed-during-setup), such as **Alerts** and **Most Common Alerts**, which can help you identify alerts over time. You can also run the following query on the Operations Database to evaluate the most common recent alerts.
-
-```sql
-select AlertName, COUNT(AlertName) as 'Total Alerts' from
-Alert.vAlertResolutionState ars
-inner join Alert.vAlertDetail adt on ars.AlertGuid = adt.AlertGuid
-inner join Alert.vAlert alt on ars.AlertGuid = alt.AlertGuid
-group by AlertName
-order by 'Total Alerts' DESC
-```
-
-Evaluate the output to identify specific alerts for migration. Ignore any alerts that were tuned out or are known to be problematic. Review your management packs to identify any critical alerts of interest that never fired.
-
-## Windows or Syslog event
-In this common monitoring scenario, the operating system and applications write to the Windows events or Syslog. Create an alert as soon as a single event is found. Or you can wait for a series of matching events within a particular time window.
-
-To collect these events, configure a Log Analytics workspace to collect [Windows events](../agents/data-sources-windows-events.md) or [Syslog events](../agents/data-sources-syslog.md). There's a cost for the ingestion and retention of this data in the workspace.
-
-Windows events are stored in the [Event](/azure/azure-monitor/reference/tables/event) table and Syslog events are stored in the [Syslog](/azure/azure-monitor/reference/tables/syslog) table in the Log Analytics workspace.
-
-### Sample log queries
--- **Count the number of events by computer, event log, and event type.**-
- ```kusto
- Event
- | summarize count() by Computer, EventLog, EventLevelName
- | sort by Computer, EventLog, EventLevelName
- ```
-- **Count the number of events by computer, event log, and event ID.**
-
- ```kusto
- Event
- | summarize count() by Computer, EventLog, EventID
- | sort by Computer, EventLog, EventID
- ```
-
-### Sample alert rules
-The following samples create an alert when a specific event is found. They use a metric measurement alert rule to create a separate alert for each computer.
--- **Create an alert rule on a specific Windows event.**-
- This example shows an event in the Application log. Specify a threshold of 0 and consecutive breaches greater than 0.
-
- ```kusto
- Event
- | where EventLog == "Application"
- | where EventID == 123
- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
- ```
--- **Create an alert rule on Syslog events with a particular severity.**-
- The following example shows error authorization events. Specify a threshold of 0 and consecutive breaches greater than 0.
-
- ```kusto
- Syslog
- | where Facility == "auth"
- | where SeverityLevel == "err"
- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
- ```
-
-## Custom performance counters
-You might need performance counters created by applications or the guest operating system that aren't collected by VM insights. Configure the Log Analytics workspace to collect this [performance data](../agents/data-sources-performance-counters.md). There's a cost for the ingestion and retention of this data in the workspace. Be careful not to collect performance data that's already being collected by VM insights.
-
-Performance data configured by the workspace is stored in the [Perf](/azure/azure-monitor/reference/tables/perf) table. This table has a different structure than the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table used by VM insights.
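-
-For example, here's a minimal sketch of querying the Perf table directly. The counter names are common Windows defaults used only for illustration and aren't guaranteed to be collected in your workspace; the equivalent data from VM insights would come from InsightsMetrics, with its Namespace, Name, and Val columns.
-
-```kusto
-// Average CPU per computer in 5-minute bins, from workspace-collected counters
-Perf
-| where ObjectName == "Processor" and CounterName == "% Processor Time"
-| where InstanceName == "_Total"
-| summarize avg(CounterValue) by Computer, bin(TimeGenerated, 5m)
-| sort by Computer asc
-```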
-
-### Sample log queries
-
-For examples of log queries that use custom performance counters, see [Log queries with Performance records](../agents/data-sources-performance-counters.md#log-queries-with-performance-records).
-
-### Sample alerts
--- **Create an alert on the maximum value of a counter.**
-
- ```kusto
- Perf
- | where CounterName == "My Counter"
- | summarize AggregatedValue = max(CounterValue) by Computer
- ```
--- **Create an alert on the average value of a counter.**-
- ```kusto
- Perf
- | where CounterName == "My Counter"
- | summarize AggregatedValue = avg(CounterValue) by Computer
- ```
-
-## Text logs
-Some applications write events to a text log stored on the virtual machine. Define a [custom log](../agents/data-sources-custom-logs.md) in the Log Analytics workspace to collect these events. You define the location of the text log and its detailed configuration. There's a cost for the ingestion and retention of this data in the workspace.
-
-Events from the text log are stored in a table with a name similar to **MyTable_CL**. You define the name and structure of the log when you configure it.
-
-### Sample log queries
-The column names used here are examples only. You define the column names for your particular log when you configure it, so the column names in your log will most likely be different.
--- **Count the number of events by code.**
-
- ```kusto
- MyApp_CL
- | summarize count() by code
- ```
-
-### Sample alert rule
--- **Create an alert rule on any error event.**
-
- ```kusto
- MyApp_CL
- | where status == "Error"
- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
- ```
-## IIS logs
-IIS running on Windows machines writes logs to a text file. Configure a Log Analytics workspace to collect [IIS logs](../agents/data-sources-iis-logs.md). There's a cost for the ingestion and retention of this data in the workspace.
-
-Records from the IIS log are stored in the [W3CIISLog](/azure/azure-monitor/reference/tables/w3ciislog) table in the Log Analytics workspace.
-
-### Sample log queries
--- **Count the IIS log entries by URL for the host www.contoso.com.**
-
- ```kusto
- W3CIISLog
- | where csHost=="www.contoso.com"
- | summarize count() by csUriStem
- ```
--- **Review the total bytes received by each IIS machine.**-
- ```kusto
- W3CIISLog
- | summarize sum(csBytes) by Computer
- ```
-
-### Sample alert rule
--- **Create an alert rule on any record with a return status of 500.**
-
- ```kusto
- W3CIISLog
- | where scStatus==500
- | summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
- ```
-
-## Service or daemon
-To monitor the status of a Windows service or Linux daemon, enable the [Change Tracking and Inventory](../../automation/change-tracking/overview.md) solution in [Azure Automation](../../automation/automation-intro.md).
-Azure Monitor itself has no ability to monitor the status of a service or daemon. There are possible workarounds, such as looking for events in the Windows event log, but these methods are unreliable. You can also look for the process associated with the service running on the machine from the [VMProcess](/azure/azure-monitor/reference/tables/vmprocess) table. This table only updates every hour, which isn't typically sufficient for alerting.
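-
-As a rough illustration of the VMProcess approach, the following sketch checks when the process backing a service was last seen on each machine. The executable name is hypothetical, and because the table updates only hourly, treat this as an investigation query rather than a timely alert.
-
-```kusto
-VMProcess
-| where ExecutableName == "w3wp" // hypothetical executable backing the service
-| summarize LastSeen = max(TimeGenerated) by Computer
-| where LastSeen < ago(2h) // machines where the process hasn't reported recently
-```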
-
-> [!NOTE]
-> The Change Tracking and Inventory solution is different from the [Change Analysis](vminsights-change-analysis.md) feature in VM insights. Change Analysis is in public preview and not yet included in this scenario.
-
-For different options to enable the Change Tracking solution on your virtual machines, see [Enable Change Tracking and Inventory](../../automation/change-tracking/overview.md#enable-change-tracking-and-inventory). This solution includes methods to configure virtual machines at scale. You'll have to [create an Azure Automation account](../../automation/quickstarts/create-azure-automation-account-portal.md) to support the solution.
-
-When you enable Change Tracking and Inventory, two new tables are created in your Log Analytics workspace. Use these tables for log query alert rules.
-
-| Table | Description |
-|:|:|
-| [ConfigurationChange](/azure/azure-monitor/reference/tables/configurationchange) | Changes to in-guest configuration data |
-| [ConfigurationData](/azure/azure-monitor/reference/tables/configurationdata) | Last reported state for in-guest configuration data |
--
-### Sample log queries
--- **List all services and daemons that have recently started.**
-
- ```kusto
- ConfigurationChange
- | where ConfigChangeType == "Daemons" or ConfigChangeType == "WindowsServices"
- | where SvcState == "Running"
- | sort by Computer, SvcName
- ```
-
-### Sample alert rules
--- **Create an alert rule based on when a specific service stops.**-
-
- ```kusto
- ConfigurationData
- | where SvcName == "W3SVC"
- | where SvcState == "Stopped"
- | where ConfigDataType == "WindowsServices"
- | where SvcStartupType == "Auto"
- | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
- ```
--- **Create an alert rule based on when one of a set of services stops.**
-
- ```kusto
 let services = dynamic(["omskd","cshost","schedule","wuauserv","healthservice","efs","wsusservice","SrmSvc","CertSvc","wmsvc","vpxd","winmgmt","netman","smsexec","w3svc","sms_site_vss_writer","ccmexe","spooler","eventsystem","netlogon","kdc","ntds","lsmserv","gpsvc","dns","dfsr","dfs","dhcp","DNSCache","dmserver","messenger","w32time","plugplay","rpcss","lanmanserver","lmhosts","eventlog","lanmanworkstation","wnirm","mpssvc","dhcpserver","VSS","ClusSvc","MSExchangeTransport","MSExchangeIS"]);
- ConfigurationData
- | where ConfigDataType == "WindowsServices"
- | where SvcStartupType == "Auto"
- | where SvcName in (services)
- | where SvcState == "Stopped"
- | project TimeGenerated, Computer, SvcName, SvcDisplayName, SvcState
- | summarize AggregatedValue = count() by Computer, SvcName, SvcDisplayName, SvcState, bin(TimeGenerated, 15m)
- ```
-
-## Port monitoring
-Port monitoring verifies that a machine is listening on a particular port. Two potential strategies for port monitoring are described here.
-
-### Dependency agent tables
-Use [VMConnection](/azure/azure-monitor/reference/tables/vmconnection) and [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) to analyze connections and ports on the machine. The VMBoundPort table is updated every minute with each process running on the computer and the port it's listening on. You can create a log query alert similar to the missing heartbeat alert to find processes that have stopped or to alert when the machine isn't listening on a particular port.
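-
-For example, here's a hedged sketch of a log query for detecting that a specific machine has stopped listening on a port. The computer name and port are hypothetical; configure the alert rule to fire when the query returns no records over the evaluation period.
-
-```kusto
-VMBoundPort
-| where Computer == "my-computer" // hypothetical machine name
-| where Port == 443 // hypothetical port
-| summarize AggregatedValue = count() by bin(TimeGenerated, 15m)
-```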
-
-### Sample log queries
--- **Review the count of ports open on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**-
- ```kusto
- VMBoundPort
- | where Ip != "127.0.0.1"
- | summarize by Computer, Machine, Port, Protocol
- | summarize OpenPorts=count() by Computer, Machine
- | order by OpenPorts desc
- ```
--- **List the bound ports on your VMs, which is useful for assessing which VMs have configuration and security vulnerabilities.**-
- ```kusto
- VMBoundPort
- | distinct Computer, Port, ProcessName
- ```
---- **Analyze network activity by port to determine how your application or service is configured.**-
- ```kusto
- VMBoundPort
- | where Ip != "127.0.0.1"
- | summarize BytesSent=sum(BytesSent), BytesReceived=sum(BytesReceived), LinksEstablished=sum(LinksEstablished), LinksTerminated=sum(LinksTerminated), arg_max(TimeGenerated, LinksLive) by Machine, Computer, ProcessName, Ip, Port, IsWildcardBind
- | project-away TimeGenerated
- | order by Machine, Computer, Port, Ip, ProcessName
- ```
--- **Review bytes sent and received trends for your VMs.**-
- ```kusto
- VMConnection
- | summarize sum(BytesSent), sum(BytesReceived) by bin(TimeGenerated, 1h), Computer
- | order by Computer desc
- | render timechart
- ```
--- **Use connection failures over time to determine if the failure rate is stable or changing.**-
- ```kusto
- VMConnection
- | where Computer == <replace this with a computer name, e.g. 'acme-demo'>
- | extend bythehour = datetime_part("hour", TimeGenerated)
- | project bythehour, LinksFailed
- | summarize failCount = count() by bythehour
- | sort by bythehour asc
- | render timechart
- ```
--- **Link status trends to analyze the behavior and connection status of a machine.**-
- ```kusto
- VMConnection
- | where Computer == <replace this with a computer name, e.g. 'acme-demo'>
- | summarize dcount(LinksEstablished), dcount(LinksLive), dcount(LinksFailed), dcount(LinksTerminated) by bin(TimeGenerated, 1h)
- | render timechart
- ```
-
-### Connection Monitor
-The [Connection Monitor](../../network-watcher/connection-monitor-overview.md) feature of [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) is used to test connections to a port on a virtual machine. A test verifies that the machine is listening on the port and that it's accessible on the network.
-Connection Monitor requires the Network Watcher extension on the client machine initiating the test. It doesn't need to be installed on the machine being tested. For details, see [Tutorial - Monitor network communication using the Azure portal](../../network-watcher/connection-monitor.md).
-
-There's an extra cost for Connection Monitor. For details, see [Network Watcher pricing](https://azure.microsoft.com/pricing/details/network-watcher/).
-
-## Run a process on a local machine
-Monitoring of some workloads requires a local process. An example is a PowerShell script that runs on the local machine to connect to an application and collect or process data. You can use [Hybrid Runbook Worker](../../automation/automation-hybrid-runbook-worker.md), which is part of [Azure Automation](../../automation/automation-intro.md), to run a local PowerShell script. There's no direct charge for Hybrid Runbook Worker, but there is a cost for the runbook jobs that it runs.
-
-The runbook can access any resources on the local machine to gather required data. It can't send data directly to Azure Monitor or create an alert. To create an alert, have the runbook write an entry to a custom log and then configure that log to be collected by Azure Monitor. Create a log query alert rule that fires on that log entry.
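-
-As an illustration, the alert query might look like the following sketch. The custom table and column names are hypothetical; yours are defined when you configure the custom log.
-
-```kusto
-MyRunbookResults_CL // hypothetical custom log table written by the runbook
-| where Status_s == "Error"
-| summarize AggregatedValue = count() by Computer, bin(TimeGenerated, 15m)
-```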
-
-## Synthetic transactions
-A synthetic transaction connects to an application or service running on a machine to simulate a user connection or actual user traffic. If the application is available, you can assume that the machine is running properly. [Application Insights](../app/app-insights-overview.md) in Azure Monitor provides this functionality. It only works for applications that are accessible from the internet. For internal applications, you must open a firewall to allow access from the specific Microsoft URLs that perform the test. Or you can use an alternative monitoring solution, such as System Center Operations Manager.
-
-|Method | Description |
-|:|:|
-| [URL test](../app/monitor-web-app-availability.md) | Ensures that HTTP is available and returning a web page |
-| [Multistep test](../app/availability-multistep.md) | Simulates a user session |
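-
-If you use a workspace-based Application Insights resource, availability test results are stored in the AppAvailabilityResults table in the workspace, so you can query test failures alongside your VM data. The following is a hedged sketch; verify the table and column names in your workspace.
-
-```kusto
-AppAvailabilityResults
-| where TimeGenerated > ago(1h)
-| summarize FailedTests = countif(Success == false) by Name, Location
-| where FailedTests > 0
-```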
-
-## SQL Server
-
-Use [SQL Insights (preview)](/azure/azure-sql/database/sql-insights-overview) to monitor SQL Server running on your virtual machines.
-
-## Next steps
-
-* [Learn how to analyze data in Azure Monitor logs using log queries](../logs/get-started-queries.md)
-* [Learn about alerts using metrics and logs in Azure Monitor](../alerts/alerts-overview.md)
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
Previously updated : 06/02/2021 Last updated : 01/05/2023 # Monitor virtual machines with Azure Monitor
-This scenario describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads. It includes collection of telemetry critical for monitoring and analysis and visualization of collected data to identify trends. It also shows you how to configure alerting to be proactively notified of critical issues.
+This guide describes how to use Azure Monitor to monitor the health and performance of virtual machines and their workloads. It includes collecting telemetry that's critical for monitoring, and analyzing and visualizing the collected data to identify trends. It also shows you how to configure alerting to be proactively notified of critical issues.
> [!NOTE]
-> This scenario describes how to implement complete monitoring of your Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
-
-This article introduces the scenario and provides general concepts for monitoring virtual machines in Azure Monitor. If you want to jump right into a specific area, see one of the other articles that are part of this scenario described in the following table.
-
-| Article | Description |
-|:|:|
-| [Enable monitoring](monitor-virtual-machine-configure.md) | Configure Azure Monitor to monitor virtual machines, which includes enabling VM insights and enabling each virtual machine for monitoring. |
-| [Analyze](monitor-virtual-machine-analyze.md) | Analyze monitoring data collected by Azure Monitor from virtual machines and their guest operating systems and applications to identify trends and critical information. |
-| [Alerts](monitor-virtual-machine-alerts.md) | Create alerts to proactively identify critical issues in your monitoring data. |
-| [Monitor security](monitor-virtual-machine-security.md) | Discover Azure services for monitoring security of virtual machines. |
-| [Monitor workloads](monitor-virtual-machine-workloads.md) | Monitor applications and other workloads running on your virtual machines. |
-
-> [!IMPORTANT]
-> This scenario doesn't include features that aren't generally available. Features in public preview, such as [virtual machine guest health](vminsights-health-overview.md), have the potential to significantly modify the recommendations made here. The scenario will be updated as preview features move into general availability.
+> This scenario describes how to implement complete monitoring of your enterprise Azure and hybrid virtual machine environment. To get started monitoring your first Azure virtual machine, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md).
## Types of machines
-This scenario includes monitoring of the following types of machines using Azure Monitor. Many of the processes described here are the same regardless of the type of machine. Considerations for different types of machines are clearly identified where appropriate. The types of machines include:
+This guide includes monitoring of the following types of machines using Azure Monitor. Many of the processes described here are the same regardless of the type of machine. Considerations for different types of machines are clearly identified where appropriate. The types of machines include:
- Azure virtual machines.-- Azure virtual machine scale sets.
+- Azure Virtual Machine Scale Sets.
- Hybrid machines, which are virtual machines running in other clouds, with a managed service provider, or on-premises. They also include physical machines running on-premises. ## Layers of monitoring There are fundamentally four layers to a virtual machine that require monitoring. Each layer has a distinct set of telemetry and monitoring requirements. + | Layer | Description | |:|:|
-| Virtual machine host | The host virtual machine in Azure. Azure Monitor has no access to the host in other clouds but must rely on information collected from the guest operating system. The host can be useful for tracking activity such as configuration changes, but typically isn't used for significant alerting. |
-| Guest operating system | The operating system running on the virtual machine, which is some version of either Windows or Linux. A significant amount of monitoring data is available from the guest operating system, such as performance data and events. VM insights in Azure Monitor provides a significant amount of logic for monitoring the health and performance of the guest operating system. |
-| Workloads | Workloads running in the guest operating system that support your business applications. Azure Monitor provides predefined monitoring for some workloads. You typically need to configure data collection and alerting for other workloads by using monitoring data that they generate. |
-| Application | The business application that depends on your virtual machines can be monitored by using [Application Insights](../app/app-insights-overview.md).
+| Virtual machine host | The host virtual machine in Azure. Azure Monitor has no access to the host in other clouds but must rely on information collected from the guest operating system. The host can be useful for tracking activity such as configuration changes, and basic alerting such as processor utilization and whether the machine is running. |
+| Guest operating system | The operating system running on the virtual machine, which is some version of either Windows or Linux. A significant amount of monitoring data is available from the guest operating system, such as performance data and events. You must install Azure Monitor agent to retrieve this telemetry. |
+| Workloads | Workloads running in the guest operating system that support your business applications. These typically generate performance data and events similar to those from the operating system, which you can retrieve. You must install Azure Monitor agent to retrieve this telemetry. |
+| Application | The business application that depends on your virtual machines. This will typically be monitored by Application Insights. |
+## Configuration steps
+The following table lists the steps to configure VM monitoring. Each step links to an article with a detailed description of that configuration step.
-## VM insights
+| Step | Description |
+|:|:|
+| [Deploy Azure Monitor agent](monitor-virtual-machine-agent.md) | Deploy the Azure Monitor agent to your Azure and hybrid virtual machines to collect data from the guest operating system and workloads. |
+| [Configure data collection](monitor-virtual-machine-data-collection.md) | Create data collection rules to instruct the Azure Monitor agent to collect telemetry from the guest operating system. |
+| [Analyze collected data](monitor-virtual-machine-analyze.md) | Analyze monitoring data collected by Azure Monitor from virtual machines and their guest operating systems and applications to identify trends and critical information. |
+| [Create alert rules](monitor-virtual-machine-alerts.md) | Create alerts to proactively identify critical issues in your monitoring data. |
+| [Migrate management pack logic](monitor-virtual-machine-management-packs.md) | General guidance for translating the logic from your System Center Operations Manager management packs to Azure Monitor. |
-This scenario focuses on [VM insights](../vm/vminsights-overview.md), which is the primary feature in Azure Monitor for monitoring virtual machines. VM insights provides the following features:
+
+## VM insights
+[VM insights](../vm/vminsights-overview.md) is a feature in Azure Monitor that allows you to quickly get started monitoring your virtual machines. While it's not required to take advantage of most Azure Monitor features for monitoring your VMs, it provides the following value:
-- Simplified onboarding of agents to enable monitoring of a virtual machine guest operating system and workloads.
+- Simplified onboarding of the Azure Monitor agent to enable monitoring of a virtual machine guest operating system and workloads.
+- Preconfigured data collection rule that collects the most common set of performance counters for Windows and Linux.
- Predefined trending performance charts and workbooks that you can use to analyze core performance metrics from the virtual machine's guest operating system.-- Dependency map that displays processes running on each virtual machine and the interconnected components with other machines and external sources.
+- Optional collection of details for each virtual machine, the processes running on it, and dependencies with other services.
+- Optional dependency map that displays interconnected components with other machines and external sources.
-## Agents
+The articles in this guide show you how to configure VM insights and use the data it collects with other Azure Monitor features. They also identify alternatives if you choose not to use VM insights.
-Any monitoring tool, such as Azure Monitor, requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor currently has multiple agents that collect different data, send data to different locations, and support different features. VM insights manages the deployment and configuration of the agents that most customers will use.
-Different agents are described in the following table in case you require the particular scenarios that they support. For a detailed description and comparison of the different agents, see [Overview of Azure Monitor agents](../agents/agents-overview.md).
+## Security monitoring
+Azure Monitor focuses on operational data like Activity logs, Metrics, and Log Analytics-supported sources, including Windows Events (excluding security events), performance counters, logs, and Syslog. Security monitoring in Azure is performed by [Microsoft Defender for Cloud](/azure/defender-for-cloud/) and [Microsoft Sentinel](/azure/sentinel/). Configuration of these services isn't included in this guide.
-> [!NOTE]
-> The Azure Monitor agent will completely replace the Log Analytics agent, Azure Diagnostics extension, and Telegraf agent after it gains required functionality. These other agents are still required for features such as VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel.
+> [!IMPORTANT]
+> The security services have their own cost independent of Azure Monitor. Before you configure these services, refer to their pricing information to determine your appropriate investment in their usage.
++
+### Integration with Azure Monitor
+The following table lists the integration points for Azure Monitor with the security services. All the services use the same Azure Monitor agent, which reduces complexity because there are no other components being deployed to your virtual machines. Defender for Cloud and Microsoft Sentinel store their data in a Log Analytics workspace so that you can use log queries to correlate data collected by the different services. Or you can create a custom workbook that combines security data and availability and performance data in a single view.
+
+See [Design a Log Analytics workspace architecture](../logs/workspace-design.md) for guidance on the most effective workspace design for your requirements, taking into account all the services that use them.
+
+| Integration point | Azure Monitor | Microsoft Defender for Cloud | Microsoft Sentinel | Defender for Endpoint |
+|:|::|::|::|::|
+| Collects security events | | X | X | X |
+| Stores data in Log Analytics workspace | X | X | X | |
+| Uses Azure Monitor agent | X | X | X | X |
+
+> [!IMPORTANT]
+> Azure Monitor agent is in preview for some service features. See [Supported services and features](../agents/agents-overview.md#supported-services-and-features) for current details.
-| Agent | Description |
-|:|:|
-| [Azure Monitor agent](../agents/agents-overview.md) | Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Metrics and Logs. When it fully supports VM insights, Microsoft Defender for Cloud, and Microsoft Sentinel, it will completely replace the Log Analytics agent and Azure Diagnostics extension. |
-| [Log Analytics agent](../agents/log-analytics-agent.md) | Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. Supports VM insights and monitoring solutions. This agent is the same agent used for System Center Operations Manager. |
-| [Dependency agent](vminsights-dependency-agent-maintenance.md) | Collects data about the processes running on the virtual machine and their dependencies. Relies on the Log Analytics agent to transmit data into Azure and supports VM insights, Service Map, and Wire Data 2.0 solutions. |
-| [Azure Diagnostics extension](../agents/diagnostics-extension-overview.md) | Available for Azure virtual machines only. Can send data to Azure Event Hubs and Azure Storage.
## Next steps
-[Analyze monitoring data collected for virtual machines](monitor-virtual-machine-analyze.md)
+[Deploy the Azure Monitor agent to your virtual machines](monitor-virtual-machine-agent.md)
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
Visualizations|[Azure Workbooks](./visualize/workbooks-overview.md)|New video to
| Article | Description | ||| |[Azure Application Insights Overview Dashboard](app/overview-dashboard.md)|Important information has been added clarifying that moving or renaming resources will break dashboards, with additional instructions on how to resolve this scenario.|
-|[Azure Application Insights override default SDK endpoints](app/create-new-resource.md#application-insights-overriding-default-endpoints)|We've clarified that endpoint modification isn't recommended and to use connection strings instead.|
+|[Azure Application Insights override default SDK endpoints](app/create-new-resource.md#override-default-endpoints)|We've clarified that endpoint modification isn't recommended and to use connection strings instead.|
|[Continuous export of telemetry from Application Insights](app/export-telemetry.md)|Important information has been added about avoiding duplicates when saving diagnostic logs in a Log Analytics workspace.| |[Dependency Tracking in Azure Application Insights with OpenCensus Python](app/opencensus-python-dependency.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.| |[Incoming Request Tracking in Azure Application Insights with OpenCensus Python](app/opencensus-python-request.md)|Updated Django sample application and documentation in the Azure Monitor OpenCensus Python samples repository.|
azure-netapp-files Azure Netapp Files Quickstart Set Up Account Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-quickstart-set-up-account-create-volumes.md
This how-to article requires the Azure PowerShell module Az version 2.6.0 or lat
Prepare your environment for the Azure CLI. [!INCLUDE [azure-netapp-files-cloudshell-include](../../includes/azure-netapp-files-azure-cloud-shell-window.md)]
azure-netapp-files Manage Availability Zone Volume Placement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-availability-zone-volume-placement.md
na Previously updated : 12/15/2022 Last updated : 01/13/2023 # Manage availability zone volume placement for Azure NetApp Files
You can also use [Azure CLI commands](/cli/azure/feature) `az feature register`
[ ![Screenshot that shows the Availability Zone review.](../media/azure-netapp-files/availability-zone-display-down.png) ](../media/azure-netapp-files/availability-zone-display-down.png#lightbox)
-4. After you create the volume, the **Volume Overview** page includes availability zone information for the volume.
+4. Navigate to **Properties** to confirm your availability zone configuration.
- [ ![Screenshot that shows the Availability Zone volume overview.](../media/azure-netapp-files/availability-zone-volume-overview.png) ](../media/azure-netapp-files/availability-zone-volume-overview.png#lightbox)
+ :::image type="content" source="../media/azure-netapp-files/availability-zone-volume-overview.png" alt-text="Screenshot of volume properties interface." lightbox="../media/azure-netapp-files/availability-zone-volume-overview.png":::
## Next steps
azure-portal Azure Portal Dashboards Create Programmatically https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-dashboards-create-programmatically.md
Now that you've seen an example of using a parameterized template to deploy a da
Prepare your environment for the Azure CLI. - These examples use the following dashboard: [portal-dashboard-template-testvm.json](https://raw.githubusercontent.com/Azure/azure-docs-powershell-samples/master/azure-portal/portal-dashboard-template-testvm.json). Be sure to replace all of the content in angled brackets with your values.
azure-portal Quickstart Portal Dashboard Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/quickstart-portal-dashboard-azure-cli.md
A dashboard in the Azure portal is a focused and organized view of your cloud re
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - If you have multiple Azure subscriptions, choose the appropriate subscription in which to bill the resources. Select a subscription by using the [az account set](/cli/azure/account#az-account-set) command:
azure-resource-manager Create Custom Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/create-custom-provider.md
In this quickstart, you create a custom resource provider and deploy custom reso
Prepare your environment for the Azure CLI. Azure CLI examples use `az rest` for `REST` requests. For more information, see [az rest](/cli/azure/reference-index#az-rest).
azure-resource-manager Managed Application Define Create Cli Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/scripts/managed-application-define-create-cli-sample.md
This script publishes a managed application definition to a service catalog and
[!INCLUDE [quickstarts-free-trial-note](../../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-signalr Signalr Cli Create Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-service.md
This sample script creates a new Azure SignalR Service resource in a new resourc
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-signalr Signalr Cli Create With App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/scripts/signalr-cli-create-with-app-service.md
This sample script creates a new Azure SignalR Service resource, which is used t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
azure-signalr Signalr Concept Authenticate Oauth https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-authenticate-oauth.md
In this section, you will turn on real authentication by adding the `Authorize`
Prepare your environment for the Azure CLI: In this section, you will use the Azure CLI to create a new web app in [Azure App Service](../app-service/index.yml) to host your ASP.NET application in Azure. The web app will be configured to use local Git deployment. The web app will also be configured with your SignalR connection string, GitHub OAuth app secrets, and a deployment user.
azure-signalr Signalr Howto Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-howto-event-grid-integration.md
Azure Event Grid is a fully managed event routing service that provides uniform
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - The Azure CLI commands in this article are formatted for the **Bash** shell. If you're using a different shell like PowerShell or Command Prompt, you may need to adjust line continuation characters or variable assignment lines accordingly. This article uses variables to minimize the amount of command editing required.
azure-vmware Attach Azure Netapp Files To Azure Vmware Solution Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/attach-azure-netapp-files-to-azure-vmware-solution-hosts.md
Title: Attach Azure NetApp Files datastores to Azure VMware Solution hosts
description: Learn how to create Azure NetApp Files-based NFS datastores for Azure VMware Solution hosts. Previously updated : 01/09/2023 Last updated : 01/13/2023
The following diagram demonstrates a typical architecture of Azure NetApp Files
Before you begin the prerequisites, review the [Performance best practices](#performance-best-practices) section to learn about optimal performance of NFS datastores on Azure NetApp Files volumes.
-1. [Deploy Azure VMware Solution](./deploy-azure-vmware-solution.md) private cloud and a dedicated virtual network connected via ExpressRoute gateway. The virtual network gateway should be configured with the Ultra performance SKU and have FastPath enabled. For more information, see [Configure networking for your VMware private cloud](tutorial-configure-networking.md) and [Network planning checklist](tutorial-network-checklist.md).
+1. [Deploy an Azure VMware Solution private cloud](./deploy-azure-vmware-solution.md) and a dedicated virtual network connected via an ExpressRoute gateway. The virtual network gateway should be configured with the Ultra performance or ErGw3Az SKU and have FastPath enabled. For more information, see [Configure networking for your VMware private cloud](tutorial-configure-networking.md) and [Network planning checklist](tutorial-network-checklist.md).
1. Create an [NFSv3 volume for Azure NetApp Files](../azure-netapp-files/azure-netapp-files-create-volumes.md) in the same virtual network created in the previous step. 1. Verify connectivity from the private cloud to Azure NetApp Files volume by pinging the attached target IP. 2. Verify the subscription is registered to the `ANFAvsDataStore` feature in the `Microsoft.NetApp` namespace. If the subscription isn't registered, register it now.
azure-vmware Concepts Private Clouds Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-private-clouds-clusters.md
Title: Concepts - Private clouds and clusters
description: Learn about the key capabilities of Azure VMware Solution software-defined data centers and VMware vSphere clusters. Previously updated : 10/25/2022 Last updated : 1/10/2023
The diagram shows a single Azure subscription with two private clouds that repre
## Host monitoring and remediation
-Azure VMware Solution continuously monitors the health of both the underlay and the VMware components. When Azure VMware Solution detects a failure, it takes action to repair the failed components. When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process.
+Azure VMware Solution continuously monitors the health of both the VMware components and the underlay. When Azure VMware Solution detects a failure, it takes action to repair the failed components. When Azure VMware Solution detects a degradation or failure on an Azure VMware Solution node, it triggers the host remediation process.
-Host remediation involves replacing the faulty node with a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion moves the VMs off the faulty host to other available servers in the cluster, potentially allowing zero downtime for live migration of workloads. If the faulty host can't be placed in maintenance mode, the host is removed from the cluster.
+Host remediation involves replacing the faulty node with a new healthy node in the cluster. Then, when possible, the faulty host is placed in VMware vSphere maintenance mode. VMware vMotion moves the VMs off the faulty host to other available servers in the cluster, potentially allowing zero downtime for live migration of workloads. If the faulty host can't be placed in maintenance mode, the host is removed from the cluster. Before the faulty host is removed, the customer workloads will be migrated to a newly added host.
+
+> [!TIP]
+> **Customer communication:** An email is sent to the customer's email address before the replacement is initiated and again after the replacement is successful.
+>
+> To receive emails related to host replacement, you need to be added to any of the following Azure RBAC roles in the subscription: 'ServiceAdmin', 'CoAdmin', 'Owner', 'Contributor'.
Azure VMware Solution monitors the following conditions on the host:
Azure VMware Solution monitors the following conditions on the host:
- Connection failure > [!NOTE]
-> Azure VMware Solution tenant admins must not edit or delete the above defined VMware vCenter Server alarms, as these are managed by the Azure VMware Solution control plane on vCenter Server. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
+> Azure VMware Solution tenant admins must not edit or delete the previously defined VMware vCenter Server alarms because they are managed by the Azure VMware Solution control plane on vCenter Server. These alarms are used by Azure VMware Solution monitoring to trigger the Azure VMware Solution host remediation process.
## Backup and restoration
azure-vmware Rotate Cloudadmin Credentials https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/rotate-cloudadmin-credentials.md
Instead of using the cloudadmin user to connect services to vCenter Server, we r
To begin using Azure CLI: 1. In your Azure VMware Solution private cloud, open an Azure Cloud Shell session.
azure-vmware Tutorial Configure Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/tutorial-configure-networking.md
description: Learn to create and configure the networking needed to deploy your
Previously updated : 05/31/2022 Last updated : 01/13/2023
Now that you've created a virtual network, you'll create a virtual network gatew
| **Name** | Enter a unique name for the virtual network gateway. | | **Region** | Select the geographical location of the virtual network gateway. | | **Gateway type** | Select **ExpressRoute**. |
- | **SKU** | Leave the default value: **standard**. |
+ | **SKU** | Select the gateway SKU appropriate for your workload. <br> For Azure NetApp Files datastores, select UltraPerformance or ErGw3Az. |
| **Virtual network** | Select the virtual network you created previously. If you don't see the virtual network, make sure the gateway's region matches the region of your virtual network. | | **Gateway subnet address range** | This value is populated when you select the virtual network. Don't change the default value. | | **Public IP address** | Select **Create new**. |
azure-web-pubsub Quickstart Cli Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-cli-create.md
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-web-pubsub Quickstart Cli Try https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-cli-try.md
This quickstart shows you how to connect to the Azure Web PubSub instance and pu
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-web-pubsub Quickstart Use Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/quickstart-use-sdk.md
This quickstart shows you how to publish messages to the clients using service S
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-web-pubsub Tutorial Build Chat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-build-chat.md
In this tutorial, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This setup requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-web-pubsub Tutorial Pub Sub Messages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-pub-sub-messages.md
In this tutorial, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This setup requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
azure-web-pubsub Tutorial Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/tutorial-subprotocol.md
In this tutorial, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This setup requires version 2.22.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
backup Backup Afs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-afs-cli.md
By the end of this tutorial, you'll learn how to perform the operations below wi
* Enable backup for Azure file shares * Trigger an on-demand backup for file shares - This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
backup Backup Azure Sql Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-sql-backup-cli.md
In this article, you'll learn how to:
See the [currently supported scenarios](sql-support-matrix.md) for SQL in Azure VM. ## Create a Recovery Services vault
backup Encryption At Rest With Cmk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk.md
Title: Encryption of backup data using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK). Previously updated : 11/24/2022 Last updated : 01/13/2023 -+ -+ # Encryption of backup data using customer-managed keys
To assign the key and follow the steps, choose a client:
1. Enter the **Key URI** with which you want to encrypt the data in this Recovery Services vault. You also need to specify the subscription in which the Azure Key Vault (that contains this key) is present. This key URI can be obtained from the corresponding key in your Azure Key Vault. Ensure the key URI is copied correctly. It's recommended that you use the **Copy to clipboard** button provided with the key identifier. >[!NOTE]
- >When specifying the encryption key using the Key URI, the key will not be auto-rotated. So key updates will need to be done manually, by specifying the new key when required.
+ >When specifying the encryption key using the full Key URI, the key will not be auto-rotated, and you need to perform key updates manually by specifying the new key when required. Alternatively, remove the Version component of the Key URI to get automatic rotation.
![Enter key URI](./media/encryption-at-rest-with-cmk/key-uri.png)
Using the **Select from Key Vault** option helps to enable auto-rotation for the
Azure Backup allows you to use Azure Polices to audit and enforce encryption, using customer-managed keys, of data in the Recovery Services vault. Using the Azure Policies: - The audit policy can be used for auditing vaults with encryption using customer-managed keys that are enabled after 04/01/2021. For vaults with the CMK encryption enabled before this date, the policy may fail to apply or may show false negative results (that is, these vaults may be reported as non-compliant, despite having **CMK encryption** enabled).-- To use the audit policy for auditing vaults with **CMK encryption** enabled before 04/01/2021, use the Azure portal to update an encryption key. This helps to upgrade to the new model. If you do not want to change the encryption key, provide the same key again through the key URI or the key selection option.
+- To use the audit policy for auditing vaults with **CMK encryption** enabled before 04/01/2021, use the Azure portal to update an encryption key. This helps to upgrade to the new model. If you don't want to change the encryption key, provide the same key again through the key URI or the key selection option.
>[!Warning] >If you are using PowerShell for managing encryption keys for Backup, we do not recommend to update the keys from the portal.<br>If you update the key from the portal, you can't use PowerShell to update the encryption key further, till a PowerShell update to support the new model is available. However, you can continue updating the key from the Azure portal.
No, CMK encryption can be enabled for new vaults only. So the vault must never h
### I tried to protect an item to my vault, but it failed, and the vault still doesn't contain any items protected to it. Can I enable CMK encryption for this vault?
-No, the vault must have not had any attempts to protect any items to it in the past.
+No, the vault must not have had any attempts to protect any items to it in the past.
### I have a vault that's using CMK encryption. Can I later revert to encryption using platform-managed keys even if I have backup items protected to the vault?
backup Manage Afs Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-afs-backup-cli.md
This article assumes you already have an Azure file share backed up by [Azure Ba
- **Storage Account**: *afsaccount* - **File Share**: *azurefiles*
- [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
- This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. ## Monitor jobs
backup Quick Backup Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-backup-vm-cli.md
The Azure CLI is used to create and manage Azure resources from the command line
This quickstart enables backup on an existing Azure VM. If you need to create a VM, you can [create a VM with the Azure CLI](../virtual-machines/linux/quick-create-cli.md). - This quickstart requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
backup Restore Afs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-afs-cli.md
This article assumes that you already have an Azure file share that's backed up
You can use a similar structure for your file shares to try out the different types of restores explained in this article. - This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
backup Tutorial Restore Disk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-disk.md
For information on using PowerShell to restore a disk and create a recovered VM,
Now, you can also use CLI to directly restore the backup content to a VM (original/new), without performing the above steps separately. For more information, see [Restore data to virtual machine using CLI](#restore-data-to-virtual-machine-using-cli). - This tutorial requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
backup Tutorial Restore Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-restore-files.md
This tutorial requires a Linux VM that has been protected with Azure Backup. To
Prepare your environment: - This article requires version 2.0.18 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
backup Tutorial Sap Hana Backup Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-sap-hana-backup-cli.md
This document assumes that you already have an SAP HANA database installed on an
Check out the [scenarios that we currently support](./sap-hana-backup-support-matrix.md#scenario-support) for SAP HANA. - This tutorial requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
The [az backup job list](/cli/azure/backup/job#az-backup-job-list) cmdlet lists
## Get the container name
-To get container name, run the following command. [Learn about this CLI command](/cli/azure/backup/container?view=azure-cli-latest#az-backup-container-list).
+To get the container name, run the following command. [Learn about this CLI command](/cli/azure/backup/container#az-backup-container-list).
```azurecli az backup item list --resource-group <resource group name> --vault-name <vault name>
baremetal-infrastructure Connect Baremetal Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/baremetal-infrastructure/connect-baremetal-infrastructure.md
You'll need to list your subscription in the Azure portal and then double-click
To begin using Azure CLI: Sign in to the Azure subscription you use for the BareMetal instance deployment through the Azure CLI. Register the `BareMetalInfrastructure` resource provider with the [az provider register](/cli/azure/provider#az-provider-register) command:
batch Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/quick-create-cli.md
The Azure CLI is used to create and manage Azure resources from the command line
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.20 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
batch Batch Cli Sample Add Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-add-application.md
This script demonstrates how to add an application for use with an Azure Batch p
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
batch Batch Cli Sample Create Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-account.md
service. Allocated compute nodes are subject to a separate vCPU (core) quota and
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
batch Batch Cli Sample Create User Subscription Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-create-user-subscription-account.md
This script creates an Azure Batch account in user subscription mode. An account
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
batch Batch Cli Sample Manage Linux Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-linux-pool.md
This script demonstrates some of the commands available in the Azure CLI to crea
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
batch Batch Cli Sample Manage Windows Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-manage-windows-pool.md
manage a pool of Windows compute nodes in Azure Batch. A Windows pool can be con
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
batch Batch Cli Sample Run Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/scripts/batch-cli-sample-run-job.md
This script creates a Batch job and adds a series of tasks to the job. It also d
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
cdn Cdn Azure Cli Create Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/scripts/cli/cdn-azure-cli-create-endpoint.md
As an alternative to the Azure portal, you can use these sample Azure CLI script
- Create a CDN origin. - Create a custom domain and enable HTTPS. ## Sample scripts
cognitive-services How To Migrate To Prebuilt Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-migrate-to-prebuilt-neural-voice.md
The prebuilt neural voice provides more natural sounding speech output, and thus
## Action required > [!TIP]
-> Even without an Azure account, you can listen to voice samples at this [Azure website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs.
+> Even without an Azure account, you can listen to voice samples at the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
-1. Review the [price](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) structure and listen to the neural voice [samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) at the bottom of that page to determine the right voice for your business needs.
+1. Review the [price](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) structure.
2. To make the change, [follow the sample code](speech-synthesis-markup-voice.md#voice-element) to update the voice name in your speech synthesis request to the supported neural voice names in chosen languages. Use neural voices for your speech synthesis request, on cloud or on prem. For on-premises container, use the [neural voice containers](../containers/container-image-tags.md) and follow the [instructions](speech-container-howto.md). ## Standard voice details (deprecated)
cognitive-services How To Pronunciation Assessment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-pronunciation-assessment.md
This table lists some of the key configuration parameters for pronunciation asse
| Parameter | Description | |--|-| | `ReferenceText` | The text that the pronunciation will be evaluated against. |
-| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. |
-| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels above or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, and it depends on your input reference text.|
+| `GradingSystem` | The point system for score calibration. The `FivePoint` system gives a 0-5 floating point score, and `HundredMark` gives a 0-100 floating point score. Default: `FivePoint`. |
+| `Granularity` | Determines the lowest level of evaluation granularity. Scores for levels above or equal to the minimal value are returned. Accepted values are `Phoneme`, which shows the score on the full text, word, syllable, and phoneme level, `Syllable`, which shows the score on the full text, word, and syllable level, `Word`, which shows the score on the full text and word level, or `FullText`, which shows the score on the full text level only. The provided full reference text can be a word, sentence, or paragraph, depending on your input reference text. Default: `Phoneme`. |
| `EnableMiscue` | Enables miscue calculation when the pronounced words are compared to the reference text. If this value is `True`, the `ErrorType` result value can be set to `Omission` or `Insertion` based on the comparison. Accepted values are `False` and `True`. Default: `False`. | | `ScenarioId` | A GUID indicating a customized point system. |
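To see how these parameters map onto code, here's a minimal sketch using the Python Speech SDK (`azure-cognitiveservices-speech`). The subscription key, region, audio file, and reference text are placeholders; the enum and property names follow the SDK's `PronunciationAssessmentConfig`.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials and audio file; replace with your own values.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
audio_config = speechsdk.audio.AudioConfig(filename="pronunciation_sample.wav")

# Map the table's parameters onto the SDK configuration object.
pronunciation_config = speechsdk.PronunciationAssessmentConfig(
    reference_text="Good morning.",
    grading_system=speechsdk.PronunciationAssessmentGradingSystem.HundredMark,  # 0-100 score
    granularity=speechsdk.PronunciationAssessmentGranularity.Phoneme,  # lowest level returned
    enable_miscue=True)  # allows Omission/Insertion error types

recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
pronunciation_config.apply_to(recognizer)

result = recognizer.recognize_once()
assessment = speechsdk.PronunciationAssessmentResult(result)
print(f"Accuracy: {assessment.accuracy_score}, Fluency: {assessment.fluency_score}")
```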
cognitive-services Language Identification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-identification.md
Previously updated : 09/16/2022 Last updated : 01/12/2023 zone_pivot_groups: programming-languages-speech-services-nomore-variant
Language identification is used to identify languages spoken in audio when compa
Language identification (LID) use cases include:
-* [Standalone language identification](#standalone-language-identification) when you only need to identify the language in an audio source.
* [Speech-to-text recognition](#speech-to-text) when you need to identify the language in an audio source and then transcribe it to text. * [Speech translation](#speech-translation) when you need to identify the language in an audio source and then translate it to another language.
Note that for speech recognition, the initial latency is higher with language id
## Configuration options
-Whether you use language identification [on its own](#standalone-language-identification), with [speech-to-text](#speech-to-text), or with [speech translation](#speech-translation), there are some common concepts and configuration options.
+Whether you use language identification with [speech-to-text](#speech-to-text) or with [speech translation](#speech-translation), there are some common concepts and configuration options.
- Define a list of [candidate languages](#candidate-languages) that you expect in the audio. - Decide whether to use [at-start or continuous](#at-start-and-continuous-language-identification) language identification.
You can choose to prioritize accuracy or latency with language identification.
> [!NOTE] > Latency is prioritized by default with the Speech SDK. You can choose to prioritize accuracy or latency with the Speech SDKs for C#, C++, Java ([for speech to text only](#speech-to-text)), and Python. Prioritize `Latency` if you need a low-latency result such as during live streaming. Set the priority to `Accuracy` if the audio quality may be poor, and more latency is acceptable. For example, a voicemail could have background noise, or some silence at the beginning. Allowing the engine more time will improve language identification results. * **At-start:** With at-start LID in `Latency` mode the result is returned in less than 5 seconds. With at-start LID in `Accuracy` mode the result is returned within 30 seconds. You set the priority for at-start LID with the `SpeechServiceConnection_SingleLanguageIdPriority` property.
-* **Continuous:** With continuous LID in `Latency` mode the results are returned every 2 seconds for the duration of the audio. With continuous LID in `Accuracy` mode the results are returned within no set time frame for the duration of the audio. You set the priority for continuous LID with the `SpeechServiceConnection_ContinuousLanguageIdPriority` property.
+* **Continuous:** With continuous LID in `Latency` mode the results are returned every 2 seconds for the duration of the audio. Continuous LID in `Accuracy` mode isn't supported with [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition.
> [!IMPORTANT]
-> With [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition, do not set `Accuracy`with the SpeechServiceConnection_ContinuousLanguageIdPriority property. The setting will be ignored without error, and the default priority of `Latency` will remain in effect. Only [standalone language identification](#standalone-language-identification) supports continuous LID with `Accuracy` prioritization.
+> With [speech-to-text](#speech-to-text) and [speech translation](#speech-translation) continuous recognition, do not set `Accuracy` with the SpeechServiceConnection_ContinuousLanguageIdPriority property. The setting will be ignored without error, and the default priority of `Latency` will remain in effect.
+
Speech uses at-start LID with `Latency` prioritization by default. You need to set a priority property for any other LID configuration. ::: zone pivot="programming-language-csharp"
Language identification is completed with recognition objects and operations. Yo
> [!NOTE] > Don't confuse recognition with identification. Recognition can be used with or without language identification. Let's map these concepts to the code. You will either call the recognize once method, or the start and stop continuous recognition methods. You choose from: - Recognize once with at-start LID
recognizer.stop_continuous_recognition()
::: zone-end
-## Standalone language identification
-
-You use standalone language identification when you only need to identify the language in an audio source.
-
-> [!NOTE]
-> Standalone source language identification is only supported with the Speech SDKs for C#, C++, and Python.
-
-See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/csharp/sharedcontent/console/standalone_language_detection_samples.cs).
-
-### [Recognize once](#tab/once)
--
-### [Continuous recognition](#tab/continuous)
------
-See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/cpp/windows/console/samples/standalone_language_detection_samples.cpp).
-
-### [Recognize once](#tab/once)
--
-### [Continuous recognition](#tab/continuous)
------
-See more examples of standalone language identification on [GitHub](https://github.com/Azure-Samples/cognitive-services-speech-sdk/blob/master/samples/python/console/speech_language_detection_sample.py).
-
-### [Recognize once](#tab/once)
--
-### [Continuous recognition](#tab/continuous)
----- ## Speech-to-text You use Speech-to-text recognition when you need to identify the language in an audio source and then transcribe it to text. For more information, see [Speech-to-text overview](speech-to-text.md).
var endpointString = $"wss://{region}.stt.speech.microsoft.com/speech/universal/
var endpointUrl = new Uri(endpointString); var config = SpeechConfig.FromEndpoint(endpointUrl, "YourSubscriptionKey");
-// can switch "Latency" to "Accuracy" depending on priority
config.SetProperty(PropertyId.SpeechServiceConnection_ContinuousLanguageIdPriority, "Latency"); var autoDetectSourceLanguageConfig = AutoDetectSourceLanguageConfig.FromLanguages(new string[] { "en-US", "de-DE", "zh-CN" });
result.close();
- ::: zone-end ::: zone pivot="programming-language-python"
translation_config = speechsdk.translation.SpeechTranslationConfig(
target_languages=('de', 'fr')) audio_config = speechsdk.audio.AudioConfig(filename=weatherfilename)
-# Set the Priority (optional, default Latency, either Latency or Accuracy is accepted)
-translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, value='Accuracy')
+# Set the Priority (optional, default Latency)
+translation_config.set_property(property_id=speechsdk.PropertyId.SpeechServiceConnection_SingleLanguageIdPriority, value='Latency')
# Specify the AutoDetectSourceLanguageConfig, which defines the number of possible languages auto_detect_source_language_config = speechsdk.languageconfig.AutoDetectSourceLanguageConfig(languages=["en-US", "de-DE", "zh-CN"])
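To show where these configuration objects end up, here's a hedged continuation of the snippet above: the recognizer wiring and result handling are a sketch of typical Python Speech SDK usage, and the property lookup assumes the SDK's `SpeechServiceConnection_AutoDetectSourceLanguageResult` property ID.

```python
import azure.cognitiveservices.speech as speechsdk  # same import as the snippet above

# Wire the configs from the snippet above into a translation recognizer.
recognizer = speechsdk.translation.TranslationRecognizer(
    translation_config=translation_config,
    audio_config=audio_config,
    auto_detect_source_language_config=auto_detect_source_language_config)

result = recognizer.recognize_once()
if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    # The detected source language is surfaced as a result property.
    detected = result.properties.get_property(
        speechsdk.PropertyId.SpeechServiceConnection_AutoDetectSourceLanguageResult)
    print(f"Detected language: {detected}")
    print(f"Recognized: {result.text}")
    for language, translation in result.translations.items():
        print(f"Translated into {language}: {translation}")
```

For continuous recognition, the same configuration objects attach to the recognizer's start and stop continuous recognition methods, with results delivered through event callbacks instead of a single result object.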
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
The tables in this section summarize the locales and voices supported for Text-
Additional remarks for Text-to-speech locales are included in the [Voice styles and roles](#voice-styles-and-roles), [Prebuilt neural voices](#prebuilt-neural-voices), and [Custom Neural Voice](#custom-neural-voice) sections below. > [!TIP]
-> Check the [voice samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs.
+> Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
[!INCLUDE [Language support include](includes/language-support/tts.md)]
Use the following table to determine supported styles and roles for each neural
### Prebuilt neural voices
-Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices on [this website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
+Each prebuilt neural voice supports a specific language and dialect, identified by locale. You can try the demo and hear the voices in the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
> [!IMPORTANT] > Pricing varies for Prebuilt Neural Voice (see *Neural* on the pricing page) and Custom Neural Voice (see *Custom Neural* on the pricing page). For more information, see the [Pricing](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) page.
cognitive-services Migration Overview Neural Voice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/migration-overview-neural-voice.md
Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-s
Go to [this article](how-to-migrate-to-prebuilt-neural-voice.md) to learn how to migrate to prebuilt neural voice.
-Prebuilt neural voice is powered by deep neural networks. You need to create an Azure account and Speech service subscription. Then you can use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal), and select prebuilt neural voices to get started. Listening to the voice sample without creating an Azure account, you can visit [here](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs.
+Prebuilt neural voice is powered by deep neural networks. You need to create an Azure account and Speech service subscription. Then you can use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal), and select prebuilt neural voices to get started. To listen to voice samples without creating an Azure account, you can visit the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
Go to the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/) and check the pricing details. Prebuilt standard voice (retired) is referred to as **Standard**, and prebuilt neural voice is referred to as **Neural**.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/overview.md
The base model may not be sufficient if the audio contains ambient noise or incl
With [text to speech](text-to-speech.md), you can convert input text into humanlike synthesized speech. Use neural voices, which are humanlike voices powered by deep neural networks. Use the [Speech Synthesis Markup Language (SSML)](speech-synthesis-markup.md) to fine-tune the pitch, pronunciation, speaking rate, volume, and more. -- Prebuilt neural voice: Highly natural out-of-the-box voices. Check the prebuilt neural voice samples [here](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs.
+- Prebuilt neural voice: Highly natural out-of-the-box voices. Check the prebuilt neural voice samples in the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs.
- Custom neural voice: Besides the pre-built neural voices that come out of the box, you can also create a [custom neural voice](custom-neural-voice.md) that is recognizable and unique to your brand or product. Custom neural voices are private and can offer a competitive advantage. Check the custom neural voice samples [here](https://aka.ms/customvoice). ### Speech translation
cognitive-services Speech Synthesis Markup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-synthesis-markup.md
Speech Synthesis Markup Language (SSML) is an XML-based markup language that can be used to fine-tune the text-to-speech output attributes such as pitch, pronunciation, speaking rate, volume, and more. You have more control and flexibility compared to plain text input. > [!TIP]
-> You can hear voices in different styles and pitches reading example text by using this [text-to-speech website](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#features).
+> You can hear voices in different styles and pitches reading example text via the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery).
## Scenarios
cognitive-services Spx Basics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/spx-basics.md
spx synthesize --voices
Here's a command for using one of the voices you've discovered. ```console
-spx synthesize --text "Bienvenue chez moi." --voice fr-CA-Caroline --speakers
+spx synthesize --text "Bienvenue chez moi." --voice fr-FR-AlainNeural --speakers
``` > [!TIP]
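For comparison, an equivalent request with the Python Speech SDK might look like the following sketch; the key and region are placeholders, and the default synthesizer output plays through the active speaker much like `--speakers` does for SPX.

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholder credentials; replace with your own key and region.
speech_config = speechsdk.SpeechConfig(subscription="YourSubscriptionKey", region="YourServiceRegion")
speech_config.speech_synthesis_voice_name = "fr-FR-AlainNeural"

# With no audio config supplied, output goes to the default speaker.
synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config)
result = synthesizer.speak_text_async("Bienvenue chez moi.").get()
if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
    print("Synthesis complete.")
```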
cognitive-services Text To Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/text-to-speech.md
Text-to-speech includes the following features:
| Feature | Summary | Demo | | | | |
-| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://azure.microsoft.com/services/cognitive-services/text-to-speech/#overview) and determine the right voice for your business needs. |
+| Prebuilt neural voice (called *Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Highly natural out-of-the-box voices. Create an Azure account and Speech service subscription, and then use the [Speech SDK](./get-started-text-to-speech.md) or visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select prebuilt neural voices to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [Voice Gallery](https://speech.microsoft.com/portal/voicegallery) and determine the right voice for your business needs. |
| Custom Neural Voice (called *Custom Neural* on the [pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/)) | Easy-to-use self-service for creating a natural brand voice, with limited access for responsible use. Create an Azure account and Speech service subscription (with the S0 tier), and [apply](https://aka.ms/customneural) to use the custom neural feature. After you've been granted access, visit the [Speech Studio portal](https://speech.microsoft.com/portal) and select **Custom Voice** to get started. Check the [pricing details](https://azure.microsoft.com/pricing/details/cognitive-services/speech-services/). | Check the [voice samples](https://aka.ms/customvoice). | ### More about neural text-to-speech features
confidential-ledger Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-ledger/quickstart-cli.md
For more information on Azure confidential ledger, and for examples of what can
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Create a resource group
container-instances Container Instances Container Group Ssl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-container-group-ssl.md
In this example, the container group only exposes port 443 for Nginx with its pu
See [Next steps](#next-steps) for other approaches to enabling TLS in a container group. - This article requires version 2.0.55 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
container-instances Container Instances Egress Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-egress-ip-address.md
You then validate ingress and egress from example container groups through the f
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../includes/cli-launch-cloud-shell-sign-in.md)]
container-instances Container Instances Encrypt Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-encrypt-data.md
This article reviews two flows for encrypting data with a customer-managed key:
* Encrypt data with a customer-managed key stored in a network-protected Azure Key Vault with [Trusted Services](../key-vault/general/network-security.md) enabled. ## Encrypt data with a customer-managed key stored in a standard Azure Key Vault ### Create Service Principal for ACI The first step is to ensure that your [Azure tenant](../active-directory/develop/quickstart-create-new-tenant.md) has a service principal assigned for granting permissions to the Azure Container Instances service.
container-instances Container Instances Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-managed-identity.md
Azure Container Instances supports both types of managed Azure identities: user-
To use a managed identity, the identity must be granted access to one or more Azure service resources (such as a web app, a key vault, or a storage account) in the subscription. Using a managed identity in a running container is similar to using an identity in an Azure VM. See the VM guidance for using a [token](../active-directory/managed-identities-azure-resources/how-to-use-vm-token.md), [Azure PowerShell or Azure CLI](../active-directory/managed-identities-azure-resources/how-to-use-vm-sign-in.md), or the [Azure SDKs](../active-directory/managed-identities-azure-resources/how-to-use-vm-sdk.md). - This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
container-instances Container Instances Multi Container Group https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-group.md
A Resource Manager template can be readily adapted for scenarios when you need t
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Configure a template
container-instances Container Instances Multi Container Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-multi-container-yaml.md
In this tutorial, you follow steps to run a simple two-container sidecar configu
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Configure a YAML file
container-instances Container Instances Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-nat-gateway.md
You then validate egress from example container groups through the NAT gateway.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../includes/cli-launch-cloud-shell-sign-in.md)]
container-instances Container Instances Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-quickstart.md
In this quickstart, you use the Azure CLI to deploy an isolated Docker container
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.55 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
container-registry Container Registry Event Grid Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-event-grid-quickstart.md
After you complete the steps in this article, events sent from your container re
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - The Azure CLI commands in this article are formatted for the **Bash** shell. If you're using a different shell like PowerShell or Command Prompt, you may need to adjust line continuation characters or variable assignment lines accordingly. This article uses variables to minimize the amount of command editing required.
container-registry Container Registry Quickstart Task Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-quickstart-task-cli.md
After this quickstart, explore more advanced features of ACR Tasks using the [tu
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.58 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
container-registry Container Registry Tasks Scheduled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tasks-scheduled.md
Scheduling a task is useful for scenarios like the following:
* Run a container workload for scheduled maintenance operations. For example, run a containerized app to remove unneeded images from your registry. * Run a set of tests on a production image during the workday as part of your live-site monitoring. ## About scheduling a task
container-registry Container Registry Tutorial Base Image Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-base-image-update.md
If you haven't already done so, complete the following tutorials before proceedi
### Configure the environment - This article requires version 2.0.46 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed. Populate these shell environment variables with values appropriate for your environment. This step isn't strictly required, but makes executing the multiline Azure CLI commands in this tutorial a bit easier. If you don't populate these environment variables, you must manually replace each value wherever it appears in the example commands.
container-registry Container Registry Tutorial Build Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-build-task.md
In this tutorial:
This tutorial assumes you've already completed the steps in the [previous tutorial](container-registry-tutorial-quick-task.md). If you haven't already done so, complete the steps in the [Prerequisites](container-registry-tutorial-quick-task.md#prerequisites) section of the previous tutorial before proceeding. [!INCLUDE [container-registry-task-tutorial-prereq.md](../../includes/container-registry-task-tutorial-prereq.md)] ## Create the build task
container-registry Container Registry Tutorial Multistep Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-multistep-task.md
In this tutorial:
This tutorial assumes you've already completed the steps in the [previous tutorial](container-registry-tutorial-quick-task.md). If you haven't already done so, complete the steps in the [Prerequisites](container-registry-tutorial-quick-task.md#prerequisites) section of the previous tutorial before proceeding. [!INCLUDE [container-registry-task-tutorial-prereq.md](../../includes/container-registry-task-tutorial-prereq.md)] ## Create a multi-step task
container-registry Container Registry Tutorial Quick Task https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-tutorial-quick-task.md
cd acr-build-helloworld-node
The commands in this tutorial series are formatted for the Bash shell. If you prefer to use PowerShell, Command Prompt, or another shell, you may need to adjust the line continuation and environment variable format accordingly. ## Build in Azure with ACR Tasks
container-registry Data Loss Prevention https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/data-loss-prevention.md
Export policy is a property introduced in API version **2021-06-01-preview** for
* A Premium container registry configured with a [private endpoint](container-registry-private-link.md). ## Other requirements to disable exports
container-registry Manual Regional Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/manual-regional-move.md
While [Azure Resource Mover](../resource-mover/overview.md) can't currently auto
Azure CLI ## Considerations
container-registry Pull Images From Connected Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/pull-images-from-connected-registry.md
ms.devlang: azurecli
To pull images from a [connected registry](intro-connected-registry.md), configure a [client token](overview-connected-registry-access.md#client-tokens) and pass the token credentials to access registry content. * Connected registry resource in Azure. For deployment steps, see [Quickstart: Create a connected registry using the Azure CLI][quickstart-connected-registry-cli]. * Connected registry instance deployed on an IoT Edge device. For deployment steps, see [Quickstart: Deploy a connected registry to an IoT Edge device](quickstart-deploy-connected-registry-iot-edge-cli.md) or [Tutorial: Deploy a connected registry to nested IoT Edge devices](tutorial-deploy-connected-registry-nested-iot-edge-cli.md). In the commands in this article, the connected registry name is stored in the environment variable *$CONNECTED_REGISTRY_RW*.
container-registry Quickstart Connected Registry Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-cli.md
Here you create two connected registry resources for a cloud registry: one conne
After creating a connected registry, you can follow other guides to deploy and use it on your on-premises or remote infrastructure. * Azure Container registry - If you don't already have a container registry, [create one](container-registry-get-started-azure-cli.md) (Premium tier required) in a [region](intro-connected-registry.md#available-regions) that supports connected registries.
container-registry Quickstart Connected Registry Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-connected-registry-portal.md
After creating a connected registry, you can follow other guides to deploy and u
* Azure Container registry - If you don't already have a container registry, [create one](container-registry-get-started-portal.md) (Premium tier required) in a [region](intro-connected-registry.md#available-regions) that supports connected registries. To import images to the container registry, use the Azure CLI: ## Enable the dedicated data endpoint for the cloud registry
container-registry Quickstart Deploy Connected Registry Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/quickstart-deploy-connected-registry-iot-edge-cli.md
In this quickstart, you use the Azure CLI to deploy a [connected registry](intro
For an overview of using a connected registry with IoT Edge, see [Using connected registry with Azure IoT Edge](overview-connected-registry-and-iot-edge.md). This scenario corresponds to a device at the [top layer](overview-connected-registry-and-iot-edge.md#top-layer) of an IoT Edge hierarchy. * Azure IoT Hub and IoT Edge device. For deployment steps, see [Quickstart: Deploy your first IoT Edge module to a virtual Linux device](../iot-edge/quickstart-linux.md). > [!IMPORTANT] > For later access to the modules deployed on the IoT Edge device, make sure that you open the ports 8000, 5671, and 8883 on the device. For configuration steps, see [How to open ports to a virtual machine with the Azure portal](../virtual-machines/windows/nsg-quickstart-portal.md).
container-registry Tutorial Deploy Connected Registry Nested Iot Edge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-deploy-connected-registry-nested-iot-edge-cli.md
In this tutorial, you use Azure CLI commands to create a two-layer hierarchy of
For an overview of using a connected registry with IoT Edge, see [Using connected registry with Azure IoT Edge](overview-connected-registry-and-iot-edge.md). * Azure IoT Hub. For deployment steps, see [Create an IoT hub using the Azure portal](../iot-hub/iot-hub-create-through-portal.md). * Two connected registry resources in Azure. For deployment steps, see quickstarts using the [Azure CLI][quickstart-connected-registry-cli] or [Azure portal][quickstart-connected-registry-portal].
cosmos-db How To Setup Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-cross-tenant-customer-managed-keys.md
Last updated 09/27/2022
-# Configure cross-tenant customer-managed keys for your Azure Cosmos DB account with Azure Key Vault (preview)
+# Configure cross-tenant customer-managed keys for your Azure Cosmos DB account with Azure Key Vault
[!INCLUDE[NoSQL, MongoDB, Cassandra, Gremlin, Table](includes/appliesto-nosql-mongodb-cassandra-gremlin-table.md)]
Data stored in your Azure Cosmos DB account is automatically and seamlessly encr
This article walks through how to configure encryption with customer-managed keys at the time that you create an Azure Cosmos DB account. In this example cross-tenant scenario, the Azure Cosmos DB account resides in a tenant managed by an Independent Software Vendor (ISV) referred to as the service provider. The key used for encryption of the Azure Cosmos DB account resides in a key vault in a different tenant that is managed by the customer.
-## About the preview
-
-To use the preview, you must register for the Azure Active Directory federated client identity feature in the service provider's tenant. Follow these instructions to register with Azure PowerShell or Azure CLI:
-
-### [Portal](#tab/azure-portal)
-
-Not yet supported.
-
-### [PowerShell](#tab/azure-powershell)
-
-To register with Azure PowerShell, use the [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature) cmdlet.
-
-```azurepowershell
-$parameters = @{
- FeatureName = "FederatedClientIdentity"
- ProviderNamespace = "Microsoft.Storage"
-}
-Register-AzProviderFeature @parameters
-```
-
-To check the status of your registration, use [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature).
-
-```azurepowershell
-$parameters = @{
- FeatureName = "FederatedClientIdentity"
- ProviderNamespace = "Microsoft.Storage"
-}
-Get-AzProviderFeature @parameters
-```
-
-After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with PowerShell, use [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider).
-
-```azurepowershell
-$parameters = @{
- ProviderNamespace = "Microsoft.Storage"
-}
-Register-AzResourceProvider @parameters
-```
-
-### [Azure CLI](#tab/azure-cli)
-
-To register with Azure CLI, use the [az feature register](/cli/azure/feature#az-feature-register) command.
-
-```azurecli
-az feature register \
- --name FederatedClientIdentity \
- --namespace Microsoft.Storage
-```
-
-To check the status of your registration with Azure CLI, use [az feature show](/cli/azure/feature#az-feature-show).
-
-```azurecli
-az feature show \
- --name FederatedClientIdentity \
- --namespace Microsoft.Storage
-```
-
-After your registration is approved, you must re-register the Azure Storage resource provider. To re-register the resource provider with Azure CLI, use [az provider register](/cli/azure/provider#az-provider-register).
-
-```azurecli
-az provider register \
- --namespace 'Microsoft.Storage'
-```
---
-> [!IMPORTANT]
-> Using cross-tenant customer-managed keys with Azure Cosmos DB encryption is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
[!INCLUDE [active-directory-msi-cross-tenant-cmk-overview](../../includes/active-directory-msi-cross-tenant-cmk-overview.md)]
az provider register \
## Create a new Azure Cosmos DB account encrypted with a key from a different tenant
-> [!NOTE]
-> Cross-tenant customer-managed keys with Azure Cosmos DB encryption PREVIEW is not compatible with Continuous Backup or Azure Synapse link features.
Up to this point, you've configured the multi-tenant application on the service provider's tenant. You've also installed the application on the customer's tenant and configured the key vault and key on the customer's tenant. Next you can create an Azure Cosmos DB account on the service provider's tenant and configure customer-managed keys with the key from the customer's tenant.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
In this variation, use the Azure Cosmos DB principal to create an access policy
:::image type="content" source="media/how-to-setup-customer-managed-keys/access-control-assign-role.png" lightbox="media/how-to-setup-customer-managed-keys/access-control-assign-role.png" alt-text="Screenshot of a role assignment on the Access control page.":::
-1. Then, the necessary permissions must be assigned to Cosmos DB's principal. So, like the last role assignment, go to the assignment page but this time look for the **"Key Vault Crypto Service Encryption User"** role and on the members tab look for Cosmos DB's principal. To find the principal, search for **Azure Cosmos DB** principal and select it (to make it easier to find, you can also search by application ID: `a232010e-820c-4083-83bb-3ace5fc29d0b`).
+1. Then, the necessary permissions must be assigned to Cosmos DB's principal. So, like the last role assignment, go to the assignment page but this time look for the **"Key Vault Crypto Service Encryption User"** role and on the members tab look for Cosmos DB's principal. To find the principal, search for **Azure Cosmos DB** principal and select it.
:::image type="content" source="media/how-to-setup-customer-managed-keys/assign-permission-principal.png" lightbox="media/how-to-setup-customer-managed-keys/assign-permission-principal.png" alt-text="Screenshot of the Azure Cosmos DB principal being assigned to a permission.":::
cosmos-db Quickstart Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/quickstart-go.md
The sample application is a command-line based `todo` management tool written in
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free). Or [try Azure Cosmos DB for free](https://azure.microsoft.com/try/cosmosdb/) without an Azure subscription. You can also use the [Azure Cosmos DB Emulator](https://aka.ms/cosmosdb-emulator) with the connection string `mongodb://localhost:C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==@localhost:10255/admin?ssl=true`. - [Go](https://go.dev/) installed on your computer, and a working knowledge of Go. - [Git](https://git-scm.com/downloads). ## Clone the sample application
cosmos-db Manage With Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/manage-with-cli.md
The following guide describes common commands to automate management of your Azure Cosmos DB accounts, databases and containers using Azure CLI. Reference pages for all Azure Cosmos DB CLI commands are available in the [Azure CLI Reference](/cli/azure/cosmosdb). You can also find more examples in [Azure CLI samples for Azure Cosmos DB](cli-samples.md), including how to create and manage Azure Cosmos DB accounts, databases and containers for MongoDB, Gremlin, Cassandra and API for Table. - This article requires version 2.22.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Howto Ingest Azure Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/howto-ingest-azure-blob-storage.md
Previously updated : 11/28/2022 Last updated : 01/13/2023 # How to ingest data using pg_azure_storage
extension in your database:
SELECT * FROM create_extension('azure_storage'); ```
+> [!IMPORTANT]
+>
+> The pg_azure_storage extension is available only on Azure Cosmos DB for
+> PostgreSQL clusters running PostgreSQL 13 and above.
+ We've prepared a public demonstration dataset for this article. To use your own dataset, follow the steps in [migrate your on-premises data to cloud storage](../../storage/common/storage-use-azcopy-migrate-on-premises-data.md)
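As a sketch of how a client application might drive this, the following uses `psycopg2` against a cluster. The connection string, and the `pgquickstart` account and `github` container names, are illustrative assumptions; the `azure_storage.blob_list` call reflects the extension's documented listing function, so verify the exact signature against the pg_azure_storage reference.

```python
import psycopg2

# Placeholder connection details for an Azure Cosmos DB for PostgreSQL cluster.
conn = psycopg2.connect(
    "host=c-mycluster.postgres.cosmos.azure.com port=5432 dbname=citus "
    "user=citus password=YourPassword sslmode=require")
cur = conn.cursor()

# Enable the extension (PostgreSQL 13+ clusters only; errors if already enabled).
cur.execute("SELECT * FROM create_extension('azure_storage');")

# List blobs in a public container; account and container names are illustrative.
cur.execute("SELECT path, bytes, content_type FROM azure_storage.blob_list(%s, %s);",
            ("pgquickstart", "github"))
for path, size, content_type in cur.fetchall():
    print(path, size, content_type)

conn.commit()
conn.close()
```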
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/create.md
The script in this article demonstrates creating an Azure Cosmos DB account, key
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/lock.md
The script in this article demonstrates preventing resources from being deleted
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/serverless.md
The script in this article demonstrates creating a serverless Azure Cosmos DB ac
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/cassandra/throughput.md
The script in this article creates a Cassandra keyspace with shared throughput a
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires Azure CLI version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Free Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/free-tier.md
Each Azure subscription can have up to one Azure Cosmos DB free-tier account. If
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Ipfirewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/ipfirewall.md
The script in this article demonstrates creating an Azure Cosmos DB account with
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/keys.md
The script in this article demonstrates four operations.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/regions.md
This script uses an API for NoSQL account, but these operations are identical acr
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Service Endpoints Ignore Missing Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints-ignore-missing-vnet.md
This script uses an API for NoSQL account. To use this sample for other APIs, app
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Service Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/common/service-endpoints.md
This script uses an API for NoSQL account. To use this sample for other APIs, app
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/create.md
The script in this article demonstrates creating a Gremlin database and graph.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/lock.md
The script in this article demonstrates performing resource lock operations for
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/gremlin/throughput.md
Last updated 02/21/2022
The script in this article creates a Gremlin database with shared throughput and a Gremlin graph with dedicated throughput, then updates the throughput for both the database and graph. The script then migrates from standard to autoscale throughput then reads the value of the autoscale throughput after it has been migrated. - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Autoscale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/autoscale.md
The script in this article demonstrates creating an API for MongoDB database with
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/create.md
The script in this article demonstrates creating an API for MongoDB database and
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/lock.md
The script in this article demonstrates performing resource lock operations for
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/serverless.md
The script in this article demonstrates creating an API for MongoDB serverless ac
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/mongodb/throughput.md
The script in this article creates a MongoDB database with shared throughput and
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.30 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/create.md
The script in this article demonstrates creating an API for NoSQL database and co
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Lock https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/lock.md
The script in this article demonstrates performing resource lock operations for
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/serverless.md
The script in this article demonstrates creating an API for NoSQL serverless acco
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.9.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/nosql/throughput.md
The script in this article creates an API for NoSQL database with shared throughp
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.12.1 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/create.md
The script in this article demonstrates creating an API for Table table.
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cosmos-db Throughput https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/scripts/cli/table/throughput.md
The script in this article creates an API for Table table then updates the throug
[!INCLUDE [quickstarts-free-trial-note](../../../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.12.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). If using Azure Cloud Shell, the latest version is already installed.
cost-management-billing Get Usage Data Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/get-usage-data-azure-cli.md
This article explains how you get cost and usage data with the Azure CLI. If you
Start by preparing your environment for the Azure CLI. ## Configure an export job to export cost data to Azure storage
cost-management-billing Tutorial Export Acm Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-export-acm-data.md
When you create an export programmatically, you must manually register the `Micr
Start by preparing your environment for the Azure CLI: 1. After you sign in, to see your current exports, use the [az costmanagement export list](/cli/azure/costmanagement/export#az-costmanagement-export-list) command:
cost-management-billing Download Azure Daily Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/download-azure-daily-usage.md
If you have a Microsoft Customer Agreement, you can download month-to-date usage
Start by preparing your environment for the Azure CLI: Then use the [az costmanagement export](/cli/azure/costmanagement/export) commands to export usage data to an Azure storage account. You can download the data from there.
cost-management-billing Mca Understand Your Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/mca-understand-your-usage.md
tags: billing
Previously updated : 10/13/2021 Last updated : 01/13/2023
Usage charges are the total **monthly** charges on a subscription. The usage cha
## Changes from Azure EA usage and charges
-If you were an EA customer, you'll notice that the terms in the Azure billing profile usage CSV file differ from the terms in the Azure EA usage CSV file. Here's a mapping of EA usage terms to billing profile usage terms:
-
-| Azure EA usage CSV | Microsoft Customer Agreement Azure usage and charges CSV |
-| | |
-| Date | date |
-| Month| date |
-| Day | date |
-| Year | date |
-| Product | product |
-| MeterId | meterID |
-| MeterCategory | meterCategory |
-| MeterSubCategory | meterSubCategory |
-| MeterRegion | meterRegion |
-| MeterName | meterName |
-| ConsumedQuantity | quantity |
-| ResourceRate | effectivePrice |
-| ExtendedCost | cost |
-| ResourceLocation | resourceLocation |
-| ConsumedService | consumedService |
-| InstanceId | instanceId |
-| ServiceInfo1 | serviceInfo1 |
-| ServiceInfo2 | serviceInfo2 |
-| AdditionalInfo | additionalInfo |
-| Tags | tags |
-| StoreServiceIdentifier | N/A |
-| DepartmentName | invoiceSection |
-| CostCenter | costCenter |
-| UnitOfMeasure | unitofMeasure |
-| ResourceGroup | resourceGroup |
-| ChargesBilledSeparately | isAzureCreditEligible |
+If you're an EA customer, you'll notice that the terms in the Azure billing profile usage CSV file differ from the terms in the Azure EA usage CSV file. Here's a mapping of EA usage terms to billing profile usage terms:
+
+| Azure EA usage CSV | Microsoft Customer Agreement Azure usage and charges CSV | Description |
+| | | |
+| Date | date | Date that the resource was consumed. |
+| Month | date | Month that the resource was consumed. |
+| Day | date | Day that the resource was consumed. |
+| Year | date | Year that the resource was consumed. |
+| Product | product | Name of the product. |
+| MeterId | meterID | The unique identifier for the meter. |
+| MeterCategory | meterCategory | Name of the classification category for the meter. Same as the service in the Microsoft Customer Agreement Price Sheet. Exact string values differ. |
+| MeterSubCategory | meterSubCategory | Azure usage meter subclassification. |
+| MeterRegion | meterRegion | Detail required for a service. Useful to find the region context of the resource. |
+| MeterName | meterName | Name of the meter. Represents the Azure service deployable resource. |
+| ConsumedQuantity | quantity | Measured quantity purchased or consumed. The amount of the meter used during the billing period. |
+| ResourceRate | effectivePrice | The price represents the actual rate that you end up paying per unit, after discounts are taken into account. It's the price that should be used with the `Quantity` to do `Price` \* `Quantity` calculations to reconcile charges. The price reflects any applicable discounts, so it might differ from the scaled unit price that's also present in the files. |
+| ExtendedCost | cost | Cost of the charge in the billing currency before credits or taxes. |
+| ResourceLocation | resourceLocation | Location of the used resource's data center. |
+| ConsumedService | consumedService | Name of the service. |
+| InstanceId | instanceId | Identifier of the resource instance. Shown as a ResourceURI that includes complete resource properties. |
+| ServiceInfo1 | serviceInfo1 | Legacy field that captures optional service-specific metadata. |
+| ServiceInfo2 | serviceInfo2 | Legacy field with optional service-specific metadata. |
+| AdditionalInfo | additionalInfo | Service-specific metadata. For example, an image type for a virtual machine. |
+| Tags | tags | Tags assigned to the resource. Doesn't include resource group tags. Can be used to group or distribute costs for internal chargeback. For more information, see [Organize your Azure resources with tags](https://azure.microsoft.com/updates/organize-your-azure-resources-with-tags/). |
+| StoreServiceIdentifier | N/A | |
+| DepartmentName | invoiceSection | `DepartmentName` is the department ID. You can see department IDs in the Azure portal on the **Cost Management + Billing** \> **Departments** page. `invoiceSection` is the MCA invoice section name. |
+| CostCenter | costCenter | Cost center associated to the subscription. |
+| UnitOfMeasure | unitofMeasure | The unit of measure for billing for the service. For example, compute services are billed per hour. |
+| ResourceGroup | resourceGroup | Name of the resource group associated with the resource. |
+| ChargesBilledSeparately | isAzureCreditEligible | Indicates if the charge is eligible to be paid for using Azure credits. |
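To spot-check the mapped columns after you download a usage file, here's a minimal shell sketch that recomputes cost as effectivePrice × quantity. The field positions below are assumptions; check your CSV header first, and note that quoted fields containing commas would need a real CSV parser:

```bash
# Print the header with field numbers so the positions can be verified.
head -1 usage.csv | tr ',' '\n' | nl

# Flag rows where cost differs from effectivePrice * quantity by more
# than a cent. Field numbers 20, 21, and 22 are placeholders.
awk -F',' 'NR > 1 {
    calc = $20 * $21
    diff = calc - $22
    if (diff > 0.01 || diff < -0.01) print "row " NR ": expected " calc ", got " $22
}' usage.csv
```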
## Detailed terms and descriptions
invoiceSectionName | Name of the invoice section
costCenter | The cost center defined on the subscription for tracking costs (only available in open billing periods)
billingPeriodStartDate | The start date of the billing period for which the invoice is generated
billingPeriodEndDate | The end date of the billing period for which the invoice is generated
-servicePeriodStartDate | The start date of the rating period which has defined and locked pricing for the consumed or purchased service
-servicePeriodEndDate | The end date of the rating period which has defined and locked pricing for the consumed or purchased service
-date | For Azure and Marketplace usage-based charges, this is the rating date. For one-time purchases (Reservations, Marketplace) or fixed recurring charges (support offers), this is the purchase date.
+servicePeriodStartDate | The start date of the rating period that has defined and locked pricing for the consumed or purchased service
+servicePeriodEndDate | The end date of the rating period that has defined and locked pricing for the consumed or purchased service
+date | For Azure and Marketplace usage-based charges, it's the rating date. For one-time purchases (Reservations, Marketplace) or fixed recurring charges (support offers), it's the purchase date.
serviceFamily | Service family that the service belongs to
productOrderId | Unique identifier for the product order
productOrderName | Unique name for the product order
consumedService | Name of the consumed service
meterId | The unique identifier for the meter
meterName | The name of the meter
meterCategory | Name of the classification category for the meter. For example, *Cloud services*, *Networking*, etc.
-meterSubCategory | Name of the meter sub-classification category
+meterSubCategory | Name of the meter subclassification category
meterRegion | Name of the region where the meter for the service is available. Identifies the location of the data center for certain services that are priced based on data center location.
offer | Name of the offer purchased
PayGPrice | Retail price for the resource.
subscriptionId | Unique identifier for the subscription accruing the charges
subscriptionName | Name of the subscription accruing the charges
reservationId | Unique identifier for the purchased reservation instance
reservationName | Name of the purchased reservation instance
-publisherType | Microsoft/Azure, Marketplace, and AWS costs. Values are `Microsoft` for Microsoft Customer Agreement accounts and `Azure` for EA and pay-as-you-go accounts.
+publisherType | Type of publisher that the charge relateses to: values include `Microsoft`, `Azure`, `Marketplace`, and `AWS`. Values are `Microsoft` for Microsoft Customer Agreement accounts and `Azure` for EA and pay-as-you-go accounts.
publisherName | Publisher for Marketplace services
resourceGroupId | Unique identifier for the resource group associated with the resource
resourceGroupName | Name of the resource group associated with the resource
resourceLocation | Identifies the location of the data center where the resource is running
location | Normalized location of the resource if different resource locations are configured for the same regions
quantity | The number of units purchased or consumed
unitOfMeasure | The unit of measure for billing for the service. For example, compute services are billed per hour.
-chargeType | The type of charge. Values: <ul><li>AsCharged-Usage: Charges that are accrued based on usage of an Azure service. This includes usage against VMs that are not charged because of reserved instances.</li><li>AsCharged-PurchaseMarketplace: Either one-time or fixed recurring charges from Marketplace purchases</li><li>AsCharged-UsageMarketplace: Charges for Marketplace services that are charged based on units of consumption</li></ul>
+chargeType | The type of charge. Values: <br><br>• AsCharged-Usage - Charges that are accrued based on usage of an Azure service. It includes usage against VMs that aren't charged because of reserved instances.<br><br>• AsCharged-PurchaseMarketplace - Either one-time or fixed recurring charges from Marketplace purchases.<br><br>• AsCharged-UsageMarketplace - Charges for Marketplace services that are charged based on units of consumption.
isAzureCreditEligible | Flag that indicates if the charge against the service is eligible to be paid for using Azure credits (Values: True, False)
serviceInfo1 | Service-specific metadata
serviceInfo2 | Legacy field that captures optional service-specific metadata
-additionalInfo | Additional service-specific metadata.
+additionalInfo | Other service-specific metadata.
tags | Tags you assign to the resource

### Make sure that charges are correct
cost-management-billing Pay Bill https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/understand/pay-bill.md
tags: billing, past due, pay now, bill, invoice, pay
Previously updated : 01/04/2023 Last updated : 01/13/2023
If you have a Microsoft Online Services Program account, your default payment me
If you have Azure credits, they automatically apply to your invoice each billing period.
+> [!NOTE]
+> Regardless of the payment method selected to complete your payment, you must specify the invoice number in the payment details.
+ ## Reserve Bank of India **The Reserve Bank of India has issued new directives.**
If the default payment method of your billing profile is check or wire transfer,
Alternatively, if your invoice is under the threshold amount for your currency, you can make a one-time payment in the Azure portal with a credit or debit card using **Pay now**. If your invoice amount exceeds the threshold, you can't pay your invoice with a credit or debit card. You'll find the threshold amount for your currency in the Azure portal after selecting **Pay now**.
+> [!NOTE]
+> When multiple invoices are remitted in a single check or wire transfer, you must specify the invoice numbers for all of the invoices.
+ #### Bank details used to send wire transfer payments <a name="wire-bank-details"></a>
data-factory How To Create Schedule Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-schedule-trigger.md
This section shows you how to use Azure CLI to create, start, and monitor a sche
### Prerequisites ### Sample Code
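The sample below is a minimal sketch of that flow, not the article's full sample: it assumes the `datafactory` CLI extension, a factory named ADFdemo in myResourceGroup, and a trigger definition saved locally as myTrigger.json.

```azurecli
# The data factory commands live in a CLI extension.
az extension add --name datafactory

# Create the schedule trigger from a local JSON definition, then start it.
az datafactory trigger create --factory-name ADFdemo --resource-group myResourceGroup \
    --name TriggerDemo --properties @myTrigger.json
az datafactory trigger start --factory-name ADFdemo --resource-group myResourceGroup \
    --name TriggerDemo
```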
data-factory How To Create Tumbling Window Trigger https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/how-to-create-tumbling-window-trigger.md
This section shows you how to use Azure CLI to create, start, and monitor a trig
### Prerequisites - Follow the instructions in [Create an Azure Data Factory using Azure CLI](./quickstart-create-data-factory-azure-cli.md) to create a data factory and a pipeline.
data-factory Quickstart Create Data Factory Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/quickstart-create-data-factory-azure-cli.md
For an introduction to the Azure Data Factory service, see [Introduction to Azur
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin. > [!NOTE] > To create Data Factory instances, the user account that you use to sign in to Azure must be a member of the contributor or owner role, or an administrator of the Azure subscription. For more information, see [Azure roles](quickstart-create-data-factory-powershell.md#azure-roles).
data-lake-analytics Data Lake Analytics Account Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-account-policies.md
Last updated 04/30/2018
# Manage Azure Data Lake Analytics using Account Policies
-Account policies help you control how resources an Azure Data Lake Analytics account are used. These policies allow you to control the cost of using Azure Data Lake Analytics. For example, with these policies you can prevent unexpected cost spikes by limiting how many AUs the account can simultaneously use.## Account-level policies
+Account policies help you control how the resources in an Azure Data Lake Analytics account are used. These policies allow you to control the cost of using Azure Data Lake Analytics. For example, with these policies you can prevent unexpected cost spikes by limiting how many AUs the account can simultaneously use.
++
+## Account-level policies
These policies apply to all jobs in a Data Lake Analytics account.
-## Maximum number of AUs in a Data Lake Analytics account
+### Maximum number of AUs in a Data Lake Analytics account
A policy controls the total number of Analytics Units (AUs) your Data Lake Analytics account can use. By default, the value is set to 250. For example, if this value is set to 250 AUs, you can have one job running with 250 AUs assigned to it, or 10 jobs running with 25 AUs each. Additional jobs that are submitted are queued until the running jobs are finished. When running jobs are finished, AUs are freed up for the queued jobs to run.
To change the number of AUs for your Data Lake Analytics account:
> [!NOTE] > If you need more than the default (250) AUs, in the portal, click **Help+Support** to submit a support request. The number of AUs available in your Data Lake Analytics account can be increased.
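Besides the portal path, the limit can also be scripted; here's a minimal sketch with the Azure CLI, where the account and group names are placeholders and the AU limit surfaces as the maximum degree of parallelism:

```azurecli
# Raise the account-wide AU limit (degree of parallelism) to 500.
az dla account update --account myadlaaccount --resource-group myResourceGroup \
    --max-degree-of-parallelism 500
```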
-## Maximum number of jobs that can run simultaneously
+### Maximum number of jobs that can run simultaneously
This policy limits how many jobs can run simultaneously. By default, this value is set to 20. If your Data Lake Analytics has AUs available, new jobs are scheduled to run immediately until the total number of running jobs reaches the value of this policy. When you reach the maximum number of jobs that can run simultaneously, subsequent jobs are queued in priority order until one or more running jobs complete (depending on available AUs).
To change the number of jobs that can run simultaneously:
> [!NOTE] > If you need to run more than the default (20) number of jobs, in the portal, click **Help+Support** to submit a support request. The number of jobs that can run simultaneously in your Data Lake Analytics account can be increased.
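The same policy can be scripted; a minimal sketch, again with placeholder names:

```azurecli
# Allow up to 40 jobs to run simultaneously in the account.
az dla account update --account myadlaaccount --resource-group myResourceGroup \
    --max-job-count 40
```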
-## How long to keep job metadata and resources
+### How long to keep job metadata and resources
When your users run U-SQL jobs, the Data Lake Analytics service keeps all related files. These files include the U-SQL script, the DLL files referenced in the U-SQL script, compiled resources, and statistics. The files are in the /system/ folder of the default Azure Data Lake Storage account. This policy controls how long these resources are stored before they are automatically deleted (the default is 30 days). You can use these files for debugging, and for performance-tuning of jobs that you'll rerun in the future.
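The retention policy can likewise be adjusted from the CLI; a minimal sketch with placeholder names:

```azurecli
# Keep job metadata and resources for 60 days instead of the default 30.
az dla account update --account myadlaaccount --resource-group myResourceGroup \
    --query-store-retention 60
```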
data-lake-analytics Data Lake Analytics Add Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-add-users.md
Last updated 05/24/2018
# Adding a user in the Azure portal + ## Start the Add User Wizard 1. Open your Azure Data Lake Analytics via https://portal.azure.com. 2. Click **Add User Wizard**.
data-lake-analytics Data Lake Analytics Analyze Weblogs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-analyze-weblogs.md
Last updated 12/05/2016
# Analyze Website logs using Azure Data Lake Analytics Learn how to analyze website logs using Data Lake Analytics, especially on finding out which referrers ran into errors when they tried to visit the website. + ## Prerequisites * **Visual Studio 2015 or Visual Studio 2013**. * **[Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs)**.
data-lake-analytics Data Lake Analytics Cicd Manage Assemblies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-manage-assemblies.md
Last updated 10/30/2018
In this article, you learn how to manage U-SQL assembly source code with the newly introduced U-SQL database project. You also learn how to set up a continuous integration and deployment (CI/CD) pipeline for assembly registration by using Azure DevOps. + ## Use the U-SQL database project to manage assembly source code [The U-SQL database project](data-lake-analytics-data-lake-tools-develop-usql-database.md) is a project type in Visual Studio that helps developers develop, manage, and deploy their U-SQL databases quickly and easily. You can manage all U-SQL database objects (except for credentials) with the U-SQL database project.
data-lake-analytics Data Lake Analytics Cicd Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-overview.md
In this article, you learn how to set up a continuous integration and deployment (CI/CD) pipeline for U-SQL jobs and U-SQL databases. + [!INCLUDE [updated-for-az](../../includes/updated-for-az.md)] ## Use CI/CD for U-SQL jobs
data-lake-analytics Data Lake Analytics Cicd Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-cicd-test.md
Last updated 08/30/2019
Azure Data Lake provides the [U-SQL](data-lake-analytics-u-sql-get-started.md) language. U-SQL combines declarative SQL with imperative C# to process data at any scale. In this document, you learn how to create test cases for U-SQL and extended C# user-defined operator (UDO) code. + ## Test U-SQL scripts The U-SQL script is compiled and optimized for executable code to run in Azure or on your local computer. The compilation and optimization process treats the entire U-SQL script as a whole. You can't do a traditional unit test for every statement. However, by using the U-SQL test SDK and the local run SDK, you can do script-level tests.
data-lake-analytics Data Lake Analytics Data Lake Tools Data Skew Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-data-skew-solutions.md
Last updated 12/16/2016
# Resolve data-skew problems by using Azure Data Lake Tools for Visual Studio + ## What is data skew? Briefly stated, data skew is an over-represented value. Imagine that you have assigned 50 tax examiners to audit tax returns, one examiner for each US state. The Wyoming examiner, because the population there is small, has little to do. In California, however, the examiner is kept very busy because of the state's large population.
data-lake-analytics Data Lake Analytics Data Lake Tools Debug Recurring Job https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-debug-recurring-job.md
Last updated 05/20/2018
# Troubleshoot an abnormal recurring job + This article shows how to use [Azure Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs) to troubleshoot problems with recurring jobs. Learn more about pipeline and recurring jobs from the [Azure Data Lake and Azure HDInsight blog](/archive/blogs/azuredatalake/managing-pipeline-recurring-jobs-in-azure-data-lake-analytics-made-easy). Recurring jobs usually share the same query logic and similar input data. For example, imagine that you have a recurring job running every Monday morning at 8 A.M. to count last week's weekly active users. The scripts for these jobs share one script template that contains the query logic. The inputs for these jobs are the usage data for last week. Sharing the same query logic and similar input usually means that performance of these jobs is similar and stable. If one of your recurring jobs suddenly performs abnormally, fails, or slows down a lot, you might want to:
data-lake-analytics Data Lake Analytics Data Lake Tools Develop Usql Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-develop-usql-database.md
Last updated 07/03/2018
# Use a U-SQL database project to develop a U-SQL database for Azure Data Lake + U-SQL database provides structured views over unstructured data and managed structured data in tables. It also provides a general metadata catalog system for organizing your structured data and custom code. The database is the concept that groups these related objects together. Learn more about [U-SQL database and Data Definition Language (DDL)](/u-sql/data-definition-language-ddl-statements).
data-lake-analytics Data Lake Analytics Data Lake Tools Export Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-export-database.md
Last updated 11/27/2017
# Export a U-SQL database + In this article, learn how to use [Azure Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs) to export a U-SQL database as a single U-SQL script and downloaded resources. You can import the exported database to a local account in the same process. Customers usually maintain multiple environments for development, test, and production. These environments are hosted on both a local account, on a developer's local computer, and in an Azure Data Lake Analytics account in Azure.
data-lake-analytics Data Lake Analytics Data Lake Tools For Vscode Access Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-for-vscode-access-resource.md
Last updated 02/09/2018
# Accessing resources with Azure Data Lake Tools + You can access Azure Data Lake Analytics resources with Azure Data Tools commands or actions in VS Code easily. ## Integrate with Azure Data Lake Analytics through a command
data-lake-analytics Data Lake Analytics Data Lake Tools For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-for-vscode.md
Last updated 10/17/2022
# Use Azure Data Lake Tools for Visual Studio Code In this article, learn how you can use Azure Data Lake Tools for Visual Studio Code (VS Code) to create, test, and run U-SQL scripts. The information is also covered in the following video:
data-lake-analytics Data Lake Analytics Data Lake Tools Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-get-started.md
Last updated 11/15/2022
[!INCLUDE [get-started-selector](../../includes/data-lake-analytics-selector-get-started.md)] Azure Data Lake and Stream Analytics Tools include functionality related to two Azure services, Azure Data Lake Analytics and Azure Stream Analytics. For more information about the Azure Stream Analytics scenarios, see [Azure Stream Analytics tools for Visual Studio](../stream-analytics/stream-analytics-tools-for-visual-studio-install.md).
data-lake-analytics Data Lake Analytics Data Lake Tools Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-install.md
Last updated 11/15/2022
# Install Data Lake Tools for Visual Studio + Learn how to use Visual Studio to create Azure Data Lake Analytics accounts. You can define jobs in [U-SQL](data-lake-analytics-u-sql-get-started.md) and submit jobs to the Data Lake Analytics service. For more information about Data Lake Analytics, see [Azure Data Lake Analytics overview](data-lake-analytics-overview.md).
data-lake-analytics Data Lake Analytics Data Lake Tools Local Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-local-debug.md
Last updated 07/03/2018
# Debug Azure Data Lake Analytics code locally + You can use Azure Data Lake Tools for Visual Studio to run and debug Azure Data Lake Analytics code on your local workstation, just as you can in the Azure Data Lake Analytics service. Learn how to [run U-SQL script on your local machine](data-lake-analytics-data-lake-tools-local-run.md).
data-lake-analytics Data Lake Analytics Data Lake Tools Local Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-local-run.md
Last updated 07/03/2018
# Run U-SQL scripts on your local machine + When you develop U-SQL scripts, you can save time and expense by running the scripts locally. Azure Data Lake Tools for Visual Studio supports running U-SQL scripts on your local machine. ## Basic concepts for local runs
data-lake-analytics Data Lake Analytics Data Lake Tools Use Vertex Execution View https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-use-vertex-execution-view.md
Last updated 10/13/2016
# Use the Vertex Execution View in Data Lake Tools for Visual Studio Learn how to use the Vertex Execution View to examine Data Lake Analytics jobs. ## Open the Vertex Execution View Open a U-SQL job in Data Lake Tools for Visual Studio. Click **Vertex Execution View** in the bottom left corner. You may be prompted to load profiles first, and it can take some time depending on your network connectivity.
data-lake-analytics Data Lake Analytics Data Lake Tools View Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-data-lake-tools-view-jobs.md
Last updated 08/02/2017 # Use Job Browser and Job View for Azure Data Lake Analytics++ The Azure Data Lake Analytics service archives submitted jobs in a query store. In this article, you learn how to use Job Browser and Job View in Azure Data Lake Tools for Visual Studio to find the historical job information. By default, the Data Lake Analytics service archives the jobs for 30 days. The expiration period can be configured from the Azure portal by configuring the customized expiration policy. You will not be able to access the job information after expiration.
data-lake-analytics Data Lake Analytics Debug U Sql Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-debug-u-sql-jobs.md
Last updated 11/30/2017
# Debug user-defined C# code for failed U-SQL jobs + U-SQL provides an extensibility model using C#. In U-SQL scripts, it is easy to call C# functions and perform analytic functions that SQL-like declarative language does not support. To learn more for U-SQL extensibility, see [U-SQL programmability guide](./data-lake-analytics-u-sql-programmability-guide.md#use-user-defined-functions-udf). In practice, any code may need debugging, but it is hard to debug a distributed job with custom code on the cloud with limited log files. [Azure Data Lake Tools for Visual Studio](https://aka.ms/adltoolsvs) provides a feature called **Failed Vertex Debug**, which helps you more easily debug the failures that occur in your custom code. When U-SQL job fails, the service keeps the failure state and the tool helps you to download the cloud failure environment to the local machine for debugging. The local download captures the entire cloud environment, including any input data and user code.
data-lake-analytics Data Lake Analytics Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-diagnostic-logs.md
Last updated 11/15/2022
# Accessing diagnostic logs for Azure Data Lake Analytics + Diagnostic logging allows you to collect data access audit trails. These logs provide information such as: * A list of users that accessed the data.
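A minimal sketch of turning these logs on with the Azure CLI, assuming the Audit and Requests log categories and placeholder resource IDs throughout:

```azurecli
# Send Data Lake Analytics audit and request logs to a storage account.
az monitor diagnostic-settings create --name adla-diagnostics \
    --resource "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.DataLakeAnalytics/accounts/myadlaaccount" \
    --storage-account "/subscriptions/<sub-id>/resourceGroups/myResourceGroup/providers/Microsoft.Storage/storageAccounts/mystorageaccount" \
    --logs '[{"category":"Audit","enabled":true},{"category":"Requests","enabled":true}]'
```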
data-lake-analytics Data Lake Analytics Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-disaster-recovery.md
Last updated 06/03/2019
# Disaster recovery guidance for Azure Data Lake Analytics + Azure Data Lake Analytics is an on-demand analytics job service that simplifies big data. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need. You only pay for your job when it is running, making it cost-effective. This article provides guidance on how to protect your jobs from rare region-wide outages or accidental deletions. ## Disaster recovery guidance
data-lake-analytics Data Lake Analytics Get Started Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-get-started-cli.md
[!INCLUDE [get-started-selector](../../includes/data-lake-analytics-selector-get-started.md)] This article describes how to use the Azure CLI to create Azure Data Lake Analytics accounts, submit U-SQL jobs, and manage catalogs. The job reads a tab-separated values (TSV) file and converts it into a comma-separated values (CSV) file.
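As a rough sketch of the account-plus-job flow (all names are placeholders, and the U-SQL conversion script is assumed to live in a local convert.usql file):

```azurecli
# Create the Data Lake Analytics account against an existing Data Lake Store.
az dla account create --account myadlaaccount --resource-group myResourceGroup \
    --location eastus2 --default-data-lake-store mydatalakestore

# Submit the TSV-to-CSV job; @ loads the script text from the local file.
az dla job submit --account myadlaaccount --job-name ConvertTsvToCsv \
    --script @convert.usql
```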
data-lake-analytics Data Lake Analytics Get Started Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-get-started-portal.md
Last updated 10/14/2022
# Get started with Azure Data Lake Analytics using the Azure portal [!INCLUDE [get-started-selector](../../includes/data-lake-analytics-selector-get-started.md)] This article describes how to use the Azure portal to create Azure Data Lake Analytics accounts, define jobs in [U-SQL](data-lake-analytics-u-sql-get-started.md), and submit jobs to the Data Lake Analytics service.
data-lake-analytics Data Lake Analytics Get Started Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-get-started-powershell.md
[!INCLUDE [get-started-selector](../../includes/data-lake-analytics-selector-get-started.md)] Learn how to use Azure PowerShell to create Azure Data Lake Analytics accounts and then submit and run U-SQL jobs. For more information about Data Lake Analytics, see [Azure Data Lake Analytics overview](data-lake-analytics-overview.md).
data-lake-analytics Data Lake Analytics Manage Use Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-cli.md
Last updated 01/29/2018
[!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] + Learn how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using the Azure CLI. To see management topics for other tools, select the appropriate tab above. ## Prerequisites
data-lake-analytics Data Lake Analytics Manage Use Dotnet Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-dotnet-sdk.md
[!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] + This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using an app written using the Azure .NET SDK. ## Prerequisites
data-lake-analytics Data Lake Analytics Manage Use Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-java-sdk.md
# Manage Azure Data Lake Analytics using a Java app [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] + This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using an app written using the Azure Java SDK. ## Prerequisites
data-lake-analytics Data Lake Analytics Manage Use Nodejs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-nodejs.md
# Manage Azure Data Lake Analytics using Azure SDK for Node.js [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] + This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs using an app written using the Azure SDK for Node.js. The following versions are supported:
data-lake-analytics Data Lake Analytics Manage Use Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-portal.md
# Manage Azure Data Lake Analytics using the Azure portal [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs by using the Azure portal.
data-lake-analytics Data Lake Analytics Manage Use Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-powershell.md
[!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] + This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs by using Azure PowerShell. ## Prerequisites
data-lake-analytics Data Lake Analytics Manage Use Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-manage-use-python-sdk.md
# Manage Azure Data Lake Analytics using Python [!INCLUDE [manage-selector](../../includes/data-lake-analytics-selector-manage.md)] + This article describes how to manage Azure Data Lake Analytics accounts, data sources, users, and jobs by using Python. ## Supported Python versions
data-lake-analytics Data Lake Analytics Monitor And Troubleshoot Jobs Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-monitor-and-troubleshoot-jobs-tutorial.md
Last updated 12/05/2016
# Monitor jobs in Azure Data Lake Analytics using the Azure Portal + ## To see all the jobs 1. From the Azure portal, click **Microsoft Azure** in the upper left corner.
data-lake-analytics Data Lake Analytics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-overview.md
Last updated 10/17/2022
Azure Data Lake Analytics is an on-demand analytics job service that simplifies big data. Instead of deploying, configuring, and tuning hardware, you write queries to transform your data and extract valuable insights. The analytics service can handle jobs of any scale instantly by setting the dial for how much power you need. You only pay for your job when it's running, making it cost-effective.
- > [!NOTE]
- > Azure Data Lake Analytics will be retired on 29 February 2024. Learn more [with this announcement](https://azure.microsoft.com/updates/migrate-to-azure-synapse-analytics/).
## Azure Data Lake analytics recent update information
data-lake-analytics Data Lake Analytics Quota Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-quota-limits.md
Last updated 03/15/2018
# Adjust quotas and limits in Azure Data Lake Analytics + Learn how to adjust and increase the quota and limits in Azure Data Lake Analytics (ADLA) accounts. Knowing these limits will help you understand your U-SQL job behavior. All quota limits are soft, so you can increase the maximum limits by contacting Azure support. ## Azure subscriptions limits
data-lake-analytics Data Lake Analytics Schedule Jobs Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-schedule-jobs-ssis.md
Last updated 07/17/2018
In this document, you learn how to orchestrate and create U-SQL jobs using SQL Server Integration Service (SSIS). + ## Prerequisites [Azure Feature Pack for Integration Services](/sql/integration-services/azure-feature-pack-for-integration-services-ssis#scenario-managing-data-in-the-cloud) provides the [Azure Data Lake Analytics task](/sql/integration-services/control-flow/azure-data-lake-analytics-task) and the [Azure Data Lake Analytics Connection Manager](/sql/integration-services/connection-manager/azure-data-lake-analytics-connection-manager) that helps connect to Azure Data Lake Analytics service. To use this task, make sure you install:
data-lake-analytics Data Lake Analytics Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-secure.md
Last updated 05/30/2018
# Configure user access to job information in Azure Data Lake Analytics + In Azure Data Lake Analytics, you can use multiple user accounts or service principals to run jobs. In order for those same users to see the detailed job information, the users need to be able to read the contents of the job folders. The job folders are located in the `/system/` directory.
data-lake-analytics Data Lake Analytics U Sql Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-catalog.md
Last updated 05/09/2017
# Get started with the U-SQL Catalog in Azure Data Lake Analytics + ## Create a TVF In the previous U-SQL script, you repeated the use of EXTRACT to read from the same source file. With the U-SQL table-valued function (TVF), you can encapsulate the data for future reuse.
data-lake-analytics Data Lake Analytics U Sql Develop With Python R Csharp In Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-develop-with-python-r-csharp-in-vscode.md
Learn how to use Visual Studio Code (VSCode) to write Python, R and C# code behi
Before writing code-behind custom code, you need to open a folder or a workspace in VSCode. - ## Prerequisites for Python and R Register Python and, R extensions assemblies for your ADL account. 1. Open your account in portal.
data-lake-analytics Data Lake Analytics U Sql Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-get-started.md
Last updated 10/14/2022
# Get started with U-SQL in Azure Data Lake Analytics + U-SQL is a language that combines declarative SQL with imperative C# to let you process data at any scale. Through the scalable, distributed-query capability of U-SQL, you can efficiently analyze data across relational stores such as Azure SQL Database. With U-SQL, you can process unstructured data by applying schema on read and inserting custom logic and UDFs. Additionally, U-SQL includes extensibility that gives you fine-grained control over how to execute at scale. ## Learning resources
data-lake-analytics Data Lake Analytics U Sql Programmability Guide UDO https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-UDO.md
Last updated 06/30/2017
# U-SQL user-defined objects overview - ## U-SQL: user-defined objects: UDO U-SQL enables you to define custom programmability objects, which are called user-defined objects or UDO.
data-lake-analytics Data Lake Analytics U Sql Programmability Guide UDT AGG https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide-UDT-AGG.md
Last updated 06/30/2017
# U-SQL programmability guide - UDT and UDAGG -- ## Use user-defined types: UDT User-defined types, or UDT, is another programmability feature of U-SQL. U-SQL UDT acts like a regular C# user-defined type. C# is a strongly typed language that allows the use of built-in and custom user-defined types.
data-lake-analytics Data Lake Analytics U Sql Programmability Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-programmability-guide.md
Last updated 06/30/2017
# U-SQL programmability guide overview + U-SQL is a query language that's designed for big data-type of workloads. One of the unique features of U-SQL is the combination of the SQL-like declarative language with the extensibility and programmability that's provided by C#. In this guide, we concentrate on the extensibility and programmability of the U-SQL language that's enabled by C#. ## Requirements
data-lake-analytics Data Lake Analytics U Sql Python Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-python-extensions.md
# Extend U-SQL scripts with Python code in Azure Data Lake Analytics + ## Prerequisites Before you begin, ensure the Python extensions are installed in your Azure Data Lake Analytics account.
data-lake-analytics Data Lake Analytics U Sql R Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-r-extensions.md
Last updated 06/20/2017
# Extend U-SQL scripts with R code in Azure Data Lake Analytics + The following example illustrates the basic steps for deploying R code: * Use the `REFERENCE ASSEMBLY` statement to enable R extensions for the U-SQL Script.
data-lake-analytics Data Lake Analytics U Sql Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-u-sql-sdk.md
Last updated 03/01/2017
# Run and test U-SQL with Azure Data Lake U-SQL SDK + When you develop U-SQL scripts, it's common to run and test them locally before submitting them to the cloud. Azure Data Lake provides a NuGet package called Azure Data Lake U-SQL SDK for this scenario, which lets you scale U-SQL runs and tests easily. You can also integrate these U-SQL tests with a continuous integration (CI) system to automate compilation and testing. If you'd rather manually run and debug U-SQL scripts locally with GUI tooling, you can use Azure Data Lake Tools for Visual Studio. You can learn more from [Run U-SQL scripts on your local machine](data-lake-analytics-data-lake-tools-local-run.md).
data-lake-analytics Data Lake Analytics Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-analytics-whats-new.md
Last updated 11/16/2022
# What's new in Data Lake Analytics? Azure Data Lake Analytics is updated on an aperiodic basis for certain components. To stay current with the most recent updates, this article provides information about:
data-lake-analytics Data Lake Tools For Vscode Local Run And Debug https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/data-lake-tools-for-vscode-local-run-and-debug.md
Last updated 07/14/2017 # Run U-SQL and debug locally in Visual Studio Code++ This article describes how to run U-SQL jobs on a local development machine to speed up early coding phases or to debug code locally in Visual Studio Code. For instructions on Azure Data Lake Tools for Visual Studio Code, see [Use Azure Data Lake Tools for Visual Studio Code](data-lake-analytics-data-lake-tools-for-vscode.md). Only Windows installations of the Azure Data Lake Tools for Visual Studio support running and debugging U-SQL locally. Installations on macOS and Linux-based operating systems do not support this feature.
data-lake-analytics Dotnet Upgrade Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/dotnet-upgrade-troubleshoot.md
Last updated 10/11/2019
# Azure Data Lake Analytics is upgrading to the .NET Framework v4.7.2 + The Azure Data Lake Analytics default runtime is upgrading from .NET Framework v4.5.2 to .NET Framework v4.7.2. This change introduces a small risk of breaking changes if your U-SQL code uses custom assemblies, and those custom assemblies use .NET libraries. This upgrade from .NET Framework 4.5.2 to version 4.7.2 means that the .NET Framework deployed in a U-SQL runtime (the default runtime) will now always be 4.7.2. There isn't a side-by-side option for .NET Framework versions.
data-lake-analytics Migrate Azure Data Lake Analytics To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/migrate-azure-data-lake-analytics-to-synapse.md
Last updated 11/15/2022
# Migrate Azure Data Lake Analytics to Azure Synapse Analytics
-Microsoft launched the Azure Synapse Analytics that aims at bringing both data lakes and data warehouse together for a unique big data analytics experience. It will help customers gather and analyze all the varying data, to solve data inefficiency, and work together. Moreover, Synapse's integration with Azure Machine Learning and Power BI will allow the improved ability for organizations to get insights from its data and execute machine learning to all its smart apps.
+Azure Data Lake Analytics will be retired on **29 February 2024**. Learn more [with this announcement](https://azure.microsoft.com/updates/migrate-to-azure-synapse-analytics/).
-The document shows you how to do the migration from Azure Data Lake Analytics to Azure Synapse Analytics.
+If you're already using Azure Data Lake Analytics, you can create a migration plan to Azure Synapse Analytics for your organization.
+
+Microsoft launched Azure Synapse Analytics to bring data lakes and data warehouses together for a unified big data analytics experience. It helps you gather and analyze all your varying data, solve data inefficiencies, and work together across teams. Moreover, Synapse's integration with Azure Machine Learning and Power BI improves an organization's ability to get insights from its data and apply machine learning to all its smart apps.
+
+This document shows you how to migrate from Azure Data Lake Analytics to Azure Synapse Analytics.
## Recommended approach+ - Step 1: Assess readiness - Step 2: Prepare to migrate - Step 3: Migrate data and application workloads
data-lake-analytics Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md
# Azure Policy built-in definitions for Azure Data Lake Analytics + This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy definitions for Azure Data Lake Analytics. For additional Azure Policy built-ins for other services, see [Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
data-lake-analytics Runtime Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/runtime-troubleshoot.md
Last updated 10/10/2019
# Learn how to troubleshoot U-SQL runtime failures due to runtime changes + The Azure Data Lake U-SQL runtime, including the compiler, optimizer, and job manager, is what processes your U-SQL code. ## Choosing your U-SQL runtime version
data-lake-analytics Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md
# Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics + [Regulatory Compliance in Azure Policy](../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This
data-lake-analytics Understand Spark Code Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-code-concepts.md
Last updated 05/17/2022
# Understand Apache Spark code for U-SQL developers + This section provides high-level guidance on transforming U-SQL Scripts to Apache Spark. - It starts with a [comparison of the two language's processing paradigms](#understand-the-u-sql-and-spark-language-and-processing-paradigms)
data-lake-analytics Understand Spark Data Formats https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-data-formats.md
Last updated 01/31/2019
# Understand differences between U-SQL and Spark data formats + If you want to use either [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks) or [Azure HDInsight Spark](../hdinsight/spark/apache-spark-overview.md), we recommend that you migrate your data from [Azure Data Lake Storage Gen1](../data-lake-store/data-lake-store-overview.md) to [Azure Data Lake Storage Gen2](../storage/blobs/data-lake-storage-introduction.md). In addition to moving your files, you'll also want to make your data, stored in U-SQL tables, accessible to Spark.
data-lake-analytics Understand Spark For Usql Developers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/understand-spark-for-usql-developers.md
Last updated 10/15/2019
# Understand Apache Spark for U-SQL developers + Microsoft supports several Analytics services such as [Azure Databricks](/azure/databricks/scenarios/what-is-azure-databricks) and [Azure HDInsight](../hdinsight/hdinsight-overview.md) as well as Azure Data Lake Analytics. We hear from developers that they have a clear preference for open-source-solutions as they build analytics pipelines. To help U-SQL developers understand Apache Spark, and how you might transform your U-SQL scripts to Apache Spark, we've created this guidance. It includes a number of steps you can take, and several alternatives.
data-share Share Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/share-your-data.md
Create an Azure Data Share resource in an Azure resource group.
Start by preparing your environment for the Azure CLI: Use these commands to create the resource:
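A minimal sketch of those commands from the `datashare` CLI extension (resource names are placeholders, and the extension's exact parameters may vary by version):

```azurecli
# The datashare commands ship in a CLI extension.
az extension add --name datashare

# Create the Data Share account, then a share inside it.
az datashare account create --name MyDataShareAccount \
    --resource-group myResourceGroup --location eastus2
az datashare create --account-name MyDataShareAccount \
    --resource-group myResourceGroup --name MyShare --share-kind CopyBased
```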
data-share Subscribe To Data Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-share/subscribe-to-data-share.md
Prepare your Azure CLI environment and then view your invitations.
Start by preparing your environment for the Azure CLI: Run the [az datashare consumer-invitation list-invitation](/cli/azure/datashare/consumer-invitation) command to see your current invitations:
databox-online Azure Stack Edge Gpu Create Virtual Machine Marketplace Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-create-virtual-machine-marketplace-image.md
For more information, go to [Deploy a VM on your Azure Stack Edge Pro device usi
Before you can use Azure Marketplace images for Azure Stack Edge, make sure you're connected to Azure in either of the following ways. ## Search for Azure Marketplace images
databox-online Azure Stack Edge Gpu Deploy Virtual Machine Cli Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-virtual-machine-cli-python.md
Before you begin creating and managing a VM on your Azure Stack Edge Pro device
7. Prepare your environment for the Azure CLI:
- [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](../../includes/azure-cli-prepare-your-environment-no-header.md)]
+ [!INCLUDE [azure-cli-prepare-your-environment-no-header.md](~/articles/reusable-content/azure-cli/azure-cli-prepare-your-environment-no-header.md)]
## Step 1: Set up Azure CLI/Python on the client
databox-online Azure Stack Edge Mini R Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-prep.md
After a device is delivered, a **Configure hardware** link is added to the order
If necessary, prepare your environment for Azure CLI. To create an Azure Stack Edge resource, run the following commands in Azure CLI.
databox-online Azure Stack Edge Pro R Deploy Prep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-prep.md
After a device is shipped, a **Configure hardware** link is added to the order i
If necessary, prepare your environment for Azure CLI. To create an Azure Stack Edge resource, run the following commands in Azure CLI.
databox Data Box Disk Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-quickstart-portal.md
Once the order is created, the disks are prepared for shipment.
Use these Azure CLI commands to create a Data Box Disk job. 1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group or use an existing resource group:
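For reference, the resource-group step is the standard command; the group name and region below are placeholders:

```azurecli
az group create --name myResourceGroup --location westus
```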
databox Data Box Heavy Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-heavy-quickstart-portal.md
Once the order is created, the device is prepared for shipment.
Use these Azure CLI commands to create a Data Box Heavy job. 1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group or use an existing resource group:
dedicated-hsm Quickstart Hsm Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dedicated-hsm/quickstart-hsm-azure-cli.md
This article describes how to create and manage an Azure Dedicated HSM by using
```azurecli-interactive az account set --subscription 00000000-0000-0000-0000-000000000000 ``` - All requirements met for a dedicated HSM, including registration, approval, and a virtual network and virtual machine to use for provisioning. For more information about dedicated HSM requirements and prerequisites, see [Tutorial: Deploying HSMs into an existing virtual network using the Azure CLI](tutorial-deploy-hsm-cli.md).
defender-for-cloud Plan Defender For Servers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/plan-defender-for-servers.md
Before you review the series of articles in the Defender for Servers planning gu
The following diagram shows an overview of the Defender for Servers deployment process: - Learn more about [foundational cloud security posture management (CSPM)](concept-cloud-security-posture-management.md#defender-cspm-plan-options). - Learn more about [Azure Arc](../azure-arc/index.yml) onboarding.
digital-twins How To Manage Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-manage-routes.md
Select your instance from the results to see these details in the Overview for y
Follow the instructions below if you intend to use the Azure CLI while following this guide. ## Create an endpoint for Azure Digital Twins
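For the Event Grid case, here's a minimal sketch of the endpoint command from the azure-iot CLI extension, with placeholder names for the instance, endpoint, and topic:

```azurecli
# Attach an existing Event Grid topic to the instance as an endpoint.
az dt endpoint create eventgrid --dt-name myDigitalTwinsInstance \
    --endpoint-name myEventGridEndpoint \
    --eventgrid-resource-group myResourceGroup --eventgrid-topic myEventGridTopic
```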
digital-twins How To Set Up Instance Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-set-up-instance-cli.md
This article covers the steps to set up a new Azure Digital Twins instance, incl
[!INCLUDE [digital-twins-setup-steps.md](../../includes/digital-twins-setup-steps.md)] [!INCLUDE [CLI setup for Azure Digital Twins](../../includes/digital-twins-cli.md)]
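At its core, the setup reduces to something like the following sketch (instance and group names are placeholders; `az dt` lives in the azure-iot extension):

```azurecli
# Install or update the extension that provides the az dt commands.
az extension add --upgrade --name azure-iot

# Create the Azure Digital Twins instance.
az dt create --dt-name myDigitalTwinsInstance --resource-group myResourceGroup
```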
digital-twins How To Use Data History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/how-to-use-data-history.md
It also contains a sample twin graph that you can use to see the historized twin
## Prerequisites >[!NOTE] > You can also use Azure Cloud Shell in the PowerShell environment instead of the Bash environment, if you prefer. The commands on this page are written for the Bash environment, so they may require some small adjustments to be run in PowerShell.
digital-twins Tutorial Command Line Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-command-line-cli.md
The tutorial uses two pre-written models that are part of the C# [end-to-end sam
To get the files on your machine, use the navigation links above and copy the file bodies into local files on your machine with the same names (*Room.json* and *Floor.json*). [!INCLUDE [CLI setup for Azure Digital Twins](../../includes/digital-twins-cli.md)]
digital-twins Tutorial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/tutorial-end-to-end.md
In this tutorial, you will...
[!INCLUDE [Azure Digital Twins tutorial: sample prerequisites](../../includes/digital-twins-tutorial-sample-prereqs.md)] [!INCLUDE [CLI setup for Azure Digital Twins](../../includes/digital-twins-cli.md)]
dns Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-cli.md
Azure DNS also supports private DNS zones. To learn more about private DNS zones
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
dns Private Dns Getstarted Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/private-dns-getstarted-cli.md
A DNS zone is used to host the DNS records for a particular domain. To start hos
## Prerequisites - You can also complete this quickstart using [Azure PowerShell](private-dns-getstarted-powershell.md).
event-grid Custom Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-quickstart.md
When you're finished, you see that the event data has been sent to the web app.
[!INCLUDE [quickstarts-free-trial-note.md](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.70 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
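A minimal sketch of the topic-and-subscription steps behind that quickstart, with placeholder names and a hypothetical webhook endpoint:

```azurecli
# Create the custom topic.
az eventgrid topic create --name mytopic --resource-group myResourceGroup --location westus2

# Subscribe a webhook endpoint to the topic.
az eventgrid event-subscription create --name demoSubscription \
    --source-resource-id "$(az eventgrid topic show --name mytopic --resource-group myResourceGroup --query id --output tsv)" \
    --endpoint https://contoso.azurewebsites.net/api/updates
```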
event-grid Custom Event To Hybrid Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-event-to-hybrid-connection.md
Azure Event Grid is an eventing service for the cloud. Azure Relay Hybrid Connec
- This article assumes you already have a hybrid connection and a listener application. To get started with hybrid connections, see [Get started with Relay Hybrid Connections - .NET](../azure-relay/relay-hybrid-connections-dotnet-get-started.md) or [Get started with Relay Hybrid Connections - Node](../azure-relay/relay-hybrid-connections-node-get-started.md). - This article requires version 2.0.56 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
event-grid Publish Iot Hub Events To Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-iot-hub-events-to-logic-apps.md
This article walks through a sample configuration that uses IoT Hub and Event Gr
* An email account from any email provider that is supported by Azure Logic Apps, such as Office 365 Outlook or Outlook.com. This email account is used to send the event notifications. ## Create an IoT hub
event-grid Event Grid Cli Subscribe Custom Topic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/event-grid-cli-subscribe-custom-topic.md
This article provides a sample Azure CLI script that shows how to create a custo
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
event-grid Storage Upload Process Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/storage-upload-process-images.md
az webapp config appsettings set --name $webapp --resource-group myResourceGroup
``` ```powershell
-az webapp config appsettings set --name $webapp --resource-group myResourceGroup `
- --settings AzureStorageConfig__AccountName=$blobStorageAccount `
- AzureStorageConfig__ImageContainer=images `
- AzureStorageConfig__ThumbnailContainer=thumbnails `
- AzureStorageConfig__AccountKey=$blobStorageAccountKey
+Set-AzWebApp -ResourceGroupName myResourceGroup -Name $webapp `
+    -AppSettings @{
+        AzureStorageConfig__AccountName = $blobStorageAccount
+        AzureStorageConfig__ImageContainer = "images"
+        AzureStorageConfig__ThumbnailContainer = "thumbnails"
+        AzureStorageConfig__AccountKey = $blobStorageAccountKey
+    }
``` # [JavaScript v12 SDK](#tab/javascript)
event-hubs Event Hubs Dotnet Standard Getstarted Send https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md
This section shows you how to create a .NET Core console application to send eve
``` - ### Authenticate the app to Azure [!INCLUDE [event-hub-passwordless-template-tabbed](../../includes/passwordless/event-hub/event-hub-passwordless-template-tabbed.md)]
Here are the important steps from the code:
1. Sends the batch of messages to the event hub using the [EventHubProducerClient.SendAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.sendasync) method. In the code sample below, replace the `<EVENT_HUB_NAMESPACE>` and `<HUB_NAME>` placeholder values for the `EventHubProducerClient` parameters.
-
```csharp
using Azure.Identity;
using Azure.Messaging.EventHubs;
finally
} ``` - 5. Build the project, and ensure that there are no errors. 6. Run the program and wait for the confirmation message.
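For reference, the send flow described in the steps above condenses to a minimal sketch like the following. This is only a sketch, assuming the Azure.Messaging.EventHubs and Azure.Identity packages; the `<EVENT_HUB_NAMESPACE>` and `<HUB_NAME>` placeholders follow the article's convention.

```csharp
using System;
using System.Text;
using Azure.Identity;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Authenticate with DefaultAzureCredential and create the producer client.
await using var producerClient = new EventHubProducerClient(
    "<EVENT_HUB_NAMESPACE>.servicebus.windows.net",
    "<HUB_NAME>",
    new DefaultAzureCredential());

// Create a batch, add events to it, and send the whole batch in one call.
using EventDataBatch eventBatch = await producerClient.CreateBatchAsync();

for (int i = 1; i <= 3; i++)
{
    if (!eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes($"Event {i}"))))
    {
        throw new Exception($"Event {i} is too large for the batch and can't be sent.");
    }
}

await producerClient.SendAsync(eventBatch);
Console.WriteLine("A batch of 3 events has been published.");
```

Note that if an event doesn't fit, `TryAdd` returns `false` rather than throwing, which is why the sketch checks the return value.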
finally
This section shows how to write a .NET Core console application that receives events from an event hub using an event processor. The event processor simplifies receiving events from event hubs by managing persistent checkpoints and parallel receptions from those event hubs. An event processor is associated with a specific event hub and a consumer group. It receives events from multiple partitions in the event hub, passing them to a handler delegate for processing using code that you provide. + > [!WARNING] > If you run this code on **Azure Stack Hub**, you will experience runtime errors unless you target a specific Storage API version. That's because the Event Hubs SDK uses the latest Azure Storage API version available in Azure, which may not be available on your Azure Stack Hub platform. Azure Stack Hub may support a different version of the Storage Blob SDK than the versions typically available on Azure. If you are using Azure Blob Storage as a checkpoint store, check the [supported Azure Storage API version for your Azure Stack Hub build](/azure-stack/user/azure-stack-acs-differences?#api-version) and target that version in your code. > > For example, if you're running on Azure Stack Hub version 2005, the highest available version for the Storage service is version 2019-02-02. By default, the Event Hubs SDK client library uses the highest available version on Azure (2019-07-07 at the time of the release of the SDK). In this case, besides following the steps in this section, you'll also need to add code to target the Storage service API version 2019-02-02. For an example of how to target a specific Storage API version, see [this sample on GitHub](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventhub/Azure.Messaging.EventHubs.Processor/samples/). + ### Create an Azure Storage Account and a blob container In this quickstart, you use Azure Storage as the checkpoint store. Follow these steps to create an Azure Storage account.
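For the Azure Stack Hub case called out in the warning above, pinning the Storage API version comes down to passing `BlobClientOptions` when you construct the checkpoint store client. Here's a minimal sketch, assuming the Azure.Storage.Blobs package; the placeholders follow the article's convention.

```csharp
using Azure.Storage.Blobs;

// Pin the Blob service API version so the checkpoint store works against
// Azure Stack Hub builds that don't support the SDK's default version.
var options = new BlobClientOptions(BlobClientOptions.ServiceVersion.V2019_02_02);

var storageClient = new BlobContainerClient(
    "<AZURE_STORAGE_CONNECTION_STRING>",
    "<BLOB_CONTAINER_NAME>",
    options);
```

The same `options` instance can be passed to whichever `BlobContainerClient` constructor overload you're using.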
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
## [Passwordless](#tab/passwordless) [!INCLUDE [event-hub-storage-assign-roles](../../includes/passwordless/event-hub/event-hub-storage-assign-roles.md)]
-
## [Connection String](#tab/connection-string) [Get the connection string to the storage account](../storage/common/storage-configure-connection-string.md)
In this quickstart, you use Azure Storage as the checkpoint store. Follow these
Note down the connection string and the container name. You'll use them in the receive code. - ### Create a project for the receiver 1. In the Solution Explorer window, right-click the **EventHubQuickStart** solution, point to **Add**, and select **New Project**.
Note down the connection string and the container name. You'll use them in the r
``` - ### Update the code Replace the contents of **Program.cs** with the following code:
Here are the important steps from the code:
1. Creates an [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object using the event hub namespace and the event hub name. You need to build [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object for the container in the Azure storage you created earlier. 1. Specifies handlers for the [ProcessEventAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.processeventasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.processerrorasync) events of the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object. 1. Starts processing events by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.startprocessingasync) on the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object.
-1. When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.stopprocessingasync) on the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object.
-
+1. Stops processing events after 30 seconds by invoking [StopProcessingAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.stopprocessingasync) on the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object.
+
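These steps condense to roughly the following sketch, shown here ahead of the full sample. It's only a sketch, assuming the Azure.Messaging.EventHubs.Processor, Azure.Storage.Blobs, and Azure.Identity packages; the placeholders follow the article's convention.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Consumer;
using Azure.Messaging.EventHubs.Processor;
using Azure.Storage.Blobs;

// The blob container that stores checkpoints and load-balancing state.
var storageClient = new BlobContainerClient(
    new Uri("https://<STORAGE_ACCOUNT_NAME>.blob.core.windows.net/<BLOB_CONTAINER_NAME>"),
    new DefaultAzureCredential());

var processor = new EventProcessorClient(
    storageClient,
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<EVENT_HUB_NAMESPACE>.servicebus.windows.net",
    "<HUB_NAME>",
    new DefaultAzureCredential());

// Handle received events and persist a checkpoint after each one.
processor.ProcessEventAsync += async args =>
{
    Console.WriteLine($"Received event: {args.Data.EventBody}");
    await args.UpdateCheckpointAsync();
};

// Log processing errors; returning a completed task keeps the processor running.
processor.ProcessErrorAsync += args =>
{
    Console.WriteLine($"Partition '{args.PartitionId}': {args.Exception.Message}");
    return Task.CompletedTask;
};

await processor.StartProcessingAsync();
await Task.Delay(TimeSpan.FromSeconds(30)); // process events for 30 seconds
await processor.StopProcessingAsync();
```

Checkpointing after every event, as the handler above does, keeps the sketch simple but adds a storage call per event; production code often checkpoints every N events instead.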
In the code sample below, replace the `<STORAGE_ACCOUNT_NAME>` and `<BLOB_CONTAINER_NAME>` placeholder values for the `BlobContainerClient` URI. Replace the `<EVENT_HUB_NAMESPACE>` and `<HUB_NAME>` placeholder values for the `EventProcessorClient` as well. ```csharp
Here are the important steps from the code:
1. Creates an [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object using the primary connection string to the namespace and the event hub. You need to build [BlobContainerClient](/dotnet/api/azure.storage.blobs.blobcontainerclient) object for the container in the Azure storage you created earlier. 1. Specifies handlers for the [ProcessEventAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.processeventasync) and [ProcessErrorAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.processerrorasync) events of the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object. 1. Starts processing events by invoking the [StartProcessingAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.startprocessingasync) on the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object.
-1. When user presses a key to end the processing, invokes the [StopProcessingAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.stopprocessingasync) on the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object.
-
+1. Stops processing events after 30 seconds by invoking [StopProcessingAsync](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient.stopprocessingasync) on the [EventProcessorClient](/dotnet/api/azure.messaging.eventhubs.eventprocessorclient) object.
+
In the code sample below, replace the `<AZURE_STORAGE_CONNECTION_STRING>` and `<BLOB_CONTAINER_NAME>` placeholder values for the `BlobContainerClient` URI. Replace the `<EVENT_HUB_NAMESPACE_CONNECTION_STRING>` and `<HUB_NAME>` placeholder values for the `EventProcessorClient` as well. ```csharp
Task ProcessErrorHandler(ProcessErrorEventArgs eventArgs)
``` - 1. Build the project, and ensure that there are no errors. > [!NOTE]
See the following tutorial:
> [!div class="nextstepaction"] > [Tutorial: Visualize data anomalies in real-time events sent to Azure Event Hubs](event-hubs-tutorial-visualize-anomalies.md)+
event-hubs Event Hubs Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/event-hubs-quickstart-cli.md
In this quickstart, you create an event hub using Azure CLI.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
expressroute Expressroute Troubleshooting Arp Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-troubleshooting-arp-resource-manager.md
Title: 'Azure ExpressRoute: ARP tables - Troubleshooting'
description: This page provides instructions on getting the Address Resolution Protocol (ARP) tables for an ExpressRoute circuit - Previously updated : 12/15/2020 Last updated : 01/05/2022 # Getting ARP tables in the Resource Manager deployment model+ > [!div class="op_single_selector"] > * [PowerShell - Resource Manager](expressroute-troubleshooting-arp-resource-manager.md) > * [PowerShell - Classic](expressroute-troubleshooting-arp-classic.md) >
->
This article walks you through the steps to get the ARP tables for your ExpressRoute circuit. > [!IMPORTANT] > This document is intended to help you diagnose and fix simple issues. It is not intended to be a replacement for Microsoft support. You must open a support ticket with [Microsoft support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) if you are unable to solve the problem using the guidance described below. >
->
[!INCLUDE [updated-for-az](../../includes/hybrid-az-ps.md)] ## Address Resolution Protocol (ARP) and ARP tables+ Address Resolution Protocol (ARP) is a layer 2 protocol defined in [RFC 826](https://tools.ietf.org/html/rfc826). ARP is used to map an Ethernet address (MAC address) to an IP address. The ARP table provides the following information for both the primary and secondary interfaces for each peering type:
firewall Deploy Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/deploy-cli.md
If you prefer, you can complete this procedure using the [Azure portal](tutorial
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
frontdoor Create Front Door Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/create-front-door-cli.md
In this quickstart, you'll learn how to create an Azure Front Door Standard/Prem
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Create a resource group
frontdoor Front Door Custom Domain Https https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/front-door-custom-domain-https.md
Azure Front Door can now access this key vault and the certificates it contains.
- The available secret versions. > [!NOTE]
- > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, please set the secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed.
+ > In order for the certificate to be automatically rotated to the latest version when a newer version of the certificate is available in your Key Vault, set the secret version to 'Latest'. If a specific version is selected, you have to re-select the new version manually for certificate rotation. It takes up to 72 hours for the new version of the certificate/secret to be deployed.
> > :::image type="content" source="./media/front-door-custom-domain-https/certificate-version.png" alt-text="Screenshot of selecting secret version on update custom domain page.":::
+ > [!WARNING]
+ > This warning appears only in the Azure portal. You need to configure your service principal to have a GET permission on the Key Vault. In order for a user to see the certificate in the portal drop-down, the user account must have LIST and GET permissions on the Key Vault. If a user doesn't have these permissions, they'll see an inaccessible error message in the portal. An inaccessible error message doesn't have any impact on certificate auto-rotation or any HTTPS function. No actions are required for this error message if you don't intend to make changes to the certificate or the version. If you want to change the information on this page, see [provide permission to Key Vault](../key-vault/general/rbac-guide.md?tabs=azure-cli) to grant your account the LIST and GET permissions on the Key Vault.
++ 5. When you use your own certificate, domain validation isn't required. Continue to [Wait for propagation](#wait-for-propagation). ## Validate the domain
frontdoor Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/scripts/custom-domain.md
This Azure CLI script example deploys a custom domain name and TLS certificate o
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
frontdoor Front Door Add Rules Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/front-door-add-rules-cli.md
In this tutorial, you'll learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Create an Azure Front Door
frontdoor How To Cache Purge Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-cache-purge-cli.md
Azure Front Door caches assets until the asset's time-to-live (TTL) expires. Whe
Best practice is to make sure your users always obtain the latest copy of your assets. The way to do that is to version your assets for each update and publish them as new URLs. Azure Front Door Standard/Premium will immediately retrieve the new assets for the next client requests. Sometimes you may wish to purge cached contents from all edge nodes and force them all to retrieve new updated assets. Typical reasons to purge cached content are that you've made new updates to your application, or you need to update assets that contain incorrect information. * Review [Caching with Azure Front Door](../front-door-caching.md) to understand how caching works. * Have a functioning Azure Front Door profile. Refer to [Create a Front Door - CLI](../create-front-door-cli.md) to learn how to create one.
frontdoor How To Enable Private Link Storage Account Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-storage-account-cli.md
This article will guide you through how to configure Azure Front Door Premium tier to connect to your Storage Account privately using the Azure Private Link service with Azure CLI. * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Have a functioning Azure Front Door Premium profile, an endpoint and an origin group. For more information on how to create an Azure Front Door profile, see [Create a Front Door - CLI](../create-front-door-cli.md).
frontdoor How To Enable Private Link Web App Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/frontdoor/standard-premium/how-to-enable-private-link-web-app-cli.md
This article will guide you through how to configure Azure Front Door Premium tier to connect to your App service privately using the Azure Private Link service with Azure CLI. * An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * Have a functioning Azure Front Door Premium profile, an endpoint and an origin group. For more information on how to create an Azure Front Door profile, see [Create a Front Door - CLI](../create-front-door-cli.md).
governance Machine Configuration Create Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/machine-configuration/machine-configuration-create-setup.md
Title: How to install the machine configuration authoring module description: Learn how to install the PowerShell module for creating and testing machine configuration policy definitions and assignments. Previously updated : 10/17/2022 Last updated : 01/13/2023
custom content including:
Support for applying configurations through machine configuration is introduced in version `3.4.2`.
-> [!IMPORTANT]
-> Custom packages that audit the state of an environment are Generally Available,
-> but packages that apply configurations are **in preview**. **The following limitations apply:**
->
-> To test creating and applying configurations on Linux, the
-> `GuestConfiguration` module is only available on Ubuntu 18 but the package
-> and policy definitions produced by the module can be used on any Linux distro/version
-> supported in Azure or Arc.
->
-> Testing packages on MacOS isn't available.
- ### Base requirements
-Operating Systems where the module can be installed:
+Operating systems where the module can be installed:
- Ubuntu 18 - Windows
-The module can be installed on a machine running PowerShell 7. Install the
+The module can be installed on a machine running PowerShell 7.x. Install the
versions of PowerShell listed below. | OS | PowerShell Version |
hdinsight Hdinsight Hadoop Create Linux Clusters Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-cli.md
The steps in this document walk through creating an HDInsight 4.0 cluster using t
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Create a cluster
hdinsight Apache Spark Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/spark/apache-spark-create-cluster-cli.md
If you're using multiple clusters together, you'll want to create a virtual netw
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Create an Apache Spark cluster
healthcare-apis Fhir Paas Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/fhir-paas-cli-quickstart.md
In this quickstart, you'll learn how to deploy Azure API for FHIR in Azure using
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Add Azure Health Data Services (for example, HealthcareAPIs) extension
healthcare-apis Get Healthcare Apis Access Token Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/get-healthcare-apis-access-token-cli.md
In this article, you'll learn how to obtain an access token for the Azure API for FHIR using the Azure CLI. When you [provision the Azure API for FHIR](fhir-paas-portal-quickstart.md), you configure a set of users or service principals that have access to the service. If your user object ID is in the list of allowed object IDs, you can access the service using a token obtained using the Azure CLI. ## Obtain a token
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
The following tutorials describe steps to enable SMART on FHIR applications with FHIR Se
- [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md) - [Register public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app) - After registering the application, make note of the applicationId for the client application.
+- Ensure you have access to the Azure subscription of the FHIR service, so that you can create resources and add role assignments.
-## SMART on FHIR using samples (preferred approach)
-As a prerequisite, ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
+## SMART on FHIR using AHDS Samples OSS
### Step 1: Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to the "FHIR SMART User" role will be able to access the FHIR service if their requests comply with the SMART on FHIR implementation guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited to the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
Follow the steps listed under section [Manage Users: Assign Users to Role](https
### Step 2: FHIR server integration with samples [Follow the steps](https://github.com/Azure-Samples/azure-health-data-services-samples/blob/main/samples/Patient%20and%20Population%20Services%20G10/docs/deployment.md) under Azure Health Data Services Samples OSS. This enables integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more).
-This is our preferred approach, as it demonstrates to Health IT developers steps needed to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services criterion.
- > [!NOTE]
-> These samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
+> Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
## SMART on FHIR proxy
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/smart-on-fhir.md
The following tutorials provide steps to enable SMART on FHIR applications with FHIR Ser
- [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md) - [Register public client application in Azure AD](https://learn.microsoft.com/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app) - After registering the application, make note of the applicationId for the client application.
+- Ensure you have access to the Azure subscription of the FHIR service, so that you can create resources and add role assignments.
-## SMART on FHIR using samples (Preferred approach)
-
-As a pre-requisite , ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
+## SMART on FHIR using AHDS Samples OSS
### Step 1: Set up FHIR SMART user role Follow the steps listed under section [Manage Users: Assign Users to Role](https://learn.microsoft.com/azure/active-directory/fundamentals/active-directory-users-assign-role-azure-portal). Any user added to this role will be able to access the FHIR Service if their requests comply with the SMART on FHIR implementation guide, such as the request having an access token that includes a fhirUser claim and a clinical scopes claim. The access granted to users in this role is then limited to the resources associated with their fhirUser compartment and the restrictions in the clinical scopes.
Follow the steps listed under section [Manage Users: Assign Users to Role](https
### Step 2: FHIR server integration with samples [Follow the steps](https://github.com/Azure-Samples/azure-health-data-services-samples/blob/main/samples/Patient%20and%20Population%20Services%20G10/docs/deployment.md) under Azure Health Data Services Samples OSS. This enables integration of the FHIR server with other Azure services (such as APIM, Azure Functions, and more).
-This is our preferred approach, as it demonstrates to Health IT developers steps needed to comply with 21st Century Act Criterion §170.315(g)(10) Standardized API for patient and population services criterion.
-- > [!NOTE]
-> These samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
-
+> Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Azure Active Directory as the identity provider workflow.
## SMART on FHIR Proxy
+<details>
+ <summary> Click to expand! </summary>
+
+> [!NOTE]
+> This is an alternative to the "SMART on FHIR using AHDS Samples OSS" approach described above. The SMART on FHIR proxy option only enables the EHR launch sequence.
### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources.
Notice that the SMART on FHIR app launcher updates the **Launch URL** informatio
![Screenshot showing SMART on FHIR app.](media/smart-on-fhir/smart-on-fhir-app.png) Inspect the token response to see how the launch context fields are passed on to the app.-
+ </details>
FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Concepts Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-machine-learning.md
Previously updated : 12/27/2022 Last updated : 1/12/2023
In this article, you learned about the MedTech service and Machine Learning serv
For an overview of the MedTech service, see > [!div class="nextstepaction"]
-> [The MedTech service overview](overview.md)
+> [What is the MedTech service?](overview.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Concepts Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-power-bi.md
Previously updated : 12/27/2021 Last updated : 1/12/2023
In this article, you've learned about the MedTech service and Power BI integrati
For an overview of the MedTech service, see > [!div class="nextstepaction"]
-> [The MedTech service overview](overview.md)
+> [What is the MedTech service?](overview.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Concepts Teams https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/concepts-teams.md
Previously updated : 12/27/2022 Last updated : 1/12/2023
In this article, you've learned about the MedTech service and Teams notification
For an overview of the MedTech service, see > [!div class="nextstepaction"]
-> [The MedTech service overview](overview.md)
+> [What is the MedTech service?](overview.md)
FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Git Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/git-projects.md
Title: GitHub projects for the MedTech service - Azure Health Data Services
-description: MedTech service has a robust open-source (GitHub) library for ingesting device messages from popular wearable devices.
+description: The MedTech service has a robust open-source (GitHub) library for ingesting device messages from popular wearable devices.
Previously updated : 12/15/2022 Last updated : 1/12/2023 # Open-source projects
Health Data Sync
In this article, you learned about the open-source projects for the MedTech service.
-Learn about the different deployment methods for the MedTech service, see
+To learn about the different deployment methods for the MedTech service, see
> [!div class="nextstepaction"] > [Choose a deployment method for the MedTech service](deploy-new-choose.md)
healthcare-apis How To Configure Device Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-device-mappings.md
Previously updated : 12/27/2022 Last updated : 1/12/2023
The content payload itself is an Azure Event Hubs message, which is composed of
} } ```+
+## Device mappings validations
+
+The validation process validates the device mappings before allowing them to be saved for use. These elements are required in the device mapping templates.
+
+**Device mappings**
+
+|Element|Required|
+|:-|:-|
+|TypeName|True|
+|TypeMatchExpression|True|
+|DeviceIdExpression|True|
+|TimestampExpression|True|
+|Values[].ValueName|True|
+|Values[].ValueExpression|True|
+
+> [!NOTE]
+> The `Values[].ValueName` and `Values[].ValueExpression` elements are only required if you have a value entry in the array. It's valid to have no values mapped. This is used when the telemetry being sent is an event.
+>
+> For example:
+>
+> Some IoMT scenarios may require creating an Observation Resource in the FHIR service that does not contain a value.
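+To make the required-element table above concrete, here's a rough, hypothetical illustration of such a check written in C# with System.Text.Json. It isn't the MedTech service's actual validation code, and the sample template and element casing simply follow the table above.
+
+```csharp
+using System;
+using System.Text.Json;
+
+// Hypothetical input: a template that's missing several required elements.
+using JsonDocument doc = JsonDocument.Parse(
+    "{\"TypeName\":\"heartrate\",\"Values\":[{\"ValueName\":\"hr\"}]}");
+
+ValidateDeviceTemplate(doc.RootElement);
+
+// Illustrative only: report the required elements from the table above
+// that are missing from a single device mapping template.
+static void ValidateDeviceTemplate(JsonElement template)
+{
+    string[] required = { "TypeName", "TypeMatchExpression", "DeviceIdExpression", "TimestampExpression" };
+
+    foreach (string name in required)
+    {
+        if (!template.TryGetProperty(name, out _))
+        {
+            Console.WriteLine($"Missing required element: {name}");
+        }
+    }
+
+    // Values[].ValueName and Values[].ValueExpression are required only when
+    // the Values array has entries; an empty array is valid (event telemetry).
+    if (template.TryGetProperty("Values", out JsonElement values) && values.ValueKind == JsonValueKind.Array)
+    {
+        foreach (JsonElement entry in values.EnumerateArray())
+        {
+            if (!entry.TryGetProperty("ValueName", out _) || !entry.TryGetProperty("ValueExpression", out _))
+            {
+                Console.WriteLine("Each Values[] entry needs ValueName and ValueExpression.");
+            }
+        }
+    }
+}
+```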
+ ## CollectionContentTemplate The CollectionContentTemplate is the **root** template type used by the MedTech service device mappings template and represents a list of all templates that will be used during the normalization process.
You can define one or more templates within the MedTech service device mapping.
|[IotJsonPathContentTemplate](how-to-use-iot-jsonpath-content-mappings.md)|A template that supports messages sent from Azure IoT Hub or the Legacy Export Data feature of Azure IoT Central. > [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
+> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing common MedTech service errors.
## Next steps
healthcare-apis How To Configure Fhir Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-fhir-mappings.md
Previously updated : 12/27/2022 Last updated : 1/12/2023
configuration controls.
> [!NOTE] > Mappings are stored in an underlying blob storage and loaded from blob per compute execution. Once updated they should take effect immediately.
+## FHIR destination mappings validations
+
+The validation process validates the FHIR destination mappings before allowing them to be saved for use. These elements are required in the FHIR destination mappings templates.
+
+**FHIR destination mappings**
+
+|Element|Required|
+|:-|:-|
+|TypeName|True|
+
+> [!NOTE]
+> This is the only required FHIR destination mapping element validated at this time.
+ ### CodeValueFhirTemplate The CodeValueFhirTemplate is currently the only template supported in FHIR destination mapping at this time. It allows you to define codes, the effective period, and the value of the observation. Multiple value types are supported: [SampledData](https://www.hl7.org/fhir/datatypes.html#SampledData), [CodeableConcept](https://www.hl7.org/fhir/datatypes.html#CodeableConcept), and [Quantity](https://www.hl7.org/fhir/datatypes.html#Quantity). Along with these configurable values, the identifier for the Observation resource and linking to the proper Device and Patient resources are handled automatically.
Represents the [CodeableConcept](http://hl7.org/fhir/datatypes.html#CodeableConc
``` > [!TIP]
-> See the MedTech service article [Troubleshoot the MedTech service device and FHIR destination mappings](troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
+> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing common MedTech service errors.
## Next steps
healthcare-apis How To Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-configure-metrics.md
Title: Configure the MedTech service metrics - Azure Health Data Services
-description: This article explains how to display MedTech service metrics.
+description: This article explains how to configure the MedTech service metrics.
Previously updated : 12/27/2022 Last updated : 1/12/2023
To learn how to create an Azure portal dashboard and pin tiles, see [Create a da
In this article, you learned about how to configure the MedTech service metrics.
-To learn how to enable the MedTech service diagnostic settings to export logs and metrics to another location (for example: an Azure storage account) for audit, backup, or troubleshooting, see
+To learn how to enable the MedTech service diagnostic settings to export logs and metrics to another location (for example: Azure Log Analytics workspace) for audit, backup, or troubleshooting, see
> [!div class="nextstepaction"] > [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
healthcare-apis How To Create Mappings Copies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-create-mappings-copies.md
Previously updated : 12/15/2022 Last updated : 1/12/2023
This article provides steps for creating copies of your MedTech service's device
In this article, you learned about how to make copies of your MedTech service device and FHIR destination mappings.
-To learn how to troubleshoot device and FHIR destination mappings, see
+To learn how to troubleshoot MedTech service errors, see
> [!div class="nextstepaction"]
-> [Troubleshoot the MedTech service device and FHIR destination mappings](troubleshoot-mappings.md)
+> [Troubleshoot MedTech service errors](troubleshoot-errors.md)
(FHIR&#174;) is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis How To Use Calculatedcontenttemplate Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-calculatedcontenttemplate-mappings.md
Previously updated : 12/27/2022 Last updated : 1/12/2023
In the below example, height data arrives in either inches or meters. We want al
``` > [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](troubleshoot-mappings.md) for assistance fixing common errors and issues related to the MedTech service mappings.
+> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors.
## Next steps
-In this article, you learned how to configure the MedTech service device mappings.
+In this article, you learned how to configure the MedTech service device mappings using CalculatedContentTemplate mappings.
To learn how to configure FHIR destination mappings, see
healthcare-apis How To Use Custom Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-custom-functions.md
Previously updated : 12/15/2022 Last updated : 1/12/2023
Examples:
| {"unix": 0} | fromUnixTimestampMs(unix) | "1970-01-01T00:00:00+0" | > [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service device and FHIR destination mappings](troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
+> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors.
## Next steps
-In this article, you learned how to use the MedTech service custom functions.
+In this article, you learned how to use the MedTech service custom functions with the device mappings.
-To learn how to configure the MedTech service device mapping, see
+To learn how to configure the MedTech service device mappings, see
> [!div class="nextstepaction"] > [How to configure device mappings](how-to-configure-device-mappings.md)
healthcare-apis How To Use Iotjsonpathcontenttemplate Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-iotjsonpathcontenttemplate-mappings.md
Previously updated : 12/15/2022 Last updated : 1/12/2023
With each of these examples, you're provided with:
``` > [!TIP]
-> See the MedTech service article [Troubleshoot MedTech service Device and FHIR destination mappings](troubleshoot-mappings.md) for assistance fixing common errors and issues related to MedTech service mappings.
+> See the MedTech service article [Troubleshoot MedTech service errors](troubleshoot-errors.md) for assistance fixing MedTech service errors.
## Next steps
-In this article, you learned how to use IotJsonPathContentTemplate mappings with the MedTech service device mapping.
+In this article, you learned how to use IotJsonPathContentTemplate mappings with the MedTech service device mappings.
-To learn how to configure the MedTech service FHIR destination mapping, see
+To learn how to configure the MedTech service FHIR destination mappings, see
> [!div class="nextstepaction"] > [How to configure FHIR destination mappings](how-to-configure-fhir-mappings.md)
healthcare-apis How To Use Monitoring Tab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/how-to-use-monitoring-tab.md
Previously updated : 12/27/2022 Last updated : 1/12/2023
Metric category|Metric name|Metric description|
## Next steps
-In this article, you learned about how to use the MedTech service monitoring tab.
+In this article, you learned how to use the MedTech service monitoring tab.
To learn how to configure the MedTech service metrics, see
healthcare-apis Troubleshoot Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-errors.md
+
+ Title: Troubleshoot MedTech service errors - Azure Health Data Services
+description: This article helps you troubleshoot and resolve MedTech service errors.
+++++ Last updated : 1/12/2023+++
+# Troubleshoot MedTech service errors
+
+This article provides assistance with troubleshooting and fixing MedTech service errors.
+
+> [!TIP]
+> Having access to metrics and logs is essential for troubleshooting and assessing the overall performance of your MedTech service. Check out these MedTech service articles to learn more about how to enable, configure, and use these monitoring features:
+>
+> [How to use the MedTech service monitoring tab](how-to-use-monitoring-tab.md)
+>
+> [How to configure the MedTech service metrics](how-to-configure-metrics.md)
+>
+> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
+
+> [!NOTE]
+> When you open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your device and FHIR destination mappings](how-to-create-mappings-copies.md) to assist in the troubleshooting process.
+
+## Errors and fixes
+
+### The operation being performed by the MedTech service
+
+This property represents the operation being performed by the MedTech service when the error occurred. An operation generally represents the data flow stage while processing a device message. Here's a list of possible values for this property.
+
+> [!NOTE]
+> For information about the different stages of data flow in the MedTech service, see [The MedTech service data flow](data-flow.md).
+
+|Data flow stage|Description|
+||--|
+|Setup|The setup data flow stage is the operation specific to setting up your instance of the MedTech service.|
+|Normalization|Normalization is the data flow stage where the device data gets normalized.|
+|Grouping|Grouping is the data flow stage where the normalized data gets grouped.|
+|FHIRConversion|FHIRConversion is the data flow stage where the grouped-normalized data is transformed into a FHIR resource.|
+|Unknown|Unknown is the operation type assigned when the data flow stage can't be determined at the time the error occurs.|
+
+#### The severity of the error
+
+This property represents the severity of the error that occurred. Here's a list of possible values for this property.
+
+|Severity|Description|
+||--|
+|Warning|Some minor issue exists in the data flow process, but processing of the device message doesn't stop.|
+|Error|The processing of a specific device message has run into an error; other messages may continue to process as expected.|
+|Critical|A system-level issue exists with the MedTech service, and no messages are expected to process.|
+
+#### The type of error
+
+This property signifies a category for a given error; it represents a logical grouping of similar error types. Here's a list of possible values for this property.
+
+|Error type|Description|
+|-|--|
+|`DeviceTemplateError`|This error type is related to the Device mapping.|
+|`DeviceMessageError`|This error type occurs when processing a specific device message.|
+|`FHIRTemplateError`|This error type is related to the FHIR destination mapping.|
+|`FHIRConversionError`|This error type occurs when transforming a message into a FHIR resource.|
+|`FHIRResourceError`|This error type is related to existing resources in the FHIR service that are referenced by the MedTech service.|
+|`FHIRServerError`|This error type occurs when communicating with the FHIR service.|
+|`GeneralError`|This error type is about all other types of errors.|
+
+#### The name of the error
+
+This property provides the name for a specific error. Here's the list of all error names with their description and associated error type(s), severity, and data flow stage(s).
+
+|Error name|Description|Error type(s)|Error severity|Data flow stage(s)|
+|-|--|-|--||
+|`MultipleResourceFoundException`|This error occurs when multiple patient or device resources are found in the FHIR service for the respective identifiers present in the device message.|`FHIRResourceError`|Error|`FHIRConversion`|
+|`TemplateNotFoundException`|A device or FHIR destination mapping that isn't configured with the instance of the MedTech service.|`DeviceTemplateError`, `FHIRTemplateError`|Critical|`Normalization`, `FHIRConversion`|
+|`CorrelationIdNotDefinedException`|The correlation ID isn't specified in the Device mapping. `CorrelationIdNotDefinedException` is a conditional error that occurs only when the FHIR Observation must group device measurements using a correlation ID and the correlation ID isn't configured correctly.|`DeviceMessageError`|Error|Normalization|
+|`PatientDeviceMismatchException`|This error occurs when the device resource on the FHIR service has a reference to a patient resource that doesn't match the patient identifier present in the message.|`FHIRResourceError`|Error|`FHIRConversionError`|
+|`PatientNotFoundException`|No Patient FHIR resource is referenced by the Device FHIR resource associated with the device identifier present in the device message. Note that this error only occurs when the MedTech service instance is configured with the *Lookup* resolution type.|`FHIRConversionError`|Error|`FHIRConversion`|
+|`DeviceNotFoundException`|No device resource exists on the FHIR service associated with the device identifier present in the device message.|`DeviceMessageError`|Error|Normalization|
+|`PatientIdentityNotDefinedException`|This error occurs when the expression to parse the patient identifier from the device message isn't configured in the device mapping, or the patient identifier isn't present in the device message. Note that this error occurs only when the MedTech service's resolution type is set to *Create*.|`DeviceTemplateError`|Critical|Normalization|
+|`DeviceIdentityNotDefinedException`|This error occurs when the expression to parse the device identifier from the device message isn't configured in the device mapping, or the device identifier isn't present in the device message.|`DeviceTemplateError`|Critical|Normalization|
+|`NotSupportedException`|This error occurs when a device message with an unsupported format is received.|`DeviceMessageError`|Error|Normalization|
+
+### The MedTech service resource
+
+|Message|Displayed|Condition|Fix|
+|-||||
+|The maximum number of resource type `iotconnectors` has been reached.|API and Azure portal|MedTech service subscription quota is reached (default is 10 MedTech services per workspace and 10 workspaces per subscription).|Delete one of the existing instances of the MedTech service, use a different subscription that hasn't reached the subscription quota, or request a subscription quota increase.
+|Invalid `deviceMapping` mapping. Validation errors: {List of errors}|API and Azure portal|The `properties.deviceMapping` provided in the MedTech service Resource provisioning request is invalid.|Correct the errors in the mapping JSON provided in the `properties.deviceMapping` property.
+|`fullyQualifiedEventHubNamespace` is null, empty, or formatted incorrectly.|API and Azure portal|The MedTech service provisioning request `properties.ingestionEndpointConfiguration.fullyQualifiedEventHubNamespace` isn't valid.|Update the MedTech service `properties.ingestionEndpointConfiguration.fullyQualifiedEventHubNamespace` to the correct format. Should be `{YOUR_NAMESPACE}.servicebus.windows.net`.
+|Ancestor resources must be fully provisioned before a child resource can be provisioned.|API|The parent workspace is still provisioning.|Wait until the parent workspace provisioning has completed and submit the provisioning request again.
+|`Location` property of child resources must match the `Location` property of parent resources.|API|The MedTech service provisioning request `location` property is different from the parent workspace `location` property.|Set the `location` property of the MedTech service in the provisioning request to the same value as the parent workspace `location` property.
+
+### Destination resource
+
+|Message|Displayed|Condition|Fix|
+|-||||
+|The maximum number of resource type `iotconnectors/destinations` has been reached.|API and Azure portal|MedTech service destination resource quota is reached (the default is one per MedTech service).|Delete the existing instance of the MedTech service destination resource. Only one destination resource is permitted per MedTech service.
+|The `fhirServiceResourceId` provided is invalid.|API and Azure portal|The `properties.fhirServiceResourceId` provided in the Destination Resource provisioning request isn't a valid resource ID for an instance of the Azure Health Data Services FHIR service.|Ensure the resource ID is formatted correctly, and make sure the resource ID is for an Azure Health Data Services FHIR service instance. The format should be: `/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP_NAME}/providers/Microsoft.HealthcareApis/workspaces/{workspace_NAME}/fhirservices/{FHIR_SERVICE_NAME}`
+|Ancestor resources must be fully provisioned before a child resource can be provisioned.|API|The parent workspace or the parent MedTech service is still provisioning.|Wait until the parent workspace or the parent MedTech service provisioning completes, and then submit the provisioning request again.
+|`Location` property of child resources must match the `Location` property of parent resources.|API|The Destination provisioning request `location` property is different from the parent MedTech service `location` property.|Set the `location` property of the Destination in the provisioning request to the same value as the parent MedTech service `location` property.
+
+## Why is the MedTech service data not showing up in the FHIR service?
+
+|Potential issues|Fixes|
+|-|--|
+|Data is still being processed.|Data is egressed to the FHIR service in batches (every ~5 minutes). It's possible the data is still being processed and extra time is needed for the data to be persisted in the FHIR service.|
+|Device mapping hasn't been configured.|Configure and save conforming Device mapping.|
+|FHIR destination mapping hasn't been configured.|Configure and save conforming FHIR destination mapping.|
+|The device message doesn't contain an expected expression defined in the Device mapping.|Verify `JsonPath` expressions defined in the Device mapping match tokens defined in the device message.|
+|A Device Resource hasn't been created in the FHIR service (Resolution Type: Look up only)*.|Create a valid Device Resource in the FHIR service. Ensure the Device Resource contains an identifier that matches the device identifier provided in the incoming message.|
+|A Patient Resource hasn't been created in the FHIR service (Resolution Type: Look up only)*.|Create a valid Patient Resource in the FHIR service.|
+|The `Device.patient` reference isn't set, or the reference is invalid (Resolution Type: Look up only)*.|Make sure the Device Resource contains a valid [Reference](https://www.hl7.org/fhir/device-definitions.html#Device.patient) to a Patient Resource.|
+
+*See [Quickstart: Part 2: Configure the MedTech service for manual deployment using the Azure portal](deploy-new-config.md#destination-properties) for a functional description of the MedTech service resolution types (for example: Create or Lookup).
+
+## Next steps
+
+In this article, you learned how to troubleshoot MedTech service error messages and conditions.
+
+To learn about the MedTech service frequently asked questions (FAQs), see
+
+> [!div class="nextstepaction"]
+> [Frequently asked questions about the MedTech service](frequently-asked-questions.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Troubleshoot Mappings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/troubleshoot-mappings.md
- Title: Troubleshoot MedTech service device and FHIR destination mappings - Azure Health Data Services
-description: This article helps users troubleshoot the MedTech service device and FHIR destination mappings.
----- Previously updated : 12/15/2022---
-# Troubleshoot MedTech service device and FHIR destination mappings
-
-This article provides the validation steps the MedTech service performs on the device and Fast Healthcare Interoperability Resources (FHIR&#174;) destination mappings and can be used for troubleshooting mappings error messages and conditions.
-
-> [!TIP]
-> Having access to metrics and logs are essential tools for assisting you in troubleshooting and assessing the overall performance of your MedTech service. Check out these MedTech service articles to learn more about how to enable, configure, and use these monitoring features:
->
-> [How to use the MedTech service monitoring tab](how-to-use-monitoring-tab.md)
->
-> [How to configure the MedTech service metrics](how-to-configure-metrics.md)
->
-> [How to enable diagnostic settings for the MedTech service](how-to-enable-diagnostic-settings.md)
-
-> [!NOTE]
-> When you open an [Azure Technical Support](https://azure.microsoft.com/support/create-ticket/) ticket for the MedTech service, include [copies of your device and FHIR destination mappings](how-to-create-mappings-copies.md) to assist in the troubleshooting process.
-
-## Device and FHIR destination mappings validations
-
-The validation process validates the device and FHIR destination mappings before allowing them to be saved for use. These elements are required in the device and FHIR destination mappings templates.
-
-**Device mappings**
-
-|Element|Required|
-|:-|:|
-|TypeName|True|
-|TypeMatchExpression|True|
-|DeviceIdExpression|True|
-|TimestampExpression|True|
-|Values[].ValueName|True|
-|Values[].ValueExpression|True|
-
-> [!NOTE]
-> `Values[].ValueName and Values[].ValueExpression` elements are only required if you have a value entry in the array. It's valid to have no values mapped. This is used when the telemetry being sent is an event.
->
-> For example:
->
-> Some IoMT scenarios may require creating an Observation Resource in the FHIR service that does not contain a value.
-
-**FHIR destination mappings**
-
-|Element|Required|
-|:|:-|
-|TypeName|True|
-
-> [!NOTE]
-> This is the only required FHIR destination mapping element validated at this time.
-
-## Next steps
-
-In this article, you learned the validation process that the MedTech service performs on the device and FHIR destination mappings.
-
-To learn how to troubleshoot MedTech service errors and conditions, see
-
-> [!div class="nextstepaction"]
-> [Troubleshoot the MedTech service error messages and conditions](troubleshoot-error-messages-and-conditions.md)
-
-FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
hpc-cache Az Cli Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hpc-cache/az-cli-prerequisites.md
Follow these steps to prepare your environment before using Azure CLI to create or manage an Azure HPC Cache. - Azure HPC Cache requires version 2.7 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
import-export Storage Import Export Data From Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-from-blobs.md
Perform the following steps to create an export job in the Azure portal using th
Use the following steps to create an export job in the Azure portal. Azure CLI and Azure PowerShell create jobs in the classic Azure Import/Export service and hence create an Azure resource of the type "Import/Export job." ### Create a job
import-export Storage Import Export Data To Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-blobs.md
Perform the following steps to prepare the drives.
Use the following steps to create an import job in the Azure CLI. ### Create a job
import-export Storage Import Export Data To Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/import-export/storage-import-export-data-to-files.md
For additional samples, go to [Samples for journal files](#samples-for-journal-f
Use the following steps to create an import job in the Azure CLI. ### Create a job
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-analytics.md
In this how-to guide, you learn how to:
## Prerequisites ## Run the Script
iot-central Howto Integrate With Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-integrate-with-devops.md
You need the following prerequisites to complete the steps in this guide:
- Visual Studio Code or another tool to edit PowerShell and JSON files. [Get Visual Studio Code](https://code.visualstudio.com/Download). - Git client. Download the latest version from [Git - Downloads (git-scm.com)](https://git-scm.com/downloads). ## Download the sample code
iot-central Howto Manage Iot Central From Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-iot-central-from-cli.md
If you prefer to use a language such as JavaScript, Python, C#, Ruby, or Go, see
# [Azure CLI](#tab/azure-cli) # [PowerShell](#tab/azure-powershell)
iot-central Howto Monitor Devices Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-monitor-devices-azure-cli.md
Use the Azure CLI IoT extension to see messages your devices are sending to IoT
A work or school account in Azure, added as a user in an IoT Central application. ## Install the IoT Central extension
iot-central Quick Export Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/quick-export-data.md
Completing this quickstart incurs a small cost in your Azure account for the Azu
- Complete the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md). The second quickstart, [Configure rules and actions for your device](quick-configure-rules.md), is optional. - You need the IoT Central application *URL prefix* that you chose in the first quickstart [Create an Azure IoT Central application](./quick-deploy-iot-central.md). ## Install Azure services
iot-central Tutorial Industrial End To End https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-industrial-end-to-end.md
In this tutorial, you learn how to:
In this tutorial, you use the Azure CLI to create an app registration in Azure Active Directory: ## Setup
iot-central Tutorial Use Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-rest-api.md
To complete the steps in this tutorial, you need:
You use the Azure CLI to generate the bearer tokens that some of the REST APIs use for authorization. ### Postman
iot-develop Set Up Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/set-up-environment.md
Before you can complete any of the IoT Plug and Play quickstarts and tutorials,
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Create the resources
iot-dps How To Provision Multitenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-provision-multitenant.md
This tutorial uses a simulated device sample from the [Azure IoT C SDK](https://
* Complete the steps in [Set up IoT Hub Device Provisioning Service with the Azure portal](./quick-setup-auto-provision.md). ## Create two regional IoT hubs
iot-dps Quick Setup Auto Provision Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-bicep.md
This quickstart uses [Azure PowerShell](../azure-resource-manager/bicep/deploy-p
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [azure-powershell-requirements-no-header.md](../../includes/azure-powershell-requirements-no-header.md)]
iot-dps Quick Setup Auto Provision Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-cli.md
The Azure CLI is used to create and manage Azure resources from the command line
> Both the IoT hub and the provisioning service you create in this quickstart will be publicly discoverable as DNS endpoints. Make sure to avoid any sensitive information if you decide to change the names used for these resources. > ## Create a resource group
iot-dps Quick Setup Auto Provision Rm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-rm.md
If your environment meets the prerequisites, and you're already familiar with us
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Review the template
iot-dps Tutorial Custom Allocation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-allocation-policies.md
The following prerequisites are for a Windows development environment. For Linux
- Latest version of [Git](https://git-scm.com/download/) installed. ## Create the provisioning service and two divisional IoT hubs
iot-edge Quickstart Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart-linux.md
If you don't have an active Azure subscription, create a [free account](https://
Prepare your environment for the Azure CLI. Cloud resources:
iot-edge Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/quickstart.md
If you don't have an active Azure subscription, create a [free account](https://
Prepare your environment for the Azure CLI. Create a cloud resource group to manage all the resources you'll use in this quickstart.
iot-hub How To Routing Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/how-to-routing-azure-cli.md
The procedures that are described in the article use the following resources:
This article uses the Azure CLI to work with IoT Hub and other Azure services. You can choose how you access the Azure CLI: ### IoT hub
iot-hub Iot Hub Configure File Upload Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-configure-file-upload-cli.md
To use the [file upload functionality in IoT Hub](iot-hub-devguide-file-upload.m
* An Azure Storage account. If you don't have an Azure Storage account, you can use the Azure CLI to create one. For more information, see [Create a storage account](../storage/common/storage-account-create.md). [!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
iot-hub Iot Hub Create Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-create-using-cli.md
This article shows you how to create an IoT hub using Azure CLI. When you create an IoT hub, you must create it in a resource group. Either use an existing resource group, or run the following [command to create a resource group](/cli/azure/resource):
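A minimal sketch of those two steps, with hypothetical names for the group, hub, and region:

```azurecli
# Create a resource group to hold the hub.
az group create --name MyIoTResourceGroup --location eastus

# Create the IoT hub in that group (S1 is the standard tier).
az iot hub create --name MyIoTHub --resource-group MyIoTResourceGroup --sku S1
```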
iot-hub Iot Hub How To Android Things https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-how-to-android-things.md
This tutorial outlines the steps to build a device side application on Android T
* Latest version of [Git](https://git-scm.com/) ## Create an IoT hub
iot-hub Iot Hub Live Data Visualization In Web Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-live-data-visualization-in-web-apps.md
In this article, you learn how to visualize real-time sensor data that your IoT
* The steps in this article assume a Windows development machine; however, you can easily perform these steps on a Linux system in your preferred shell. ## Add a consumer group to your IoT hub
iot-hub Quickstart Control Device Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/quickstart-control-device-android.md
In this quickstart, you use a direct method to control a simulated device connec
* Port 8883 open in your firewall. The device sample in this quickstart uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). [!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
iot-hub Tutorial Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-connectivity.md
In this tutorial, you learn how to:
> * Check cloud-to-device connectivity > * Check device twin synchronization [!INCLUDE [iot-hub-cli-version-info](../../includes/iot-hub-cli-version-info.md)]
iot-hub Tutorial Message Enrichments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-message-enrichments.md
There are no other prerequisites for the Azure portal.
# [Azure CLI](#tab/cli)
iot-hub Tutorial Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-routing.md
There are no other prerequisites for the Azure portal.
# [Azure CLI](#tab/cli)
iot-hub Tutorial Use Metrics And Diags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/tutorial-use-metrics-and-diags.md
In this tutorial, you perform the following tasks:
* Make sure that port 8883 is open in your firewall. The device sample in this tutorial uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](iot-hub-mqtt-support.md#connecting-to-iot-hub). ## Set up resources
iot-hub Virtual Network Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/virtual-network-support.md
- Last updated 10/20/2021
+ Last updated 01/13/2023
-# IoT Hub support for virtual networks with Private Link and Managed Identity
+# IoT Hub support for virtual networks with Azure Private Link
-By default, IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices in over wide-area networks and on-premises networks can all access it.
+By default, IoT Hub's hostnames map to a public endpoint with a publicly routable IP address over the internet. Different customers share this IoT Hub public endpoint, and IoT devices in wide-area networks and on-premises networks can all access it.
-![IoT Hub public endpoint](./media/virtual-network-support/public-endpoint.png)
+![Diagram of IoT Hub public endpoint.](./media/virtual-network-support/public-endpoint.png)
-IoT Hub features including [message routing](./iot-hub-devguide-messages-d2c.md), [file upload](./iot-hub-devguide-file-upload.md), and [bulk device import/export](./iot-hub-bulk-identity-mgmt.md) also require connectivity from IoT Hub to a customer-owned Azure resource over its public endpoint. These connectivity paths collectively make up the egress traffic from IoT Hub to customer resources.
+Some IoT Hub features, including [message routing](./iot-hub-devguide-messages-d2c.md), [file upload](./iot-hub-devguide-file-upload.md), and [bulk device import/export](./iot-hub-bulk-identity-mgmt.md), also require connectivity from IoT Hub to a customer-owned Azure resource over its public endpoint. These connectivity paths make up the egress traffic from IoT Hub to customer resources.
-You might want to restrict connectivity to your Azure resources (including IoT Hub) through a VNet that you own and operate. These reasons include:
+You might want to restrict connectivity to your Azure resources (including IoT Hub) through a VNet that you own and operate for several reasons, including:
* Introducing network isolation for your IoT hub by preventing connectivity exposure to the public internet.
-* Enabling a private connectivity experience from your on-premises network assets ensuring that your data and traffic
-is transmitted directly to Azure backbone network.
+* Enabling a private connectivity experience from your on-premises network assets, which ensures that your data and traffic are transmitted directly to the Azure backbone network.
-* Preventing exfiltration attacks from sensitive on-premises networks.
+* Preventing exfiltration attacks from sensitive on-premises networks.
* Following established Azure-wide connectivity patterns using [private endpoints](../private-link/private-endpoint-overview.md).
This article describes how to achieve these goals using [Azure Private Link](../
## Ingress connectivity to IoT Hub using Azure Private Link
-A private endpoint is a private IP address allocated inside a customer-owned VNet via which an Azure resource is reachable. Through Azure Private Link, you can set up a private endpoint for your IoT hub to allow services inside your VNet to reach IoT Hub without requiring traffic to be sent to IoT Hub's public endpoint. Similarly, your on-premises devices can use [Virtual Private Network (VPN)](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](https://azure.microsoft.com/services/expressroute/) peering to gain connectivity to your VNet and your IoT Hub (via its private endpoint). As a result, you can restrict or completely block off connectivity to your IoT hub's public endpoints by using [IoT Hub IP filter](./iot-hub-ip-filtering.md) or [the public network access toggle](iot-hub-public-network-access.md). This approach keeps connectivity to your Hub using the private endpoint for devices. The main focus of this setup is for devices inside an on-premises network. This setup isn't advised for devices deployed in a wide-area network.
+A private endpoint is a private IP address allocated inside a customer-owned VNet through which an Azure resource is reachable. With Azure Private Link, you can set up a private endpoint for your IoT hub to allow services inside your VNet to reach IoT Hub without requiring traffic to be sent to IoT Hub's public endpoint. Similarly, your on-premises devices can use [Virtual Private Network (VPN)](../vpn-gateway/vpn-gateway-about-vpngateways.md) or [ExpressRoute](https://azure.microsoft.com/services/expressroute/) peering to gain connectivity to your VNet and your IoT hub (via its private endpoint). As a result, you can restrict or completely block off connectivity to your IoT hub's public endpoints by using [IoT Hub IP filter](./iot-hub-ip-filtering.md) or [the public network access toggle](iot-hub-public-network-access.md). This approach keeps connectivity to your hub using the private endpoint for devices. The main focus of this setup is for devices inside an on-premises network. This setup isn't advised for devices deployed in a wide-area network.
-![IoT Hub virtual network engress](./media/virtual-network-support/virtual-network-ingress.png)
+![Diagram of IoT Hub virtual network ingress.](./media/virtual-network-support/virtual-network-ingress.png)
Before proceeding, ensure that the following prerequisites are met:
### Set up a private endpoint for IoT Hub ingress
-Private endpoint works for IoT Hub device APIs (like device-to-cloud messages) as well as service APIs (like creating and updating devices).
+Private endpoint works for IoT Hub device APIs (like device-to-cloud messages) and service APIs (like creating and updating devices).
-1. In Azure portal, select **Networking**, **Private access**, and click the **+ Create a private endpoint** option.
+1. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
- :::image type="content" source="media/virtual-network-support/private-link.png" alt-text="Screenshot showing where to add private endpoint for IoT Hub" border="true":::
+1. Select **Networking** > **Private access**, and then select **Create a private endpoint**.
-1. Provide the subscription, resource group, name, and region to create the new private endpoint in. Ideally, private endpoint should be created in the same region as your hub.
+ :::image type="content" source="media/virtual-network-support/private-link.png" alt-text="Screenshot showing where to add private endpoint for IoT Hub." border="true":::
-1. Click **Next: Resource**, and provide the subscription for your IoT Hub resource, and select **"Microsoft.Devices/IotHubs"** as resource type, your IoT Hub name as **resource**, and **iotHub** as target subresource.
+1. Provide the subscription, resource group, name, and region to create the new private endpoint. Ideally, a private endpoint should be created in the same region as your hub.
-1. Click **Next: Configuration** and provide your virtual network and subnet to create the private endpoint in. Select the option to integrate with Azure private DNS zone, if desired.
+1. Select **Next: Resource**. Provide the subscription for your IoT Hub resource, then select **Microsoft.Devices/IotHubs** as the resource type, your IoT hub name as the **resource**, and **iotHub** as the target subresource.
-1. Click **Next: Tags**, and optionally provide any tags for your resource.
+1. Select **Next: Configuration** and provide your virtual network and subnet to create the private endpoint in. Select the option to integrate with Azure private DNS zone, if desired.
-1. Click **Review + create** to create your private link resource.
+1. Select **Next: Tags**, and optionally provide any tags for your resource.
-### Built-in Event Hub compatible endpoint
+1. Select **Review + create** to create your private link resource.
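The same ingress private endpoint can also be scripted. A minimal Azure CLI sketch, assuming an existing hub, VNet, and subnet with the hypothetical names below; `iotHub` is the target subresource (group ID), as in the portal steps:

```azurecli
# Look up the hub's resource ID (names are placeholders).
hubId=$(az iot hub show --name MyIoTHub --resource-group MyResourceGroup --query id --output tsv)

# Create the private endpoint in the subnet; the group ID for IoT Hub ingress is "iotHub".
az network private-endpoint create \
  --name MyIoTHubEndpoint \
  --resource-group MyResourceGroup \
  --vnet-name MyVNet \
  --subnet MySubnet \
  --private-connection-resource-id $hubId \
  --group-id iotHub \
  --connection-name MyIoTHubConnection
```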
-The [built-in Event Hub compatible endpoint](iot-hub-devguide-messages-read-builtin.md) can also be accessed over private endpoint. When private link is configured, you should see an additional private endpoint connection for the built-in endpoint. It's the one with `servicebus.windows.net` in the FQDN.
+### Built-in Event Hubs compatible endpoint
+The [built-in Event Hubs compatible endpoint](iot-hub-devguide-messages-read-builtin.md) can also be accessed over private endpoint. When private link is configured, you should see another private endpoint connection for the built-in endpoint. It's the one with `servicebus.windows.net` in the FQDN.
-IoT Hub's [IP filter](iot-hub-ip-filtering.md) can optionally control public access to the built-in endpoint.
+
+IoT Hub's [IP filter](iot-hub-ip-filtering.md) can optionally control public access to the built-in endpoint.
To completely block public network access to your IoT hub, [turn off public network access](iot-hub-public-network-access.md), or use the IP filter to block all IP addresses and select the option to apply rules to the built-in endpoint.
For pricing details, see [Azure Private Link pricing](https://azure.microsoft.co
## Egress connectivity from IoT Hub to other Azure resources
-IoT Hub can connect to your Azure blob storage, event hub, service bus resources for [message routing](./iot-hub-devguide-messages-d2c.md), [file upload](./iot-hub-devguide-file-upload.md), and [bulk device import/export](./iot-hub-bulk-identity-mgmt.md) over the resources' public endpoint. Binding your resource to a VNet blocks connectivity to the resource by default. As a result, this configuration prevents IoT Hub's from working sending data to your resources. To fix this issue, enable connectivity from your IoT Hub resource to your storage account, event hub, or service bus resources via the **trusted Microsoft service** option.
+IoT Hub can connect to your Azure blob storage, event hub, or service bus resources for [message routing](./iot-hub-devguide-messages-d2c.md), [file upload](./iot-hub-devguide-file-upload.md), and [bulk device import/export](./iot-hub-bulk-identity-mgmt.md) over the resources' public endpoint. Binding your resource to a VNet blocks connectivity to the resource by default. As a result, this configuration prevents IoT hubs from sending data to your resources. To fix this issue, enable connectivity from your IoT Hub resource to your storage account, event hub, or service bus resources via the **trusted Microsoft service** option.
-To allow other services to find your IoT hub as a trusted Microsoft service, your hub must use the managed identity. Once a managed identity is provisioned, you need to grant the Azure RBAC permission to your hub's managed identity to access your custom endpoint. Follow the article [Managed identities support in IoT Hub](./iot-hub-managed-identity.md) to provision a managed identity with Azure RBAC permission, and add the custom endpoint to your IoT Hub. Make sure you turn on the trusted Microsoft first party exception to allow your IoT Hub's access to the custom endpoint if you have the firewall configurations in place.
+To allow other services to find your IoT hub as a trusted Microsoft service, your hub must use a managed identity. Once a managed identity is provisioned, grant permission to your hub's managed identity to access your custom endpoint. Follow the article [Managed identities support in IoT Hub](./iot-hub-managed-identity.md) to provision a managed identity with Azure role-based access control (RBAC) permission, and add the custom endpoint to your IoT hub. Make sure you turn on the trusted Microsoft first party exception to allow your IoT hub access to the custom endpoint if you have firewall configurations in place.
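As a rough CLI sketch of the identity and permission steps for a storage custom endpoint (resource names are hypothetical, and the exact role depends on the endpoint type):

```azurecli
# Enable a system-assigned managed identity on the hub.
az iot hub identity assign --name MyIoTHub --resource-group MyResourceGroup --system-assigned

# Grant that identity data access to the target storage account.
principalId=$(az iot hub identity show --name MyIoTHub --resource-group MyResourceGroup --query principalId --output tsv)
storageId=$(az storage account show --name mystorageaccount --query id --output tsv)
az role assignment create --role "Storage Blob Data Contributor" --assignee $principalId --scope $storageId
```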
### Pricing for trusted Microsoft service option

The trusted Microsoft first party services exception feature is free of charge. Charges for the provisioned storage accounts, event hubs, or service bus resources apply separately.

## Next steps
-Use the links below to learn more about IoT Hub features:
+Use the following links to learn more about IoT Hub features:
* [Message routing](./iot-hub-devguide-messages-d2c.md) * [File upload](./iot-hub-devguide-file-upload.md)
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-net.md
Title: Quickstart - Azure Key Vault certificates client library for .NET (version 4)
-description: Learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library (version 4)
+ Title: Quickstart - Azure Key Vault certificates client library for .NET
+description: Learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library
Last updated 11/14/2022
ms.devlang: csharp
-# Quickstart: Azure Key Vault certificate client library for .NET (SDK v4)
+# Quickstart: Azure Key Vault certificate client library for .NET
Get started with the Azure Key Vault certificate client library for .NET. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for certificates. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete certificates from an Azure key vault using the .NET client library.
For more information about Key Vault and certificates, see:
## Prerequisites * An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [.NET Core 3.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core)
+* [.NET 6 SDK or later](https://dotnet.microsoft.com/download)
* [Azure CLI](/cli/azure/install-azure-cli) * A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md).
From the command shell, install the Azure Key Vault certificate client library f
dotnet add package Azure.Security.KeyVault.Certificates ```
-For this quickstart, you'll also need to install the Azure SDK client library for Azure Identity:
+For this quickstart, you'll also need to install the Azure Identity client library:
```dotnetcli dotnet add package Azure.Identity
using Azure.Security.KeyVault.Certificates;
### Authenticate and create a client
-In this quickstart, logged in user is used to authenticate to key vault, which is preferred method for local development. For applications deployed to Azure, managed identity should be assigned to App Service or Virtual Machine, for more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/dotnet/azure/sdk/authentication#defaultazurecredential) class provided by the [Azure Identity client library](/dotnet/api/overview/azure/identity-readme) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
-In below example, the name of your key vault is expanded to the key vault URI, in the format "https://\<your-key-vault-name\>.vault.azure.net". This example is using ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from [Azure Identity Library](/dotnet/api/overview/azure/identity-readme), which allows to use the same code across different environments with different options to provide identity. For more information about authenticating to key vault, see [Developer's Guide](../general/developers-guide.md#authenticate-to-key-vault-in-code).
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
+
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
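Because `DefaultAzureCredential` picks up your Azure CLI sign-in during local development, a typical setup before running the sample looks like this sketch (the environment variable name matches the code below):

```azurecli
# Sign in so DefaultAzureCredential can use your CLI credentials locally.
az login

# The sample reads the vault name from this environment variable.
export KEY_VAULT_NAME=<your-key-vault-name>
```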
```csharp string keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
await client.PurgeDeletedCertificateAsync("myCertificate");
## Sample code
-Modify the .NET Core console app to interact with the Key Vault by completing the following steps:
+Modify the .NET console app to interact with the Key Vault by completing the following steps:
- Replace the code in *Program.cs* with the following code:
key-vault Common Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-error-codes.md
tags: azure-resource-manager
Previously updated : 09/29/2020 Last updated : 01/12/2023 #Customer intent: As an Azure Key Vault administrator, I want to react to soft-delete being turned on for all key vaults.
The error codes listed in the following table may be returned by an operation on
| VaultNameNotValid | The vault name should be string of 3 to 24 characters and can contain only numbers (0-9), letters (a-z, A-Z), and hyphens (-) | | AccessDenied | You may be missing permissions in access policy to do that operation. | | ForbiddenByFirewall | Client address isn't authorized and caller isn't a trusted service. |
-| ConflictError | You're requesting multiple operations on the same item, e.g., Key Vault, secret, key, certificate, or common components within a Key Vault like VNET. It's recommended to sequence operations or to implement retry logic. |
+| ConflictError | You're requesting multiple operations on the same item, for example, Key Vault, secret, key, certificate, or common components within a Key Vault like VNET. It's recommended to sequence operations or to implement retry logic. |
| RegionNotSupported | Specified Azure region isn't supported for this resource. | | SkuNotSupported | Specified SKU type isn't supported for this resource. | | ResourceNotFound | Specified Azure resource isn't found. |
key-vault Common Parameters And Headers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/common-parameters-and-headers.md
 Title: Common parameters and headers
-description: The parameters and headers common to all operations that you might do related to Key Vault resources.
+description: The parameters and headers common to all operations that you might perform on Key Vault resources.
tags: azure-resource-manager
Previously updated : 01/07/2019 Last updated : 01/11/2023 # Common parameters and headers
-The following information is common to all operations that you might do related to Key Vault resources:
+The following information is common to all operations that you might perform on Key Vault resources:
-- The HTTP `Host` header must always be present and must specify the vault hostname. Example: `Host: contoso.vault.azure.net`. Note that most client technologies populate the `Host` header from the URI. For instance, `GET https://contoso.vault.azure.net/secrets/mysecret{...}` will set the `Host` as `contoso.vault.azure.net`. This means that if you access Key Vault using raw IP address like `GET https://10.0.0.23/secrets/mysecret{...}`, the automatic value of `Host` header will be wrong and you will have to manually insure that the `Host` header contains the vault hostname.
+- The HTTP `Host` header must always be present and must specify the vault hostname. Example: `Host: contoso.vault.azure.net`. Note that most client technologies populate the `Host` header from the URI. For instance, `GET https://contoso.vault.azure.net/secrets/mysecret{...}` will set the `Host` as `contoso.vault.azure.net`. If you access Key Vault using a raw IP address like `GET https://10.0.0.23/secrets/mysecret{...}`, the automatic value of the `Host` header will be wrong, and you'll have to manually ensure that the `Host` header contains the vault hostname.
- Replace `{api-version}` with the api-version in the URI. - Replace `{subscription-id}` with your subscription identifier in the URI - Replace `{resource-group-name}` with the resource group. For more information, see Using Resource groups to manage your Azure resources. - Replace `{vault-name}` with your key vault name in the URI. - Set the Content-Type header to application/json.-- Set the Authorization header to a JSON Web Token that you obtain from Azure Active Directory (AAD). For more information, see [Authenticating Azure Resource Manager](authentication-requests-and-responses.md) requests.
+- Set the Authorization header to a JSON Web Token that you obtain from Azure Active Directory (Azure AD). For more information, see [Authenticating Azure Resource Manager](authentication-requests-and-responses.md) requests.
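Putting the pieces together, one way to exercise these headers for troubleshooting is a token from the Azure CLI plus curl; a sketch with placeholder names (api-version 7.3 shown as an example):

```azurecli
# Acquire a data-plane token, then call the vault with the required headers.
token=$(az account get-access-token --resource "https://vault.azure.net" --query accessToken --output tsv)
curl -H "Authorization: Bearer $token" \
     -H "Content-Type: application/json" \
     "https://<vault-name>.vault.azure.net/secrets/<secret-name>?api-version=7.3"
```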
## Common error response The service will use HTTP status codes to indicate success or failure. In addition, failures contain a response in the following format:
key-vault Customer Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/customer-data.md
tags: azure-resource-manager
Previously updated : 01/07/2019 Last updated : 01/11/2023
The following information identifies customer data within Azure Key Vault:
The same REST APIs, Portal experience, and SDKs used to create vaults, keys, secrets, certificates, and managed storage accounts, are also able to update and delete these objects.
-Soft-delete allows you to recover deleted data for 90 days after deletion. When using soft-delete, the data may be permanently deleted prior to the 90 days retention period expires by performing a purge operation. If the vault or subscription has been configured to block purge operations, it is not possible to permanently delete data until the scheduled retention period has passed.
+Soft-delete allows you to recover deleted data for 90 days after deletion. When using soft-delete, the data may be permanently deleted before the 90-day retention period expires by performing a purge operation. If the vault or subscription has been configured to block purge operations, it isn't possible to permanently delete data until the scheduled retention period has passed.
## Exporting customer data
key-vault Event Grid Logicapps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/event-grid-logicapps.md
tags: azure-resource-manager
Previously updated : 11/11/2019 Last updated : 01/11/2023 # Use Logic Apps to receive email about status changes of key vault secrets
-In this guide you will learn how to respond to Azure Key Vault events that are received via [Azure Event Grid](../../event-grid/index.yml) by using [Azure Logic Apps](../../logic-apps/index.yml). By the end, you will have an Azure logic app set up to send a notification email every time a secret is created in Azure Key Vault.
+In this guide, you will learn how to respond to Azure Key Vault events that are received via [Azure Event Grid](../../event-grid/index.yml) by using [Azure Logic Apps](../../logic-apps/index.yml). By the end, you will have an Azure logic app set up to send a notification email every time a secret is created in Azure Key Vault.
For an overview of Azure Key Vault / Azure Event Grid integration, see [Monitoring Key Vault with Azure Event Grid](event-grid-overview.md).
## Create a Logic App via Event Grid
-First, create Logic App with event grid handler and subscribe to Azure Key Vault "SecretNewVersionCreated" events.
+First, create a Logic App with an Event Grid handler and subscribe to Azure Key Vault "SecretNewVersionCreated" events.
To create an Azure Event Grid subscription, follow these steps:
-1. In the Azure portal, go to your key vault, select **Events > Get Started** and click **Logic Apps**
+1. In the Azure portal, go to your key vault, select **Events > Get Started** and select **Logic Apps**
![Key Vault - events page](../media/eventgrid-logicapps-kvsubs.png)
-1. On **Logic Apps Designer** validate the connection and click **Continue**
+1. On the **Logic Apps Designer**, validate the connection and select **Continue**
![Logic App Designer - connection](../media/eventgrid-logicappdesigner1.png)
To create an Azure Event Grid subscription, follow these steps:
![Logic App Designer - email body](../media/eventgrid-logicappdesigner4.png)
-8. Click **Save as**.
-9. Enter a **name** for new logic app and click **Create**.
+8. Select **Save as**.
+9. Enter a **name** for the new logic app and select **Create**.
![Logic App Designer - create](../media/eventgrid-logicappdesigner5.png)
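If you later want to script the same subscription rather than use the designer, a rough CLI sketch (the webhook URL comes from the logic app's HTTP trigger, and names are placeholders):

```azurecli
# Route SecretNewVersionCreated events from the vault to a webhook endpoint.
az eventgrid event-subscription create \
  --name kv-secret-email \
  --source-resource-id $(az keyvault show --name <your-key-vault-name> --query id --output tsv) \
  --endpoint <logic-app-http-trigger-url> \
  --included-event-types Microsoft.KeyVault.SecretNewVersionCreated
```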
key-vault Event Grid Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/event-grid-overview.md
Title: 'Monitoring Key Vault with Azure Event Grid'
-description: 'Use Azure Event Grid to subscribe to Key Vault events'
+ Title: Monitoring Key Vault with Azure Event Grid
+description: Use Azure Event Grid to subscribe to Key Vault events
Previously updated : 11/12/2019 Last updated : 01/11/2023
Applications can react to these events using modern serverless architectures, wi
## Key Vault events and schemas
-Event grid uses [event subscriptions](../../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Key Vault events contain all the information you need to respond to changes in your data. You can identify a Key Vault event because the eventType property starts with "Microsoft.KeyVault".
+Event Grid uses [event subscriptions](../../event-grid/concepts.md#event-subscriptions) to route event messages to subscribers. Key Vault events contain all the information you need to respond to changes in your data. You can identify a Key Vault event because the eventType property starts with "Microsoft.KeyVault".
For more information, see the [Key Vault event schema](../../event-grid/event-schema-key-vault.md).
For more information, see the [Key Vault event schema](../../event-grid/event-sc
Applications that handle Key Vault events should follow a few recommended practices:
-* Multiple subscriptions can be configured to route events to the same event handler. It is important not to assume events are from a particular source, but to check the topic of the message to ensure that it comes from the key vault you are expecting.
-* Similarly, check that the eventType is one you are prepared to process, and do not assume that all events you receive will be the types you expect.
+* Multiple subscriptions can be configured to route events to the same event handler. It's important not to assume events are from a particular source, but to check the topic of the message to ensure that it comes from the key vault you're expecting.
+* Similarly, check that the eventType is one you're prepared to process, and do not assume that all events you receive will be the types you expect.
* Ignore fields you don't understand. This practice will help keep you resilient to new features that might be added in the future. * Use the "subject" prefix and suffix matches to limit events to a particular event.
key-vault Event Grid Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/event-grid-tutorial.md
tags: azure-resource-manager
Previously updated : 10/25/2019 Last updated : 01/11/2023
After your Automation account is ready, create a runbook.
![Create a runbook UI](../media/event-grid-tutorial-3.png)
-1. Select the Automation account you just created.
+1. Select the Automation account you created.
1. Select **Runbooks** under **Process Automation**.
write-Error "No input data found."
Create a webhook to trigger your newly created runbook.
-1. Select **Webhooks** from the **Resources** section of the runbook you just published.
+1. Select **Webhooks** from the **Resources** section of the runbook you published.
1. Select **Add Webhook**.
Create a webhook to trigger your newly created runbook.
> [!IMPORTANT] > You can't view the URL after you create it. Make sure you save a copy in a secure location where you can access it for the remainder of this guide.
-1. Select **Parameters and run settings** and then select **OK**. Don't enter any parameters. This will enable the **Create** button.
+1. Select **Parameters and run settings** and then select **OK**. Don't enter any parameters. The **Create** button will be enabled.
1. Select **OK** and then select **Create**.
key-vault Manage With Cli2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/manage-with-cli2.md
Previously updated : 08/12/2019 Last updated : 01/11/2023
If you have an existing key in a .pem file, you can upload it to Azure Key Vault
az keyvault key import --vault-name "ContosoKeyVault" --name "ContosoFirstKey" --pem-file "./softkey.pem" --pem-password "hVFkk965BuUv" --protection software ```
-You can now reference the key that you created or uploaded to Azure Key Vault, by using its URI. Use `https://ContosoKeyVault.vault.azure.net/keys/ContosoFirstKey` to always get the current version. Use https://[keyvault-name].vault.azure.net/keys/[keyname]/[key-unique-id] to get this specific version. For example, `https://ContosoKeyVault.vault.azure.net/keys/ContosoFirstKey/cgacf4f763ar42ffb0a1gca546aygd87`.
+You can now reference the key that you created or uploaded to Azure Key Vault, by using its URI. Use `https://ContosoKeyVault.vault.azure.net/keys/ContosoFirstKey` to always get the current version. Use `https://<keyvault-name>.vault.azure.net/keys/<keyname>/<key-unique-id>` to get this specific version. For example, `https://ContosoKeyVault.vault.azure.net/keys/ContosoFirstKey/cgacf4f763ar42ffb0a1gca546aygd87`.
-Add a secret to the vault, which is a password named SQLPassword, and that has the value of "hVFkk965BuUv" to Azure Key Vaults.
+Add a secret to the vault: a password named SQLPassword with the value "hVFkk965BuUv".
```azurecli az keyvault secret set --vault-name "ContosoKeyVault" --name "SQLPassword" --value "hVFkk965BuUv" ```
-Reference this password by using its URI. Use **https://ContosoVault.vault.azure.net/secrets/SQLPassword** to always get the current version, and https://[keyvault-name].vault.azure.net/secret/[secret-name]/[secret-unique-id] to get this specific version. For example, **https://ContosoVault.vault.azure.net/secrets/SQLPassword/90018dbb96a84117a0d2847ef8e7189d**.
+Reference this password by using its URI. Use **https://ContosoVault.vault.azure.net/secrets/SQLPassword** to always get the current version, and `https://<keyvault-name>.vault.azure.net/secrets/<secret-name>/<secret-unique-id>` to get this specific version. For example, `https://ContosoVault.vault.azure.net/secrets/SQLPassword/90018dbb96a84117a0d2847ef8e7189d`.
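Either form of identifier can be checked from the CLI; for example, using the sample names above:

```azurecli
# Fetch the current version of the secret by its full identifier.
az keyvault secret show --id "https://ContosoVault.vault.azure.net/secrets/SQLPassword"

# Or fetch the key created earlier by vault and name.
az keyvault key show --vault-name "ContosoKeyVault" --name "ContosoFirstKey"
```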
Import a certificate to the vault using a .pem or .pfx.
az keyvault certificate import --vault-name "ContosoKeyVault" --file "c:\cert\ce
Let's view the key, secret, or certificate that you created:
-* To view your keys, type:
+* To view your keys, type:
```azurecli az keyvault key list --vault-name "ContosoKeyVault" ```
-* To view your secrets, type:
+* To view your secrets, type:
```azurecli az keyvault secret list --vault-name "ContosoKeyVault" ```
-* To view certificates, type:
+* To view certificates, type:
```azurecli az keyvault certificate list --vault-name "ContosoKeyVault"
To authorize the same application to read secrets in your vault, type the follow
az keyvault set-policy --name "ContosoKeyVault" --spn 8f8c4bbd-485b-45fd-98f7-ec6300b7b4ed --secret-permissions get ```
-## <a name="bkmk_KVperCLI"></a> Setting key vault advanced access policies
+## Setting key vault advanced access policies
Use [az keyvault update](/cli/azure/keyvault#az-keyvault-update) to enable advanced policies for the key vault.
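For example, a sketch that turns on two of the advanced policies for the sample vault (these flags are the documented `az keyvault update` options):

```azurecli
# Allow Azure VMs to retrieve certificates stored as secrets in the vault.
az keyvault update --name "ContosoKeyVault" --enabled-for-deployment true

# Allow Azure Disk Encryption to retrieve secrets and unwrap keys.
az keyvault update --name "ContosoKeyVault" --enabled-for-disk-encryption true
```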
key-vault Overview Throttling https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/overview-throttling.md
Previously updated : 12/02/2019 Last updated : 01/11/2023
Throttling limits vary based on the scenario. For example, if you are performing
## How does Key Vault handle its limits?
-Service limits in Key Vault prevent misuse of resources and ensure quality of service for all of Key Vault's clients. When a service threshold is exceeded, Key Vault limits any further requests from that client for a period of time, returns HTTP status code 429 (Too many requests), and the request fails. Failed requests that return a 429 do not count towards the throttle limits tracked by Key Vault.
+Service limits in Key Vault prevent misuse of resources and ensure quality of service for all of Key Vault's clients. When a service threshold is exceeded, Key Vault limits any further requests from that client, returns HTTP status code 429 (Too many requests), and the request fails. Failed requests that return a 429 do not count towards the throttle limits tracked by Key Vault.
-Key Vault was originally designed to be used to store and retrieve your secrets at deployment time. The world has evolved, and Key Vault is being used at run-time to store and retrieve secrets, and often apps and services want to use Key Vault like a database. Current limits do not support high throughput rates.
+Key Vault was originally designed to store and retrieve your secrets at deployment time. The world has evolved, and Key Vault is being used at run-time to store and retrieve secrets, and often apps and services want to use Key Vault like a database. Current limits do not support high throughput rates.
Key Vault was originally created with the limits specified in [Azure Key Vault service limits](service-limits.md). Here are some recommended guidelines and best practices for maximizing your throughput:
-1. Ensure you have throttling in place. Client must honor exponential back-off policies for 429's and ensure you are doing retries as per the guidance below.
+1. Ensure you have throttling in place. Your client must honor exponential back-off policies for 429s, and ensure you're doing retries as per the guidance below.
1. Divide your Key Vault traffic amongst multiple vaults and different regions. Use a separate vault for each security/availability domain. If you have five apps, each in two regions, then we recommend 10 vaults each containing the secrets unique to app and region. A subscription-wide limit for all transaction types is five times the individual key vault limit. For example, HSM-other transactions per subscription are limited to 5,000 transactions in 10 seconds per subscription. Consider caching the secret within your service or app to also reduce the RPS directly to key vault and/or handle burst based traffic. You can also divide your traffic amongst different regions to minimize latency and use a different subscription/vault. Do not send more than the subscription limit to the Key Vault service in a single Azure region. 1. Cache the secrets you retrieve from Azure Key Vault in memory, and reuse from memory whenever possible. Re-read from Azure Key Vault only when the cached copy stops working (e.g. because it got rotated at the source).
-1. Key Vault is designed for your own services secrets. If you are storing your customers' secrets (especially for high-throughput key storage scenarios), consider putting the keys in a database or storage account with encryption, and storing just the master key in Azure Key Vault.
+1. Key Vault is designed for your own services secrets. If you are storing your customers' secrets (especially for high-throughput key storage scenarios), consider putting the keys in a database or storage account with encryption, and storing just the primary key in Azure Key Vault.
1. Encrypt, wrap, and verify public-key operations can be performed with no access to Key Vault, which not only reduces risk of throttling, but also improves reliability (as long as you properly cache the public key material). 1. If you use Key Vault to store credentials for a service, check if that service supports Azure AD Authentication to authenticate directly. This reduces the load on Key Vault, improves reliability and simplifies your code since Key Vault can now use the Azure AD token. Many services have moved to using Azure AD Auth. See the current list at [Services that support managed identities for Azure resources](../../active-directory/managed-identities-azure-resources/services-support-managed-identities.md#azure-services-that-support-managed-identities-for-azure-resources). 1. Consider staggering your load/deployment over a longer period of time to stay under the current RPS limits.
-1. If your app comprises multiple nodes that need to read the same secret(s), then consider using a fan out pattern, where one entity reads the secret from Key Vault, and fans out to all nodes. Cache the retrieved secrets only in memory.
+1. If your app comprises multiple nodes that need to read the same secret(s), then consider using a fan-out pattern, where one entity reads the secret from Key Vault, and fans out to all nodes. Cache the retrieved secrets only in memory.
## How to throttle your app in response to service limits
The following are **best practices** you should implement when your service is t
When you implement your app's error handling, use the HTTP error code 429 to detect the need for client-side throttling. If the request fails again with an HTTP 429 error code, you are still encountering an Azure service limit. Continue to use the recommended client-side throttling method, retrying the request until it succeeds.
-Code that implements exponential backoff is shown below.
+Here is code that implements exponential backoff:
```
SecretClientOptions options = new SecretClientOptions()
{
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/quick-create-cli.md
Azure Key Vault is a cloud service that provides a secure store for [keys](../ke
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
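Once the CLI is ready, creating a vault typically takes two commands; a minimal sketch with example names:

```azurecli
# Create a resource group, then a vault inside it (vault names must be globally unique).
az group create --name myResourceGroup --location eastus
az keyvault create --name <your-unique-vault-name> --resource-group myResourceGroup --location eastus
```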
key-vault Rest Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rest-error-codes.md
Previously updated : 12/16/2019 Last updated : 01/11/2023 # Azure Key Vault REST API Error Codes
-
+ The following error codes could be returned by an operation on an Azure Key Vault web service.
-
+ ## HTTP 401: Unauthenticated Request 401 means that the request is unauthenticated for Key Vault.
A request is authenticated if:
There are several reasons why a request may return 401.
-### No authentication token attached to the request.
+### No authentication token attached to the request
-Here is an example PUT request, setting the value of a secret:
+Here's an example PUT request, setting the value of a secret:
``` PUT https://putreqexample.vault.azure.net/secrets/DatabaseRotatingPassword?api-version=7.0 HTTP/1.1
Content-Length: 31
The "Authorization" header is the access token that is required with every call to the Key Vault for data-plane operations. If the header is missing, then the response must be 401.
-### The token lacks the correct resource associated with it.
+### The token lacks the correct resource associated with it
When requesting an access token from the Azure OAUTH endpoint, a parameter called "resource" is mandatory. The value is important for the token provider because it scopes the token for its intended use. The resource for **all** tokens to access a Key Vault is *https:\//vault.azure.net* (with no trailing slash).
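For troubleshooting, one way to obtain a token with the correct resource is the Azure CLI; a quick sketch:

```azurecli
# Request a token scoped to the Key Vault data plane; note the resource value.
az account get-access-token --resource "https://vault.azure.net"
```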
Tokens are base64 encoded and the values can be decoded at websites such as [htt
We can see many important parts in this token: -- aud (audience): The resource of the token. Notice that this is `https://vault.azure.net`. This token will NOT work for any resource that does not explicitly match this value, such as graph.
+- aud (audience): The resource of the token. Notice that this is `https://vault.azure.net`. This token will NOT work for any resource that doesn't explicitly match this value, such as graph.
- iat (issued at): The number of ticks since the start of the epoch when the token was issued. - nbf (not before): The number of ticks since the start of the epoch when this token becomes valid. - exp (expiration): The number of ticks since the start of the epoch when this token expires. - appid (application ID): The GUID for the application ID making this request. - tid (tenant ID): The GUID for the tenant ID of the principal making this request
-It is important that all of the values be properly identified in the token in order for the request to work. If everything is correct, then the request will not result in 401.
+It is important that all of the values be properly identified in the token in order for the request to work. If everything is correct, then the request won't result in 401.
### Troubleshooting 401

401s should be investigated from the point of token generation, before the request is made to the key vault. Generally, code is used to request the token. Once the token is received, it's passed into the Key Vault request. If the code is running locally, you can use Fiddler to capture the request/response to `https://login.microsoftonline.com`. A request looks like this:
```
POST https://login.microsoftonline.com/<key vault tenant ID>/oauth2/token HTTP/1.1
Accept: application/json
Content-Type: application/x-www-form-urlencoded; charset=utf-8
The following user-supplied information must be correct:
Ensure the rest of the request is nearly identical.
-If you can only get the response access token, you can decode it (as shown above) to ensure the tenant ID, the client ID (app ID), and the resource.
+If you can only get the response access token, you can decode it to verify the tenant ID, the client ID (app ID), and the resource.
## HTTP 403: Insufficient Permissions
There is a limited list of "Azure Trusted Services". Azure Web Sites are **not**
You must add the IP address of the Azure Web Site to the Key Vault in order for it to work.
-If due to access policy: find the object ID for the request and ensure that the object ID matches the object to which the user is trying to assign the access policy. There will often be multiple objects in Azure AD which have the same name, so choosing the correct one is very important. By deleting and re-adding the access policy, it is possible to see if multiple objects exist with the same name.
-
-In addition, most access policies do not require the use of the "Authorized application" as shown in the portal. Authorized applications are used for "on-behalf-of" authentication scenarios, which are rare.
+If due to an access policy: find the object ID for the request and ensure that it matches the object to which the user is trying to assign the access policy. There will often be multiple objects in Azure AD that have the same name, so choosing the correct one is important. By deleting and re-adding the access policy, it's possible to see if multiple objects exist with the same name.
+In addition, most access policies do not require the use of the "Authorized application" as shown in the portal. Authorized applications are used for "on-behalf-of" authentication scenarios, which are rare.
## HTTP 429: Too Many Requests
Throttling occurs when the number of requests exceeds the stated maximum for the
In general, requests to the Key Vault are limited to 4,000 requests/10 seconds. Exceptions are Key Operations, as documented in [Key Vault service limits](service-limits.md).

### Troubleshooting 429

Throttling is worked around using these techniques:

- Reduce the number of requests made to the Key Vault by determining if there are patterns to a requested resource, and attempt to cache them in the calling application.
-- When Key Vault throttling occurs, adapt the requesting code to use a exponential backoff for retrying. The algorithm is explained here: [How to throttle your app](overview-throttling.md#how-to-throttle-your-app-in-response-to-service-limits)
+- When Key Vault throttling occurs, adapt the requesting code to use an exponential backoff for retrying. The algorithm is explained here: [How to throttle your app](overview-throttling.md#how-to-throttle-your-app-in-response-to-service-limits)
-- If the number of requests cannot be reduced by caching and timed backoff does not work, then consider splitting the keys up into multiple Key Vaults. The service limit for a single subscription is 5x the individual Key Vault limit. If using more than 5 Key Vaults, consideration should be given to using multiple subscriptions.
+- If the number of requests cannot be reduced by caching and timed backoff does not work, then consider splitting the keys up into multiple Key Vaults. The service limit for a single subscription is 5x the individual Key Vault limit. If using more than five Key Vaults, consideration should be given to using multiple subscriptions.
Detailed guidance including request to increase limits, can be found here: [Key Vault throttling guidance](overview-throttling.md)
key-vault Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/versions.md
Previously updated : 06/30/2019 Last updated : 01/11/2023
Private endpoints now available in preview. Azure Private Link Service enables y
New features and integrations released this year: -- Integration with Azure Functions. For an example scenario leveraging [Azure Functions](../../azure-functions/index.yml) for key vault operations, see [Automate the rotation of a secret](../secrets/tutorial-rotation.md).
+- Integration with Azure Functions. For an example scenario using [Azure Functions](../../azure-functions/index.yml) for key vault operations, see [Automate the rotation of a secret](../secrets/tutorial-rotation.md).
- [Integration with Azure Databricks](./integrate-databricks-blob-storage.md). With this, Azure Databricks now supports two types of secret scopes: Azure Key Vault-backed and Databricks-backed. For more information, see [Create an Azure Key Vault-backed secret scope](/azure/databricks/security/secrets/secret-scopes#--create-an-azure-key-vault-backed-secret-scope) - [Virtual network service endpoints for Azure Key Vault](overview-vnet-service-endpoints.md).
New features and integrations released this year:
New features released this year: -- Managed storage account keys. Storage Account Keys feature added easier integration with Azure Storage. See the overview topic for more information, [Managed Storage Account Keys overview](../secrets/overview-storage-keys.md).-- Soft delete. Soft-delete feature improves data protection of your key vaults and key vault objects. See the overview topic for more information, [Soft-delete overview](./soft-delete-overview.md).
+- Managed storage account keys. Storage Account Keys feature added easier integration with Azure Storage. For more information, see [Managed Storage Account Keys overview](../secrets/overview-storage-keys.md).
+- Soft delete. Soft-delete feature improves data protection of your key vaults and key vault objects. For more information, see [Soft-delete overview](./soft-delete-overview.md).
## 2015
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-net.md
Title: Quickstart - Azure Key Vault keys client library for .NET (version 4)
-description: Learn how to create, retrieve, and delete keys from an Azure key vault using the .NET client library (version 4)
+ Title: Quickstart - Azure Key Vault keys client library for .NET
+description: Learn how to create, retrieve, and delete keys from an Azure key vault using the .NET client library
Last updated 01/04/2023
ms.devlang: csharp-+
-# Quickstart: Azure Key Vault key client library for .NET (SDK v4)
+# Quickstart: Azure Key Vault key client library for .NET
Get started with the Azure Key Vault key client library for .NET. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for cryptographic keys. You can securely store cryptographic keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete keys from an Azure key vault using the .NET key client library.
For more information about Key Vault and keys, see:
## Prerequisites * An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [.NET Core 3.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core)
+* [.NET 6 SDK or later](https://dotnet.microsoft.com/download)
* [Azure CLI](/cli/azure/install-azure-cli) * A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md).
From the command shell, install the Azure Key Vault key client library for .NET:
dotnet add package Azure.Security.KeyVault.Keys ```
-For this quickstart, you'll also need to install the Azure SDK client library for Azure Identity:
+For this quickstart, you'll also need to install the Azure Identity client library:
```dotnetcli dotnet add package Azure.Identity
using Azure.Security.KeyVault.Keys;
### Authenticate and create a client
-In this quickstart, logged in user is used to authenticate to key vault, which is preferred method for local development. For applications deployed to Azure, managed identity should be assigned to App Service or Virtual Machine, for more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/dotnet/azure/sdk/authentication#defaultazurecredential) class provided by the [Azure Identity client library](/dotnet/api/overview/azure/identity-readme) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
-In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. This example is using ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from [Azure Identity Library](/dotnet/api/overview/azure/identity-readme), which allows to use the same code across different environments with different options to provide identity. Fore more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
+
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
```csharp var keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
-var kvUri = "https://" + keyVaultName + ".vault.azure.net";
+var kvUri = $"https://{keyVaultName}.vault.azure.net";
var client = new KeyClient(new Uri(kvUri), new DefaultAzureCredential()); ```
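To see the whole flow in one place, here's a minimal sketch of the create, retrieve, and delete operations this quickstart covers, assuming the `client` created above and an illustrative key named `myKey`:

```csharp
// Create an RSA key, then read it back (client is the KeyClient created above).
await client.CreateKeyAsync("myKey", KeyType.Rsa);

KeyVaultKey key = await client.GetKeyAsync("myKey");
Console.WriteLine($"{key.Name} ({key.KeyType})");

// Deletion is a long-running operation; wait for it to finish before any purge.
DeleteKeyOperation operation = await client.StartDeleteKeyAsync("myKey");
await operation.WaitForCompletionAsync();
```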
await client.PurgeDeletedKeyAsync("myKey");
## Sample code
-Modify the .NET Core console app to interact with the Key Vault by completing the following steps:
+Modify the .NET console app to interact with the Key Vault by completing the following steps:
- Replace the code in *Program.cs* with the following code:
key-vault Quick Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/quick-create-template.md
If your environment meets the prerequisites and you're familiar with using ARM t
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Review the template
key-vault Overview Storage Keys Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys-powershell.md
Previously updated : 09/10/2019 Last updated : 01/11/2023 # Customer intent: As a developer, I want storage credentials and SAS tokens to be managed securely by Azure Key Vault. + # Manage storage account keys with Key Vault and Azure PowerShell (legacy)+ > [!IMPORTANT] > Key Vault Managed Storage Account Keys (legacy) is supported as-is with no more updates planned. Only account SAS is supported, with SAS definitions signed with a storage service version no later than 2018-03-28. > [!IMPORTANT] > We recommend using Azure Storage integration with Azure Active Directory (Azure AD), Microsoft's cloud-based identity and access management service. Azure AD integration is available for [Azure blobs and queues](../../storage/blobs/authorize-access-azure-active-directory.md), and provides OAuth2 token-based access to Azure Storage (just like Azure Key Vault).
-> Azure AD allows you to authenticate your client application by using an application or user identity, instead of storage account credentials. You can use an [Azure AD managed identity](../../active-directory/managed-identities-azure-resources/index.yml) when you run on Azure. Managed identities remove the need for client authentication and storing credentials in or with your application. Use below solution only when Azure AD authentication is not possible.
+> Azure AD allows you to authenticate your client application by using an application or user identity, instead of storage account credentials. You can use an [Azure AD managed identity](../../active-directory/managed-identities-azure-resources/index.yml) when you run on Azure. Managed identities remove the need for client authentication and storing credentials in or with your application. Use this solution only when Azure AD authentication is not possible.
An Azure storage account uses credentials comprising an account name and a key. The key is autogenerated and serves as a password, rather than as a cryptographic key. Key Vault manages storage account keys by periodically regenerating them in the storage account, and it provides shared access signature tokens for delegated access to resources in your storage account.
To complete this guide, you must first do the following:
- [Create a key vault](quick-create-powershell.md) - [Create an Azure storage account](../../storage/common/storage-account-create.md?tabs=azure-powershell). The storage account name must use only lowercase letters and numbers. The length of the name must be between 3 and 24 characters. - ## Manage storage account keys ### Connect to your Azure account
Authenticate your PowerShell session using the [Connect-AzAccount](/powershell/m
```azurepowershell-interactive Connect-AzAccount ```+ If you have multiple Azure subscriptions, you can list them using the [Get-AzSubscription](/powershell/module/az.accounts/get-azsubscription) cmdlet, and specify the subscription you wish to use with the [Set-AzContext](/powershell/module/az.accounts/set-azcontext) cmdlet. ```azurepowershell-interactive
Set-AzContext -SubscriptionId <subscriptionId>
### Set variables
-First, set the variables to be used by the PowerShell cmdlets in the following steps. Be sure to update the "YourResourceGroupName", "YourStorageAccountName", and "YourKeyVaultName" placeholders, and set $keyVaultSpAppId to `cfa8b339-82a2-471a-a3c9-0fc0be7a4093` (as specified in [Service principal application ID](#service-principal-application-id), above).
+First, set the variables to be used by the PowerShell cmdlets in the following steps. Be sure to update the "YourResourceGroupName", "YourStorageAccountName", and "YourKeyVaultName" placeholders, and set $keyVaultSpAppId to `cfa8b339-82a2-471a-a3c9-0fc0be7a4093` (as specified in [Service principal application ID](#service-principal-application-id)).
-We will also use the Azure PowerShell [Get-AzContext](/powershell/module/az.accounts/get-azcontext) and [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) cmdlets to get your user ID and the context of your Azure storage account.
+We'll also use the Azure PowerShell [Get-AzContext](/powershell/module/az.accounts/get-azcontext) and [Get-AzStorageAccount](/powershell/module/az.storage/get-azstorageaccount) cmdlets to get your user ID and the context of your Azure storage account.
```azurepowershell-interactive $resourceGroupName = <YourResourceGroupName>
Use the Azure PowerShell [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyv
Set-AzKeyVaultAccessPolicy -VaultName $keyVaultName -UserPrincipalName $userId -PermissionsToStorage get, list, delete, set, update, regeneratekey, getsas, listsas, deletesas, setsas, recover, backup, restore, purge ```
-Note that permissions for storage accounts aren't available on the storage account "Access policies" page in the Azure portal.
+The permissions for storage accounts aren't available on the storage account "Access policies" page in the Azure portal.
### Add a managed storage account to your Key Vault instance
Tags :
### Enable key regeneration
-If you want Key Vault to regenerate your storage account keys periodically, you can use the Azure PowerShell [Add-AzKeyVaultManagedStorageAccount](/powershell/module/az.keyvault/add-azkeyvaultmanagedstorageaccount) cmdlet to set a regeneration period. In this example, we set a regeneration period of thirty days. When it is time to rotate, Key Vault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time. This is the active key.
+If you want Key Vault to regenerate your storage account keys periodically, you can use the Azure PowerShell [Add-AzKeyVaultManagedStorageAccount](/powershell/module/az.keyvault/add-azkeyvaultmanagedstorageaccount) cmdlet to set a regeneration period. In this example, we set a regeneration period of 30 days. When it's time to rotate, Key Vault regenerates the inactive key and then sets the newly created key as active. The key used to issue SAS tokens is the active key.
```azurepowershell-interactive $regenPeriod = [System.Timespan]::FromDays(30)
$sasTemplate="sv=2018-03-28&ss=bfqt&srt=sco&sp=rw&spr=https"
|-|--| |`SignedVersion (sv)`|Required. Specifies the signed storage service version to use to authorize requests made with this account SAS. Must be set to version 2015-04-05 or later. **Key Vault supports versions no later than 2018-03-28**| |`SignedServices (ss)`|Required. Specifies the signed services accessible with the account SAS. Possible values include:<br /><br /> - Blob (`b`)<br />- Queue (`q`)<br />- Table (`t`)<br />- File (`f`)<br /><br /> You can combine values to provide access to more than one service. For example, `ss=bf` specifies access to the Blob and File endpoints.|
-|`SignedResourceTypes (srt)`|Required. Specifies the signed resource types that are accessible with the account SAS.<br /><br /> - Service (`s`): Access to service-level APIs (*e.g.*, Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)<br />- Container (`c`): Access to container-level APIs (*e.g.*, Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)<br />- Object (`o`): Access to object-level APIs for blobs, queue messages, table entities, and files(*e.g.* Put Blob, Query Entity, Get Messages, Create File, etc.)<br /><br /> You can combine values to provide access to more than one resource type. For example, `srt=sc` specifies access to service and container resources.|
+|`SignedResourceTypes (srt)`|Required. Specifies the signed resource types that are accessible with the account SAS.<br /><br /> - Service (`s`): Access to service-level APIs (*for example*, Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)<br />- Container (`c`): Access to container-level APIs (*for example*, Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)<br />- Object (`o`): Access to object-level APIs for blobs, queue messages, table entities, and files (*for example*, Put Blob, Query Entity, Get Messages, Create File, etc.)<br /><br /> You can combine values to provide access to more than one resource type. For example, `srt=sc` specifies access to service and container resources.|
|`SignedPermission (sp)`|Required. Specifies the signed permissions for the account SAS. Permissions are only valid if they match the specified signed resource type; otherwise they are ignored.<br /><br /> - Read (`r`): Valid for all signed resource types (Service, Container, and Object). Permits read permissions to the specified resource type.<br />- Write (`w`): Valid for all signed resource types (Service, Container, and Object). Permits write permissions to the specified resource type.<br />- Delete (`d`): Valid for Container and Object resource types, except for queue messages.<br />- Permanent Delete (`y`): Valid for Object resource type of Blob only.<br />- List (`l`): Valid for Service and Container resource types only.<br />- Add (`a`): Valid for the following Object resource types only: queue messages, table entities, and append blobs.<br />- Create (`c`): Valid for the following Object resource types only: blobs and files. Users can create new blobs or files, but may not overwrite existing blobs or files.<br />- Update (`u`): Valid for the following Object resource types only: queue messages and table entities.<br />- Process (`p`): Valid for the following Object resource type only: queue messages.<br/>- Tag (`t`): Valid for the following Object resource type only: blobs. Permits blob tag operations.<br/>- Filter (`f`): Valid for the following Object resource type only: blob. Permits filtering by blob tag.<br/>- Set Immutability Policy (`i`): Valid for the following Object resource type only: blob. Permits set/delete immutability policy and legal hold on a blob.|
-|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> Note that HTTP only is not a permitted value.|
+|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> HTTP only is not a permitted value.|
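As a worked example (added for illustration), the template assigned to `$sasTemplate` earlier in this section decodes against the table like this:

```csharp
// Decoding the sample account SAS template parameter by parameter:
//   sv=2018-03-28 : signed storage service version (the latest Key Vault supports)
//   ss=bfqt       : services - blob, file, queue, and table
//   srt=sco       : resource types - service, container, and object
//   sp=rw         : permissions - read and write
//   spr=https     : HTTPS-only requests
var sasTemplate = "sv=2018-03-28&ss=bfqt&srt=sco&sp=rw&spr=https";
```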
For more information about account SAS, see: [Create an account SAS](/rest/api/storageservices/create-account-sas)
For more information about account SAS, see:
### Set shared access signature definition in Key Vault
-Use the the Azure PowerShell [Set-AzKeyVaultManagedStorageSasDefinition](/powershell/module/az.keyvault/set-azkeyvaultmanagedstoragesasdefinition) cmdlet to create a shared access signature definition. You can provide the name of your choice to the `-Name` parameter.
+Use the Azure PowerShell [Set-AzKeyVaultManagedStorageSasDefinition](/powershell/module/az.keyvault/set-azkeyvaultmanagedstoragesasdefinition) cmdlet to create a shared access signature definition. You can provide the name of your choice to the `-Name` parameter.
```azurepowershell-interactive Set-AzKeyVaultManagedStorageSasDefinition -AccountName $storageAccountName -VaultName $keyVaultName -Name <YourSASDefinitionName> -TemplateUri $sasTemplate -SasType 'account' -ValidityPeriod ([System.Timespan]::FromDays(1))
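After the SAS definition is created, applications can read the current SAS token from Key Vault as a secret. The following C# sketch assumes the `Azure.Security.KeyVault.Secrets` library and the documented naming pattern, where the definition surfaces as a secret named `<storage-account-name>-<sas-definition-name>`; all names are placeholders:

```csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var client = new SecretClient(
    new Uri("https://<your-key-vault-name>.vault.azure.net"), // placeholder vault name
    new DefaultAzureCredential());

// The managed SAS definition is exposed as a secret whose value is a current account SAS token.
KeyVaultSecret sasSecret = await client.GetSecretAsync("<storage-account-name>-<sas-definition-name>");
string sasToken = sasSecret.Value; // fetch a fresh token before each use; Key Vault signs it with the active key
```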
key-vault Overview Storage Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/overview-storage-keys.md
Previously updated : 09/18/2019 Last updated : 01/11/2023 # Customer intent: As a developer, I want to use Azure Key Vault and Azure CLI for secure management of my storage credentials and shared access signature tokens.
Use the Azure CLI [az keyvault-set-policy](/cli/azure/keyvault?#az-keyvault-set-
az keyvault set-policy --name <YourKeyVaultName> --upn user@domain.com --storage-permissions get list delete set update regeneratekey getsas listsas deletesas setsas recover backup restore purge ```
-Note that permissions for storage accounts aren't available on the storage account "Access policies" page in the Azure portal.
+Permissions for storage accounts aren't available on the storage account "Access policies" page in the Azure portal.
### Create a Key Vault Managed storage account
- Create a Key Vault managed storage account using the Azure CLI [az keyvault storage](/cli/azure/keyvault/storage?#az-keyvault-storage-add) command. Set a regeneration period of 30 days. When it is time to rotate, KeyVault regenerates the key that is not active, and then sets the newly created key as active. Only one of the keys are used to issue SAS tokens at any one time, this is the active key. Provide the command the following parameter values:
+ Create a Key Vault managed storage account using the Azure CLI [az keyvault storage](/cli/azure/keyvault/storage?#az-keyvault-storage-add) command. Set a regeneration period of 30 days. When it's time to rotate, Key Vault regenerates the key that isn't active, and then sets the newly created key as active. Only one of the keys is used to issue SAS tokens at any one time; this is the active key. Provide the following parameter values to the command:
- `--vault-name`: Pass the name of your key vault. To find the name of your key vault, use the Azure CLI [az keyvault list](/cli/azure/keyvault?#az-keyvault-list) command. - `-n`: Pass the name of your storage account. To find the name of your storage account, use the Azure CLI [az storage account list](/cli/azure/storage/account?#az-storage-account-list) command.
SAS definition template will be passed to the `--template-uri` parameter in
|`SignedServices (ss)`|Required. Specifies the signed services accessible with the account SAS. Possible values include:<br /><br /> - Blob (`b`)<br />- Queue (`q`)<br />- Table (`t`)<br />- File (`f`)<br /><br /> You can combine values to provide access to more than one service. For example, `ss=bf` specifies access to the Blob and File endpoints.| |`SignedResourceTypes (srt)`|Required. Specifies the signed resource types that are accessible with the account SAS.<br /><br /> - Service (`s`): Access to service-level APIs (*for example*, Get/Set Service Properties, Get Service Stats, List Containers/Queues/Tables/Shares)<br />- Container (`c`): Access to container-level APIs (*for example*, Create/Delete Container, Create/Delete Queue, Create/Delete Table, Create/Delete Share, List Blobs/Files and Directories)<br />- Object (`o`): Access to object-level APIs for blobs, queue messages, table entities, and files (*for example*, Put Blob, Query Entity, Get Messages, Create File, etc.)<br /><br /> You can combine values to provide access to more than one resource type. For example, `srt=sc` specifies access to service and container resources.| |`SignedPermission (sp)`|Required. Specifies the signed permissions for the account SAS. Permissions are only valid if they match the specified signed resource type; otherwise they're ignored.<br /><br /> - Read (`r`): Valid for all signed resource types (Service, Container, and Object). Permits read permissions to the specified resource type.<br />- Write (`w`): Valid for all signed resource types (Service, Container, and Object). Permits write permissions to the specified resource type.<br />- Delete (`d`): Valid for Container and Object resource types, except for queue messages.<br />- Permanent Delete (`y`): Valid for Object resource type of Blob only.<br />- List (`l`): Valid for Service and Container resource types only.<br />- Add (`a`): Valid for the following Object resource types only: queue messages, table entities, and append blobs.<br />- Create (`c`): Valid for the following Object resource types only: blobs and files. Users can create new blobs or files, but may not overwrite existing blobs or files.<br />- Update (`u`): Valid for the following Object resource types only: queue messages and table entities.<br />- Process (`p`): Valid for the following Object resource type only: queue messages.<br/>- Tag (`t`): Valid for the following Object resource type only: blobs. Permits blob tag operations.<br/>- Filter (`f`): Valid for the following Object resource type only: blob. Permits filtering by blob tag.<br/>- Set Immutability Policy (`i`): Valid for the following Object resource type only: blob. Permits set/delete immutability policy and legal hold on a blob.|
-|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> Note that HTTP only isn't a permitted value.|
+|`SignedProtocol (spr)`|Optional. Specifies the protocol permitted for a request made with the account SAS. Possible values are both HTTPS and HTTP (`https,http`) or HTTPS only (`https`). The default value is `https,http`.<br /><br /> HTTP only isn't a permitted value. |
For more information about account SAS, see: [Create an account SAS](/rest/api/storageservices/create-account-sas)
key-vault Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-cli.md
In this quickstart, you create a key vault in Azure Key Vault with Azure CLI. Az
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] This quickstart requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
key-vault Quick Create Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-java.md
description: Provides a quickstart for the Azure Key Vault Secret client library
Previously updated : 10/20/2019 Last updated : 01/11/2023
ms.devlang: java
# Quickstart: Azure Key Vault Secret client library for Java
-Get started with the Azure Key Vault Secret client library for Java. Follow the steps below to install the package and try out example code for basic tasks.
+
+Get started with the Azure Key Vault Secret client library for Java. Follow these steps to install the package and try out example code for basic tasks.
Additional resources:
Additional resources:
- [Apache Maven](https://maven.apache.org) - [Azure CLI](/cli/azure/install-azure-cli)
-This quickstart assumes you are running [Azure CLI](/cli/azure/install-azure-cli) and [Apache Maven](https://maven.apache.org) in a Linux terminal window.
+This quickstart assumes you're running [Azure CLI](/cli/azure/install-azure-cli) and [Apache Maven](https://maven.apache.org) in a Linux terminal window.
## Setting up This quickstart uses the Azure Identity library with Azure CLI to authenticate the user to Azure services. Developers can also use Visual Studio or Visual Studio Code to authenticate their calls. For more information, see [Authenticate the client with Azure Identity client library](/java/api/overview/azure/identity-readme).
export KEY_VAULT_NAME=<your-key-vault-name>
## Object model The Azure Key Vault Secret client library for Java allows you to manage secrets. The [Code examples](#code-examples) section shows how to create a client, set a secret, retrieve a secret, and delete a secret.
-The entire console app is [below](#sample-code).
- ## Code examples+ ### Add directives Add the following directives to the top of your code:
import com.azure.security.keyvault.secrets.models.KeyVaultSecret;
``` ### Authenticate and create a client+ In this quickstart, a logged-in user is used to authenticate to Key Vault, which is the preferred method for local development. For applications deployed to Azure, a Managed Identity should be assigned to an App Service or Virtual Machine. For more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
-In the example below, the name of your key vault is expanded to the key vault URI, in the format `https://\<your-key-vault-name\>.vault.azure.net`. This example is using the ['DefaultAzureCredential()'](/java/api/com.azure.identity.defaultazurecredential) class, which allows to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/java/api/overview/azure/identity-readme).
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://\<your-key-vault-name\>.vault.azure.net`. This example uses the ['DefaultAzureCredential()'](/java/api/com.azure.identity.defaultazurecredential) class, which allows you to use the same code across different environments with different options to provide identity. For more information, see [Default Azure Credential Authentication](/java/api/overview/azure/identity-readme).
```java String keyVaultName = System.getenv("KEY_VAULT_NAME");
SecretClient secretClient = new SecretClientBuilder()
``` ### Save a secret
-Now that your application is authenticated, you can put a secret into your key vault using the `secretClient.setSecret` method. This requires a name for the secret -- we've assigned the value "mySecret" to the `secretName` variable in this sample.
+Now that your application is authenticated, you can put a secret into your key vault using the `secretClient.setSecret` method. This requires a name for the secret; we've assigned the value "mySecret" to the `secretName` variable in this sample.
```java secretClient.setSecret(new KeyVaultSecret(secretName, secretValue));
public class App {
``` ## Next steps
-In this quickstart you created a key vault, stored a secret, retrieved it, and then deleted it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a key vault, stored a secret, retrieved it, and then deleted it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - See the [Azure Key Vault developer's guide](../general/developers-guide.md)
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
Title: Quickstart - Azure Key Vault secrets client library for .NET (version 4)
-description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the .NET client library (version 4)
+ Title: Quickstart - Azure Key Vault secrets client library for .NET
+description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the .NET client library
Last updated 09/23/2020
ms.devlang: csharp-+
-# Quickstart: Azure Key Vault secret client library for .NET (SDK v4)
+# Quickstart: Azure Key Vault secret client library for .NET
Get started with the Azure Key Vault secret client library for .NET. [Azure Key Vault](../general/overview.md) is a cloud service that provides a secure store for secrets. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you learn how to create, retrieve, and delete secrets from an Azure key vault using the .NET client library.
For more information about Key Vault and secrets, see:
## Prerequisites * An Azure subscription - [create one for free](https://azure.microsoft.com/free/dotnet)
-* [.NET Core 3.1 SDK or later](https://dotnet.microsoft.com/download/dotnet-core)
-* [Azure CLI](/cli/azure/install-azure-cli)
-* [Azure PowerShell](/powershell/azure/install-az-ps)
-* A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md) [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md)
+* [.NET 6 SDK or later](https://dotnet.microsoft.com/download)
+* [Azure CLI](/cli/azure/install-azure-cli) or [Azure PowerShell](/powershell/azure/install-az-ps)
+* A Key Vault - you can create one using [Azure portal](../general/quick-create-portal.md), [Azure CLI](../general/quick-create-cli.md), or [Azure PowerShell](../general/quick-create-powershell.md)
This quickstart uses `dotnet` and the Azure CLI or Azure PowerShell.
From the command shell, install the Azure Key Vault secret client library for .N
dotnet add package Azure.Security.KeyVault.Secrets ```
-For this quickstart, you'll also need to install the Azure SDK client library for Azure Identity:
+For this quickstart, you'll also need to install the Azure Identity client library:
```dotnetcli dotnet add package Azure.Identity
Add the following directives to the top of *Program.cs*:
### Authenticate and create a client
-In this quickstart, logged in user is used to authenticate to key vault, which is preferred method for local development. For applications deployed to Azure, managed identity should be assigned to App Service or Virtual Machine, for more information, see [Managed Identity Overview](../../active-directory/managed-identities-azure-resources/overview.md).
+Application requests to most Azure services must be authorized. Using the [DefaultAzureCredential](/dotnet/azure/sdk/authentication#defaultazurecredential) class provided by the [Azure Identity client library](/dotnet/api/overview/azure/identity-readme) is the recommended approach for implementing passwordless connections to Azure services in your code. `DefaultAzureCredential` supports multiple authentication methods and determines which method should be used at runtime. This approach enables your app to use different authentication methods in different environments (local vs. production) without implementing environment-specific code.
-In below example, the name of your key vault is expanded to the key vault URI, in the format "https://\<your-key-vault-name\>.vault.azure.net". This example is using ['DefaultAzureCredential()'](/dotnet/api/azure.identity.defaultazurecredential) class from [Azure Identity Library](/dotnet/api/overview/azure/identity-readme), which allows to use the same code across different environments with different options to provide identity. For more information about authenticating to key vault, see [Developer's Guide](../general/developers-guide.md#authenticate-to-key-vault-in-code).
+In this quickstart, `DefaultAzureCredential` authenticates to key vault using the credentials of the local development user logged into the Azure CLI. When the application is deployed to Azure, the same `DefaultAzureCredential` code can automatically discover and use a managed identity that is assigned to an App Service, Virtual Machine, or other services. For more information, see [Managed Identity Overview](/azure/active-directory/managed-identities-azure-resources/overview).
+
+In this example, the name of your key vault is expanded to the key vault URI, in the format `https://<your-key-vault-name>.vault.azure.net`. For more information about authenticating to key vault, see [Developer's Guide](/azure/key-vault/general/developers-guide#authenticate-to-key-vault-in-code).
[!code-csharp[](~/samples-key-vault-dotnet-quickstart/key-vault-console-app/Program.cs?name=authenticate)]
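For orientation, here's a minimal self-contained sketch of the set, get, and delete operations this quickstart covers (the code include above remains the canonical sample; the secret name and value are illustrative):

```csharp
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

var keyVaultName = Environment.GetEnvironmentVariable("KEY_VAULT_NAME");
var client = new SecretClient(new Uri($"https://{keyVaultName}.vault.azure.net"), new DefaultAzureCredential());

// Set a secret, read it back, then delete it.
await client.SetSecretAsync("mySecret", "example-secret-value");

KeyVaultSecret secret = await client.GetSecretAsync("mySecret");
Console.WriteLine($"Retrieved '{secret.Name}'");

// Deletion must complete before PurgeDeletedSecretAsync can run.
DeleteSecretOperation operation = await client.StartDeleteSecretAsync("mySecret");
await operation.WaitForCompletionAsync();
```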
await client.PurgeDeletedSecretAsync("mySecret");
## Sample code
-Modify the .NET Core console app to interact with the Key Vault by completing the following steps:
+Modify the .NET console app to interact with the Key Vault by completing the following steps:
1. Replace the code in *Program.cs* with the following code:
key-vault Quick Create Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-portal.md
Previously updated : 09/03/2019 Last updated : 01/11/2023 #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store keys and passwords in Azure
Azure Key Vault is a cloud service that provides a secure store for secrets. You can securely store keys, passwords, certificates, and other secrets. Azure key vaults may be created and managed through the Azure portal. In this quickstart, you create a key vault, then use it to store a secret.
-For more information about, see
-- [Key Vault Overview](../general/overview.md)-- [Secrets Overview](about-secrets.md).
+For more information, see [Key Vault Overview](../general/overview.md) and [Secrets Overview](about-secrets.md).
## Prerequisites
To add a secret to the vault, follow these steps:
1. Navigate to your new key vault in the Azure portal 1. On the Key Vault settings pages, select **Secrets**.
-1. Click on **Generate/Import**.
+1. Select **Generate/Import**.
1. On the **Create a secret** screen, choose the following values: - **Upload options**: Manual. - **Name**: Type a name for the secret. The secret name must be unique within a Key Vault. The name must be a 1-127 character string, starting with a letter and containing only 0-9, a-z, A-Z, and -. For more information on naming, see [Key Vault objects, identifiers, and versioning](../general/about-keys-secrets-certificates.md#objects-identifiers-and-versioning) - **Value**: Type a value for the secret. Key Vault APIs accept and return secret values as strings.
- - Leave the other values to their defaults. Click **Create**.
+ - Leave the other values at their defaults. Select **Create**.
-Once that you receive the message that the secret has been successfully created, you may click on it on the list.
+Once you receive the message that the secret has been successfully created, you can select it in the list.
For more information on secret attributes, see [About Azure Key Vault secrets](./about-secrets.md) ## Retrieve a secret from Key Vault
-If you click on the current version, you can see the value you specified in the previous step.
+If you select the current version, you can see the value you specified in the previous step.
:::image type="content" source="../media/quick-create-portal/current-version-hidden.png" alt-text="Secret properties":::
When no longer needed, delete the resource group, which deletes the Key Vault an
## Next steps
-In this quickstart, you created a Key Vault and stored a secret in it. To learn more about Key Vault and how to integrate it with your applications, continue on to the articles below.
+In this quickstart, you created a Key Vault and stored a secret in it. To learn more about Key Vault and how to integrate it with your applications, continue on to these articles.
- Read an [Overview of Azure Key Vault](../general/overview.md) - Read [Secure access to a Key Vault](../general/security-features.md)
key-vault Storage Keys Sas Tokens Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/storage-keys-sas-tokens-code.md
Previously updated : 09/10/2019 Last updated : 01/11/2023 ms.devlang: csharp
kubernetes-fleet L4 Load Balancing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/kubernetes-fleet/l4-load-balancing.md
In this how-to guide, you'll set up layer 4 load balancing across workloads depl
az aks get-credentials --resource-group ${GROUP} --name ${MEMBER_CLUSTER_1} --file aks-member-1 ``` ## Deploy a sample workload to demo clusters
In this how-to guide, you'll set up layer 4 load balancing across workloads depl
KUBECONFIG=aks-member-1 kubectl apply -f https://raw.githubusercontent.com/Azure/AKS/master/examples/fleet/kuard/kuard-mcs.yaml ```
+ > [!NOTE]
+ > To expose the service via an internal IP instead of a public one, add the following annotation to the MultiClusterService:
+ >
+ > ```yaml
+ > apiVersion: networking.fleet.azure.com/v1alpha1
+ > kind: MultiClusterService
+ > metadata:
+ > name: kuard
+ > namespace: kuard-demo
+ > annotations:
+ > service.beta.kubernetes.io/azure-load-balancer-internal: "true"
+ > ...
+ > ```
++ Output will look similar to the following example: ```console
lighthouse Tenants Users Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/concepts/tenants-users-roles.md
Title: Tenants, users, and roles in Azure Lighthouse scenarios description: Understand how Azure Active Directory tenants, users, and roles can be used in Azure Lighthouse scenarios. Previously updated : 08/02/2022 Last updated : 01/13/2023
All [built-in roles](../../role-based-access-control/built-in-roles.md) are curr
In some cases, a role that had previously been supported with Azure Lighthouse may become unavailable. For example, if the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission is added to a role that previously didn't have that permission, that role can no longer be used when onboarding new delegations. Users who had already been assigned the role will still be able to work on previously delegated resources, but they won't be able to perform tasks that use the [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission.
+> [!IMPORTANT]
+> When assigning roles, be sure to review the [actions](../../role-based-access-control/role-definitions.md) specified for each role. In some cases, even though roles with [`DataActions`](../../role-based-access-control/role-definitions.md#dataactions) permission are not supported, the actions included in a role may allow access to data, because data is exposed through access keys rather than accessed via the user's identity. For example, the [Virtual Machine Contributor](/azure/role-based-access-control/built-in-roles) role includes the `Microsoft.Storage/storageAccounts/listKeys/action` action, which returns storage account access keys that could be used to retrieve certain customer data.
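To illustrate (a sketch added for this point, assuming the `Azure.ResourceManager.Storage` management library and placeholder identifiers), a principal holding that action can retrieve account keys through the control plane alone, with no `DataActions` permission involved:

```csharp
using Azure.Identity;
using Azure.ResourceManager;
using Azure.ResourceManager.Storage;

// Placeholder identifiers for illustration only.
var storageAccountId = StorageAccountResource.CreateResourceIdentifier(
    "<subscription-id>", "<resource-group>", "<storage-account-name>");

var arm = new ArmClient(new DefaultAzureCredential());
StorageAccountResource account = arm.GetStorageAccountResource(storageAccountId);

// This call maps to the control-plane action Microsoft.Storage/storageAccounts/listKeys/action.
await foreach (StorageAccountKey key in account.GetKeysAsync())
{
    Console.WriteLine($"{key.KeyName} retrieved"); // key.Value would unlock the data plane
}
```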
+ > [!NOTE] > As soon as a new applicable built-in role is added to Azure, it can be assigned when [onboarding a customer using Azure Resource Manager templates](../how-to/onboard-customer.md). There may be a delay before the newly-added role becomes available in Partner Center when [publishing a managed service offer](../how-to/publish-managed-services-offers.md). Similarly, if a role becomes unavailable, you may still see it in Partner Center for a period of time; however, you won't be able to publish new offers using such roles.
load-balancer Quickstart Basic Internal Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-internal-load-balancer-cli.md
Get started with Azure Load Balancer by using the Azure CLI to create an interna
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] This quickstart requires version 2.0.28 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
load-balancer Quickstart Basic Public Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/quickstart-basic-public-load-balancer-cli.md
Get started with Azure Load Balancer by using the Azure portal to create a basic
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/basic/virtual-network-ipv4-ipv6-dual-stack-cli.md
To deploy a dual stack (IPV4 + IPv6) application using Standard Load Balancer, s
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Configure Vm Scale Set Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/configure-vm-scale-set-cli.md
In this article, you'll learn how to configure a Virtual Machine Scale Set with
- You need an Azure Virtual Network for the Virtual Machine Scale Set. - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Cross Region Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/cross-region-overview.md
This region doesn't affect how the traffic will be routed. If a home region goes
* East Asia * US Gov Virginia * UK South
+* West Europe
> [!NOTE] > You can only deploy your cross-region load balancer or a Public IP in the Global tier in one of the regions listed above.
load-balancer Ipv6 Add To Existing Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/ipv6-add-to-existing-vnet-cli.md
This article shows you how to add IPv6 addresses to an application that is using
- This article assumes that you deployed a Standard Load Balancer as described in [Quickstart: Create a Standard Load Balancer - Azure CLI](../load-balancer/quickstart-load-balancer-standard-public-cli.md). - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Load Balancer Distribution Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-distribution-mode.md
Set the value of the `LoadDistribution` element for the type of load balancing r
# [**CLI**](#tab/azure-cli) Use Azure CLI to change the load-balancer distribution settings on an existing load-balancing rule. The following command updates the distribution mode:
load-balancer Manage Inbound Nat Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/manage-inbound-nat-rules.md
In this article, you'll learn how to add and remove an inbound NAT rule for both
- A standard public load balancer in your subscription. For more information on creating an Azure Load Balancer, see [Quickstart: Create a public load balancer to load balance VMs using the Azure portal](quickstart-load-balancer-standard-public-portal.md). The load balancer name for the examples in this article is **myLoadBalancer**. - If you choose to install and use PowerShell locally, this article requires the Azure PowerShell module version 5.4.1 or later. Run `Get-Module -ListAvailable Az` to find the installed version. If you need to upgrade, see [Install Azure PowerShell module](/powershell/azure/install-Az-ps). If you're running PowerShell locally, you also need to run `Connect-AzAccount` to create a connection with Azure.
load-balancer Quickstart Load Balancer Standard Internal Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-internal-cli.md
Get started with Azure Load Balancer by using the Azure CLI to create an interna
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] This quickstart requires version 2.0.28 or later of the Azure CLI. If you're using Azure Cloud Shell, the latest version is already installed.
load-balancer Quickstart Load Balancer Standard Public Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/quickstart-load-balancer-standard-public-cli.md
Get started with Azure Load Balancer by using the Azure CLI to create a public l
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Load Balancer Linux Cli Load Balance Multiple Websites Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-load-balance-multiple-websites-vm.md
This Azure CLI script sample creates a virtual network with two virtual machines
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
load-balancer Load Balancer Linux Cli Sample Nlb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-nlb.md
This Azure CLI script example creates everything needed to run several Ubuntu vi
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
load-balancer Load Balancer Linux Cli Sample Zonal Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zonal-frontend.md
This Azure CLI script example creates everything needed to run several Ubuntu vi
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
load-balancer Load Balancer Linux Cli Sample Zone Redundant Frontend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/scripts/load-balancer-linux-cli-sample-zone-redundant-frontend.md
This Azure CLI script example creates everything needed to run several Ubuntu vi
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
load-balancer Tutorial Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-gateway-cli.md
In this tutorial, you learn how to:
> * Create a gateway load balancer. > * Chain a load balancer frontend to gateway load balancer. - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Upgrade Basic Standard Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/upgrade-basic-standard-virtual-machine-scale-sets.md
The PowerShell module performs the following functions:
>[!NOTE] > Migrating _internal_ Basic Load Balancers where the backend VMs or VMSS instances do not have Public IP Addresses assigned requires additional action post-migration to enable backend pool members to connect to the internet. The recommended approach is to create a NAT Gateway and assign it to the backend pool members' subnet (see: [**Integrate NAT Gateway with Internal Load Balancer**](../virtual-network/nat-gateway/tutorial-nat-gateway-load-balancer-internal-portal.md)). Alternatively, Public IP Addresses can be allocated to each VMSS instance by adding a Public IP Configuration to the Network Profile (see: [**VMSS Public IPv4 Address Per Virtual Machine**](../virtual-machine-scale-sets/virtual-machine-scale-sets-networking.md)).
+>[!NOTE]
+> If the Virtual Machine Scale Set in the Load Balancer backend pool has Public IP Addresses in its network configuration, the Public IP Addresses will change during migration. (The Public IPs must be removed prior to the migration, then added back post-migration with a Standard SKU configuration.)
+ ### Unsupported Scenarios - Basic Load Balancers with a Virtual Machine Scale Set backend pool member that is also a member of a backend pool on a different load balancer
The PowerShell module performs the following functions:
- Basic Load Balancers with IPv6 frontend IP configurations - Basic Load Balancers with a Virtual Machine Scale Set backend pool member configured with 'Flexible' orchestration mode - Basic Load Balancers with a Virtual Machine Scale Set backend pool member where one or more Virtual Machine Scale Set instances have ProtectFromScaleSetActions Instance Protection policies enabled-- Basic Load Balancers with a Public IP Configuration in the associated Virtual Machine Scale Sets' Network Profile (where a Basic SKU Public IP Address is assigned to each instance) - Migrating a Basic Load Balancer to an existing Standard Load Balancer ### Prerequisites
load-balancer Virtual Network Ipv4 Ipv6 Dual Stack Standard Load Balancer Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/virtual-network-ipv4-ipv6-dual-stack-standard-load-balancer-cli.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application usin
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
load-balancer Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/whats-new.md
The product group is actively working on resolutions for the following known iss
| - ||| | IP based LB outbound IP | IP based LB uses Azure's Default Outbound Access IP for outbound | To prevent outbound access from this IP, use NAT Gateway for a predictable IP address and to prevent SNAT port exhaustion | | numberOfProbes, "Unhealthy threshold" | Health probe configuration property numberOfProbes, otherwise known as "Unhealthy threshold" in Portal, is not respected. Load Balancer health probes will probe up/down immediately after 1 probe regardless of the property's configured value | To reflect the current behavior, set the value of numberOfProbes ("Unhealthy threshold" in Portal) to 1 |
+|Cross region balancer in West Europe| Currently, there's a limited number of IP addresses available in West Europe for Azure's cross-region Load Balancer. This may impact customers' ability to deploy cross-region load balancers in the West Europe region.| We recommend that customers use another home region as part of their cross-region deployment.|
logic-apps Logic Apps Enterprise Integration Create Integration Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/logic-apps-enterprise-integration-create-integration-account.md
For this task, you can use the Azure portal, [Azure CLI](/cli/azure/resource#az-
### [Azure CLI](#tab/azure-cli) 1. To add the [az logic integration-account](/cli/azure/logic/integration-account) extension, use the [az extension add](/cli/azure/extension#az-extension-add) command:
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
> [!TIP] > Azure Machine Learning workspaces are MLflow-compatible, which means you can use Azure Machine Learning workspaces in the same way that you use an MLflow tracking server. This compatibility has the following advantages:
-> * You can use Azure Machine Learning workspaces as your tracking server for any experiment you're running with MLflow, whether it runs on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen.
-> * You can run any training routine that uses MLflow in Azure Machine Learning without changes. MLflow also supports model management and model deployment capabilities.
+> * We don't host MLflow server instances under the hood; the workspace itself speaks the MLflow standard.
+> * You can use Azure Machine Learning workspaces as your tracking server for any MLflow code, whether it runs on Azure Machine Learning or not. You only need to configure MLflow to point to the workspace where the tracking should happen.
+> * You can run any training routine that uses MLflow in Azure Machine Learning without any change.
+> [!NOTE]
+> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2; we recommend using MLflow for logging. This strategy makes your training routines cloud-agnostic and portable, removing any dependency on Azure Machine Learning from your code.
## Tracking with MLflow
-Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments, whether you created the experiments via the Azure Machine Learning Python SDK, the Azure Machine Learning CLI, or Azure Machine Learning studio. We recommend using MLflow for tracking experiments. To get started, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md).
+Azure Machine Learning uses MLflow Tracking for metric logging and artifact storage for your experiments. When connected to Azure Machine Learning, all tracking performed using MLflow is materialized in the workspace you're working on. To learn how to instrument your experiments and training routines for tracking, see [Log metrics, parameters, and files with MLflow](how-to-log-view-metrics.md). You can also use MLflow to [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
-> [!NOTE]
-> Unlike the Azure Machine Learning SDK v1, there's no logging functionality in the SDK v2. We recommend that you use MLflow for logging.
-With MLflow Tracking, you can connect Azure Machine Learning as the back end of your MLflow experiments. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
+### Centralize tracking
+
+You can connect MLflow to Azure Machine Learning workspaces even when you are running locally or in a different cloud. The workspace provides a centralized, secure, and scalable location to store training metrics and models.
Capabilities include:
Capabilities include:
* [Track Azure Databricks machine learning experiments](how-to-use-mlflow-azure-databricks.md) with MLflow in Azure Machine Learning. * [Track Azure Synapse Analytics machine learning experiments](how-to-use-mlflow-azure-synapse.md) with MLflow in Azure Machine Learning.
-You can also use MLflow to [Query & compare experiments and runs with MLflow](how-to-track-experiments-mlflow.md).
+### Example notebooks
+* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
+* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
+* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
+* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-and-log/logging_and_customizing_models.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
+* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/runs-management/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
> [!IMPORTANT]
-> - MLflow in R support is limited to tracking experiment's metrics, parameters and models on Azure Machine Learning jobs. Interactive training on RStudio, Posit (formerly RStudio Workbench) or Jupyter Notebooks with R kernels is not supported. Model management and registration is not supported using the MLflow R SDK. As an alternative, use Azure ML CLI or Azure ML studio for model registration and management. View the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
+> - MLflow support in R is limited to tracking an experiment's metrics, parameters, and models on Azure Machine Learning jobs. Interactive training on RStudio, Posit (formerly RStudio Workbench), or Jupyter Notebooks with R kernels isn't supported. Model management and registration aren't supported using the MLflow R SDK. As an alternative, use the Azure ML CLI or [Azure ML studio](https://ml.azure.com) for model registration and management. See the following [R example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/r).
> - MLflow support in Java is limited to tracking an experiment's metrics and parameters on Azure Machine Learning jobs. Artifacts and models can't be tracked using the MLflow Java SDK. As an alternative, use the `Outputs` folder in jobs along with the method `mlflow.save_model` to save the models (or artifacts) you want to capture, as in the sketch after this note. See the following [Java example about using the MLflow tracking client with Azure Machine Learning](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/single-step/java/iris).
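A minimal Python sketch of that `Outputs`-folder workaround (a scikit-learn model is assumed purely for illustration; any flavor-specific `save_model` works the same way):

```python
import mlflow.sklearn
from sklearn.linear_model import LogisticRegression

# Train a trivial model; in a real job this is your training routine.
model = LogisticRegression().fit([[0.0], [1.0]], [0, 1])

# Anything written under "outputs" is captured along with the job.
mlflow.sklearn.save_model(model, path="outputs/model")
```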
-### Example notebooks
-
-* [Training and tracking an XGBoost classifier with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_classification_mlflow.ipynb): Demonstrates how to track experiments by using MLflow, log models, and combine multiple flavors into pipelines.
-* [Training and tracking an XGBoost classifier with MLflow using service principal authentication](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_service_principal.ipynb): Demonstrates how to track experiments by using MLflow from compute that's running outside Azure Machine Learning. It shows how to authenticate against Azure Machine Learning services by using a service principal.
-* [Hyper-parameter optimization using Hyperopt and nested runs in MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/train-with-mlflow/xgboost_nested_runs.ipynb): Demonstrates how to use child runs in MLflow to do hyper-parameter optimization for models by using the popular library Hyperopt. It shows how to transfer metrics, parameters, and artifacts from child runs to parent runs.
-* [Logging models with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/logging-models/logging_model_with_mlflow.ipynb): Demonstrates how to use the concept of models instead of artifacts with MLflow, including how to construct custom models.
-* [Manage runs and experiments with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/run-history/run_history.ipynb): Demonstrates how to query experiments, runs, metrics, parameters, and artifacts from Azure Machine Learning by using MLflow.
- ## Model registries with MLflow Azure Machine Learning supports MLflow for model management. This support offers a convenient way to manage the entire model lifecycle for users who are familiar with the MLflow client.
To learn more about how to manage models by using the MLflow API in Azure Machin
* [Manage model registries with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/model-management/model_management.ipynb): Demonstrates how to manage models in registries by using MLflow.
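For instance, a minimal sketch of registering and listing a model through the MLflow client (the run ID and model name are placeholders):

```python
import mlflow
from mlflow.tracking import MlflowClient

# Register a model logged by a previous run; both identifiers are placeholders.
mlflow.register_model(model_uri="runs:/<run-id>/model", name="sketch-model")

# List the registered versions.
client = MlflowClient()
for version in client.search_model_versions("name='sketch-model'"):
    print(version.name, version.version)
```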
-## Model deployments of MLflow models
+## Model deployment with MLflow
-You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) so that you can apply the model management capabilities and no-code deployment offering in Azure Machine Learning. Azure Machine Learning supports deploying models to both real-time and batch endpoints. You can use the `azureml-mlflow` MLflow plug-in, the Azure Machine Learning CLI v2, and the user interface in Azure Machine Learning studio.
+You can [deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) and take advantage of the improved experience when you use this type of model. Azure Machine Learning supports deploying MLflow models to both real-time and batch endpoints without having to specify an environment or a scoring script. Deployment is supported through the MLflow SDK, the Azure Machine Learning CLI, the Azure Machine Learning SDK for Python, or the [Azure Machine Learning studio](https://ml.azure.com) portal.
-Learn more at [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md).
+Learn more at [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
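As a rough sketch of a no-code deployment through the MLflow deployments plug-in, assuming the `azureml-mlflow` package (endpoint, deployment, and model names are placeholders):

```python
import mlflow
from mlflow.deployments import get_deploy_client

# The deployment client targets the same workspace as the tracking URI.
deployment_client = get_deploy_client(mlflow.get_tracking_uri())

deployment_client.create_deployment(
    name="sketch-deployment",
    endpoint="sketch-endpoint",
    model_uri="models:/sketch-model/1",  # a registered model version
)
```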
### Example notebooks * [Deploy MLflow to Online Endpoints](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK.
-* [Deploy MLflow to Online Endpoints with safe rollout](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints_progressive.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK with progressive rollout of models and the deployment of multiple model's versions in the same endpoint.
+* [Deploy MLflow to Online Endpoints with safe rollout](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_online_endpoints_progresive.ipynb): Demonstrates how to deploy models in MLflow format to online endpoints using MLflow SDK with progressive rollout of models and the deployment of multiple model's versions in the same endpoint.
* [Deploy MLflow to web services (V1)](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/deploy/mlflow_sdk_web_service.ipynb): Demonstrates how to deploy models in MLflow format to web services (ACI/AKS v1) using MLflow SDK. * [Deploying models trained in Azure Databricks to Azure Machine Learning with MLflow](https://github.com/Azure/azureml-examples/blob/main/sdk/python/using-mlflow/no-code-deployment/track_with_databricks_deploy_aml.ipynb): Demonstrates how to train models in Azure Databricks and deploy them in Azure ML. It also covers cases where you want to track the experiments with the MLflow instance in Azure Databricks.
Learn more at [Train machine learning models with MLflow projects and Azure Mach
### Example notebooks
-* [Train an MLflow project on a local compute](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/train-projects-local.ipynb)
-* [Train an MLflow project on remote compute](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-remote/train-projects-remote.ipynb).
+* [Track an MLflow project in Azure Machine Learning workspaces](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-local/train-projects-local.ipynb).
+* [Train and run an MLflow project on Azure Machine Learning jobs](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/track-and-monitor-experiments/using-mlflow/train-projects-remote/train-projects-remote.ipynb).
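A hedged sketch of submitting a project as an Azure Machine Learning job, assuming the `azureml` projects backend (preview) and an MLflow project in the current folder (the compute name and parameters are placeholders):

```python
import mlflow

# Runs the local MLflow project as an Azure Machine Learning job.
mlflow.projects.run(
    uri=".",
    backend="azureml",
    backend_config={"COMPUTE": "<compute-cluster-name>"},  # placeholder compute
    parameters={"alpha": 0.3},  # illustrative project parameter
)
```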
## MLflow SDK, Azure Machine Learning v2, and Azure Machine Learning studio capabilities The following table shows which operations are supported by each of the tools available in the machine learning lifecycle.
-| Feature | MLflow SDK | Azure Machine Learning v2 (CLI/SDK) | Azure Machine Learning studio |
+| Feature | MLflow SDK | Azure Machine Learning CLI/SDK | Azure Machine Learning studio |
| :- | :-: | :-: | :-: | | Track and log metrics, parameters, and models | **&check;** | | | | Retrieve metrics, parameters, and models | **&check;** | <sup>1</sup> | **&check;** |
-| Submit training jobs with MLflow projects | **&check;** <sup>2</sup> | | |
-| Submit training jobs with inputs and outputs | | **&check;** | **&check;** |
-| Submit training jobs by using machine learning pipelines | | **&check;** | **&check;** |
+| Submit training jobs | **&check;** <sup>2</sup> | **&check;** | **&check;** |
+| Submit training jobs with Azure Machine Learning data assets | | **&check;** | **&check;** |
+| Submit training jobs with machine learning pipelines | | **&check;** | **&check;** |
| Manage experiments and runs | **&check;** | **&check;** | **&check;** | | Manage MLflow models | **&check;**<sup>3</sup> | **&check;** | **&check;** | | Manage non-MLflow models | | **&check;** | **&check;** |
The following table shows which operations are supported by each of the tools av
> [!NOTE] > - <sup>1</sup> Only artifacts and models can be downloaded.
-> - <sup>2</sup> On preview.
+> - <sup>2</sup> Using MLflow projects (preview).
> - <sup>3</sup> Some operations may not be supported. View [Manage model registries in Azure Machine Learning with MLflow](how-to-manage-models-mlflow.md) for details.
-> - <sup>4</sup> Deployment of MLflow models to batch inference by using the MLflow SDK is not possible at the moment. View [Deploy MLflow models to Azure Machine Learning](how-to-deploy-mlflow-models.md) for details.
+> - <sup>4</sup> Deployment of MLflow models to batch inference by using the MLflow SDK is not possible at the moment. As an alternative, see [Deploy and run MLflow models in Spark jobs](how-to-deploy-mlflow-model-spark-jobs.md).
## Next steps
-* [Track machine learning experiments and models running locally or in the cloud](how-to-use-mlflow-cli-runs.md) with MLflow in Azure Machine Learning.
+* [Concept: From artifacts to models in MLflow](concept-mlflow-models.md).
+* [How-to: Configure MLflow for Azure Machine Learning](how-to-use-mlflow-configure-tracking.md).
+* [How-to: Migrate logging from SDK v1 to MLflow](reference-migrate-sdk-v1-mlflow-tracking.md).
+* [How-to: Track ML experiments and models with MLflow](how-to-use-mlflow-cli-runs.md).
+* [How-to: Log MLflow models](how-to-log-mlflow-models.md).
+* [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
machine-learning How To Create Component Pipelines Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-component-pipelines-cli.md
You can check component details and manage the component using CLI (v2). Use `az
## Next steps -- Try out [CLI v2 component example](https://github.com/Azure/azureml-examples/tree/sdk-preview/cli/jobs/pipelines-with-components)
+- Try out [CLI v2 component example](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components)
machine-learning How To Deploy Mlflow Models Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-mlflow-models-online-endpoints.md
version = registered_model.version
:::image type="content" source="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" lightbox="media/how-to-deploy-mlflow-models-online-endpoints/review-screen-ncd.png" alt-text="Screenshot showing NCD review screen":::
-1. Assign all the traffic to the deployment
-
- So far, the endpoint has one deployment, but none of its traffic is assigned to it. Let's assign it.
+1. Assign all the traffic to the deployment: So far, the endpoint has one deployment, but no traffic is assigned to it. Let's assign it.
# [Azure CLI](#tab/cli)
- *This step in not required in the Azure CLI since we used the `--all-traffic` during creation.*
+ *This step isn't required in the Azure CLI because we used the `--all-traffic` flag during creation. If you need to change traffic, you can use the `az ml online-endpoint update --traffic` command, as explained in [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
# [Python (Azure ML SDK)](#tab/sdk)
version = registered_model.version
# [Azure CLI](#tab/cli)
- *This step in not required in the Azure CLI since we used the `--all-traffic` during creation.*
+ *This step isn't required in the Azure CLI because we used the `--all-traffic` flag during creation. If you need to change traffic, you can use the `az ml online-endpoint update --traffic` command, as explained in [Progressively update traffic](how-to-deploy-mlflow-models-online-progressive.md#progressively-update-the-traffic).*
# [Python (Azure ML SDK)](#tab/sdk)
machine-learning How To Deploy Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md
Managed online endpoints help to deploy your ML models in a turnkey manner. Mana
The main example in this doc uses managed online endpoints for deployment. To use Kubernetes instead, see the notes in this document inline with the managed online endpoint discussion.
+> [!TIP]
+> To create managed online endpoints in the Azure Machine Learning studio, see [Use managed online endpoints in the studio](how-to-use-managed-online-endpoint-studio.md).
+ ## Prerequisites # [Azure CLI](#tab/azure-cli)
machine-learning How To Troubleshoot Online Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md
ml_client.begin_create_or_update(online_deployment, local=True)
* `ml_client` is an instance of the `MLClient` class, and `online_deployment` is an instance of either the `ManagedOnlineDeployment` class or the `KubernetesOnlineDeployment` class.
+## [Studio](#tab/studio)
+
+The studio doesn't support local endpoints/deployments. See the Azure CLI or Python tabs for steps to perform deployment locally.
+ As part of local deployment, the following steps take place:
To debug conda installation problems, try the following:
## Get container logs
-You can't get direct access to the VM where the model is deployed. However, you can get logs from some of the containers that are running on the VM. The amount of information depends on the provisioning status of the deployment. If the specified container is up and running you'll see its console output, otherwise you'll get a message to try again later.
+You can't get direct access to the VM where the model is deployed. However, you can get logs from some of the containers that are running on the VM. The amount of information you get depends on the provisioning status of the deployment. If the specified container is up and running, you'll see its console output; otherwise, you'll get a message to try again later.
+
+There are two types of containers that you can get the logs from:
+- Inference server: Logs include the console log (from [the inference server](how-to-inference-server-http.md)), which contains the output of print/logging functions from your scoring script (`score.py`).
+- Storage initializer: Logs contain information on whether code and model data were successfully downloaded to the container. This container runs before the inference server container starts.
# [Azure CLI](#tab/cli)
-To see log output from container, use the following CLI command:
+To see log output from a container, use the following CLI command:
```azurecli az ml online-deployment get-logs -e <endpoint-name> -n <deployment-name> -l 100
To see information about how to set these parameters, and if current values are
az ml online-deployment get-logs -h ```
-By default the logs are pulled from the inference server. Logs include the console log from the inference server, which contains print/log statements from your `score.py' code.
+By default, the logs are pulled from the inference server.
> [!NOTE] > If you use Python logging, ensure you use the correct logging level order for the messages to be published to logs. For example, INFO. -
-You can also get logs from the storage initializer container by passing `ΓÇô-container storage-initializer`. These logs contain information on whether code and model data were successfully downloaded to the container.
+You can also get logs from the storage initializer container by passing `--container storage-initializer`.
Add `--help` and/or `--debug` to commands to see more information.
ml_client.online_deployments.get_logs(
To see information about how to set these parameters, see [reference for get-logs](/python/api/azure-ai-ml/azure.ai.ml.operations.onlinedeploymentoperations#azure-ai-ml-operations-onlinedeploymentoperations-get-logs)
-By default the logs are pulled from the inference server. Logs include the console log from the inference server, which contains print/log statements from your `score.py' code.
+By default, the logs are pulled from the inference server.
> [!NOTE] > If you use Python logging, ensure you use the correct logging level order for the messages to be published to logs. For example, INFO.
-You can also get logs from the storage initializer container by adding `container_type="storage-initializer"` option. These logs contain information on whether code and model data were successfully downloaded to the container.
+You can also get logs from the storage initializer container by adding the `container_type="storage-initializer"` option.
```python ml_client.online_deployments.get_logs(
ml_client.online_deployments.get_logs(
) ```
+# [Studio](#tab/studio)
+
+To see log output from a container, use the **Endpoints** page in the studio:
+
+1. In the left navigation bar, select **Endpoints**.
+1. (Optional) Create a filter on compute type to show only managed compute types.
+1. Select an endpoint's name to view the endpoint's details page.
+1. Select the **Deployment logs** tab in the endpoint's details page.
+1. Use the dropdown to select the deployment whose log you want to see.
++
+The logs are pulled from the inference server.
+
+To get logs from the storage initializer container, use the Azure CLI or Python SDK (see each tab for details).
+ For a Kubernetes online endpoint, administrators can directly access the cluster where the model is deployed, which gives them more flexibility to check the logs in Kubernetes. For example:
ml_client.online_deployments.get_logs(
) ```
+#### [Studio](#tab/studio)
+
+Use the **Endpoints** page in the studio:
+
+1. In the left navigation bar, select **Endpoints**.
+1. (Optional) Create a filter on compute type to show only managed compute types.
+1. Select an endpoint name to view the endpoint's details page.
+1. Select the **Deployment logs** tab in the endpoint's details page.
+1. Use the dropdown to select the deployment whose log you want to see.
+ ### ERROR: OutOfCapacity
For example, if image is `testacr.azurecr.io/azureml/azureml_92a029f831ce58d2ed0
#### Unable to download user model
-It is possible that the user model can't be found. Check [container logs](#get-container-logs) to get more details.
+It is possible that the user's model can't be found. Check [container logs](#get-container-logs) to get more details.
-Make sure the model is registered to the same workspace as the deployment. Use the `show` command or equivalent Python method to show details for a model in a workspace.
--- For example:
+Make sure the model is registered to the same workspace as the deployment. To show details for a model in a workspace:
- #### [Azure CLI](#tab/cli)
+#### [Azure CLI](#tab/cli)
- ```azurecli
- az ml model show --name <model-name> --version <version>
- ```
-
- #### [Python SDK](#tab/python)
+```azurecli
+az ml model show --name <model-name> --version <version>
+```
- ```python
- ml_client.models.get(name="<model-name>", version=<version>)
- ```
-
+#### [Python SDK](#tab/python)
+
+```python
+ml_client.models.get(name="<model-name>", version=<version>)
+```
+
+#### [Studio](#tab/studio)
+
+See the **Models** page in the studio:
- > [!WARNING]
- > You must specify either version or label to get the model information.
+1. In the left navigation bar, select **Models**.
+1. Select a model's name to view the model's details page.
+++
+> [!WARNING]
+> You must specify either version or label to get the model's information.
You can also check if the blobs are present in the workspace storage account.
You can also check if the blobs are present in the workspace storage account.
) ```
+ #### [Studio](#tab/studio)
+
+ You can't see logs from the storage initializer in the studio. Use the Azure CLI or Python SDK (see each tab for details).
+
#### Resource requests greater than limits
When you access online endpoints with REST requests, the returned status codes a
Below are common error codes when consuming managed online endpoints with REST requests:
-| Status code| Reason phrase | Why this code might get returned |
-| | | |
-| 200 | OK | Your model executed successfully, within your latency bound. |
-| 401 | Unauthorized | You don't have permission to do the requested action, such as score, or your token is expired. |
-| 404 | Not found | The endpoint doesn't have any valid deployment with positive weight. |
-| 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config.|
-| 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
-| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow maximum 2 * `max_concurrent_requests_per_instance` * `instance_count` requests in parallel at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`, respectively. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff). Doing so can give the system time to adjust. Apart from enable auto-scaling, you could also increase the number of instances by using the below [code](#how-to-prevent-503-status-codes). |
-| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints.|
-| 500 | Internal server error | AzureML-provisioned infrastructure is failing. |
+| Status code | Reason phrase | Why this code might get returned |
+| -- | - | - |
+| 200 | OK | Your model executed successfully, within your latency bound. |
+| 401 | Unauthorized | You don't have permission to do the requested action, such as score, or your token has expired. |
+| 404 | Not found | The endpoint doesn't have any valid deployment with positive weight. |
+| 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config. |
+| 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. |
+| 429 | Too many pending requests | Your model is getting more requests than it can handle. We allow a maximum of 2 * `max_concurrent_requests_per_instance` * `instance_count` requests in parallel at any time. Additional requests are rejected. You can confirm these settings in your model deployment config under `request_settings` and `scale_settings`, respectively. If you're using auto-scaling, your model is getting requests faster than the system can scale up. With auto-scaling, you can try to resend requests with [exponential backoff](https://aka.ms/exponential-backoff), as in the sketch after this table. Doing so can give the system time to adjust. Apart from enabling auto-scaling, you could also increase the number of instances by using the [code](#how-to-prevent-503-status-codes) below. |
+| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints. |
+| 500 | Internal server error | AzureML-provisioned infrastructure is failing. |
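The following hedged sketch shows the exponential backoff pattern mentioned in the 429 row; the scoring URL, key, and payload are placeholders:

```python
import time

import requests


def score_with_backoff(url, api_key, payload, max_retries=5):
    """Retry a scoring request with exponential backoff on HTTP 429."""
    delay = 1.0
    response = None
    for _ in range(max_retries):
        response = requests.post(
            url,
            json=payload,
            headers={"Authorization": f"Bearer {api_key}"},
        )
        if response.status_code != 429:
            break
        time.sleep(delay)  # give the system time to scale up
        delay *= 2
    return response
```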
Below are common error codes when consuming Kubernetes online endpoints with REST requests:
-| Status code| Reason phrase | Why this code might get returned |
-| | | |
-| 409 | Conflict error | When an operation is already in progress, any new operation on that same online endpoint will respond with 409 conflict error. For example, If create or update online endpoint operation is in progress and if you trigger a new Delete operation it will throw an error. |
-| 502 | Has thrown an exception or crashed in the `run()` method of the score.py file | When there's an error in `score.py`, for example an imported package does not exist in the conda environment, a syntax error, or a failure in the `init()` method. You can follow [here](#error-resourcenotready) to debug the file. |
-| 503 | Receive large spikes in requests per second | The autoscaler is designed to handle gradual changes in load. If you receive large spikes in requests per second, clients may receive an HTTP status code 503. Even though the autoscaler reacts quickly, it takes AKS a significant amount of time to create more containers. You can follow [here](#how-to-prevent-503-status-codes) to prevent 503 status codes.|
-| 504 | Request has timed out | A 504 status code indicates that the request has timed out. The default timeout is 1 minute. You can increase the timeout or try to speed up the endpoint by modifying the score.py to remove unnecessary calls. If these actions don't correct the problem, you can follow [here](#error-resourcenotready) to debug the score.py file. The code may be in a non-responsive state or an infinite loop. |
-| 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
+| Status code | Reason phrase | Why this code might get returned |
+| -- | -- | |
+| 409 | Conflict error | When an operation is already in progress, any new operation on that same online endpoint responds with a 409 conflict error. For example, if a create or update operation is in progress on an online endpoint and you trigger a new delete operation, an error is thrown. |
+| 502 | Has thrown an exception or crashed in the `run()` method of the score.py file | When there's an error in `score.py`, for example an imported package that doesn't exist in the conda environment, a syntax error, or a failure in the `init()` method. See the [ERROR: ResourceNotReady](#error-resourcenotready) section to debug the file. |
+| 503 | Receive large spikes in requests per second | The autoscaler is designed to handle gradual changes in load. If you receive large spikes in requests per second, clients may receive an HTTP status code 503. Even though the autoscaler reacts quickly, it takes AKS a significant amount of time to create more containers. See [How to prevent 503 status codes](#how-to-prevent-503-status-codes). |
+| 504 | Request has timed out | A 504 status code indicates that the request has timed out. The default timeout is 1 minute. You can increase the timeout or try to speed up the endpoint by modifying score.py to remove unnecessary calls. If these actions don't correct the problem, see the [ERROR: ResourceNotReady](#error-resourcenotready) section to debug the score.py file. The code may be in a non-responsive state or an infinite loop. |
+| 500 | Internal server error | Azure ML-provisioned infrastructure is failing. |
### How to prevent 503 status codes
instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance
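As a worked instance of the sizing formula above, with illustrative numbers:

```python
from math import ceil

concurrent_requests = 120                  # expected peak concurrency (example)
max_concurrent_requests_per_instance = 10  # from request_settings (example)

instance_count = ceil(concurrent_requests / max_concurrent_requests_per_instance)
print(instance_count)  # 12 instances needed to absorb the peak
```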
Online endpoints (v2) currently do not support [Cross-Origin Resource Sharing](https://developer.mozilla.org/docs/Web/HTTP/CORS) (CORS) natively. If your web application tries to invoke the endpoint without proper handling of the CORS preflight requests, you'll see the following error message: ```
-Access to fetch at 'https://{your-endpoinnt-name}.{your-region}.inference.ml.azure.com/score' from origin http://{your-url} has been blocked by CORS policy: Response to preflight request doesn't pass access control check. No 'Access-control-allow-origin' header is present on the request resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with the CORS disabled.
+Access to fetch at 'https://{your-endpoint-name}.{your-region}.inference.ml.azure.com/score' from origin http://{your-url} has been blocked by CORS policy: Response to preflight request doesn't pass access control check. No 'Access-control-allow-origin' header is present on the request resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with the CORS disabled.
``` We recommend that you use Azure Functions, Azure Application Gateway, or any other service as an interim layer to handle CORS preflight requests.+ ## Common network isolation issues [!INCLUDE [network isolation issues](../../includes/machine-learning-online-endpoint-troubleshooting.md)]
machine-learning How To Use Managed Online Endpoint Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md
To use the monitoring tab, you must select "**Enable Application Insight diagnos
:::image type="content" source="media/how-to-create-managed-online-endpoint-studio/monitor-endpoint.png" lightbox="media/how-to-create-managed-online-endpoint-studio/monitor-endpoint.png" alt-text="A screenshot of monitoring endpoint-level metrics in the studio.":::
-For more information on how viewing other monitors and alerts, see [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md).
+For more information on viewing other monitors and alerts, see [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md).
+
+### Deployment logs
+
+You can get logs from the containers that are running on the VM where the model is deployed. The amount of information you get depends on the provisioning status of the deployment. If the specified container is up and running, you'll see its console output; otherwise, you'll get a message to try again later.
+
+Use the **Deployment logs** tab in the endpoint's details page to see log output from a container.
+
+1. Select the **Deployment logs** tab in the endpoint's details page.
+1. Use the dropdown to select the deployment whose log you want to see.
++
+The logs are pulled from the inference server. Logs include the console log (from the inference server), which contains print/log statements from your scoring script (`score.py`).
+
+To get logs from the storage initializer container, use the Azure CLI or Python SDK. These logs contain information on whether code and model data were successfully downloaded to the container. See the [get container logs section in troubleshooting online endpoints deployment](how-to-troubleshoot-online-endpoints.md#get-container-logs).
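For reference, a minimal sketch of the equivalent Python SDK (v2) call, with placeholder names:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# container_type selects the storage initializer instead of the inference server.
logs = ml_client.online_deployments.get_logs(
    name="<deployment-name>",
    endpoint_name="<endpoint-name>",
    lines=100,
    container_type="storage-initializer",
)
print(logs)
```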
## Add a deployment to a managed online endpoint
managed-instance-apache-cassandra Configure Hybrid Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
This quickstart demonstrates how to use the Azure CLI commands to configure a hybrid cluster. If you have existing datacenters in an on-premises or self-hosted environment, you can use Azure Managed Instance for Apache Cassandra to add other datacenters to that cluster and maintain them. * This article requires the Azure CLI version 2.30.0 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
managed-instance-apache-cassandra Create Cluster Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-cli.md
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
This quickstart demonstrates how to use the Azure CLI commands to create a cluster with Azure Managed Instance for Apache Cassandra. It also shows how to create a datacenter and scale nodes up or down within the datacenter. * [Azure Virtual Network](../virtual-network/virtual-networks-overview.md) with connectivity to your self-hosted or on-premises environment. For more information on connecting on-premises environments to Azure, see the [Connect an on-premises network to Azure](/azure/architecture/reference-architectures/hybrid-networking/) article.
managed-instance-apache-cassandra Create Multi Region Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-multi-region-cluster.md
Azure Managed Instance for Apache Cassandra provides automated deployment and sc
This quickstart demonstrates how to use the Azure CLI commands to configure a multi-region cluster in Azure. * This article requires the Azure CLI version 2.30.0 or higher. If you're using Azure Cloud Shell, the latest version is already installed.
managed-instance-apache-cassandra Manage Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/manage-resources-cli.md
keywords: azure resource manager cli
This article describes common commands to automate the management of your Azure Managed Instance for Apache Cassandra clusters using Azure CLI. > [!IMPORTANT] > This article requires the Azure CLI version 2.30.0 or higher. If you are using Azure Cloud Shell, the latest version is already installed.
mariadb Howto Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-auto-grow-storage-cli.md
To complete this guide:
- You need an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mariadb Howto Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-audit-logs-cli.md
To complete this guide:
- You need an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-portal.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mariadb Howto Configure Privatelink Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-configure-privatelink-cli.md
A Private Endpoint is the fundamental building block for private link in Azure.
- You need an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md). - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mariadb Howto Manage Vnet Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-manage-vnet-cli.md
Virtual Network (VNet) services endpoints and rules extend the private address s
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - You need an [Azure Database for MariaDB server and database](quickstart-create-mariadb-server-database-using-azure-cli.md).
mariadb Howto Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restart-server-cli.md
To complete this how-to guide:
- You need an [Azure Database for MariaDB server](quickstart-create-mariadb-server-database-using-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mariadb Howto Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/howto-restore-server-cli.md
Azure Database for MariaDB servers are backed up periodically to enable Restore
- You need an [Azure Database for MariaDB server and database](quickstart-create-mariadb-server-database-using-azure-cli.md). - This how-to guide requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mariadb Quickstart Create Mariadb Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/quickstart-create-mariadb-server-database-using-azure-cli.md
You can use the Azure CLI to create and manage Azure resources from the command
If you don't have an Azure subscription, create a [free](https://azure.microsoft.com/free/) account before you begin. - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mariadb Sample Scripts Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/sample-scripts-azure-cli.md
You can configure Azure Database for MariaDB by using the <a href="/cli/azur
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] ## Samples
mariadb Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-change-server-configuration.md
This sample CLI script lists all available configuration parameters as well as t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mariadb Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-and-firewall-rule.md
This sample CLI script creates an Azure Database for MariaDB server and configur
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mariadb Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-create-server-with-vnet-rule.md
This sample CLI script creates an Azure Database for MariaDB server and configur
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mariadb Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-point-in-time-restore.md
This sample CLI script restores a single Azure Database for MariaDB server to a
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mariadb Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-scale-server.md
This sample CLI script scales compute and storage for a single Azure Database fo
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mariadb Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/scripts/sample-server-logs.md
This sample CLI script enables and downloads the slow query logs of a single Azu
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mariadb Tutorial Design Database Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/tutorial-design-database-cli.md
Azure Database for MariaDB is a relational database service in the Microsoft clo
If you don't have an Azure subscription, create a [free Azure account](https://azure.microsoft.com/free/) before you begin. - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
marketplace Azure App Metered Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/azure-app-metered-billing.md
When it comes to defining the offer along with its pricing models, it is importa
* Each managed application plan has a pricing model associated with it. * Pricing model has a monthly recurring fee, which can be set to $0. * In addition to the recurring fee, the plan can also include optional dimensions used to charge customers for usage not included in the flat rate. Each dimension represents a billable unit that your service will communicate to Microsoft using the [Marketplace metering service API](marketplace-metering-service-apis.md).
+> [!IMPORTANT]
+> You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee.
-* > [!IMPORTANT]
- > You must keep track of the usage in your code and only send usage events to Microsoft for the usage that is above the base fee.
-
- > [!Note]
-> Offers will be billed to customers in the customersΓÇÖ agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the Foreign Exchange rates at the time the customer transacts the offer. Learn more on ["How we convert currency?"](./marketplace-geo-availability-currencies.md).
+> [!Note]
+> Offers will be billed to customers in the customers' agreement currency, using the local market price that was published at the time the offer was created. The amount that customers pay, and that ISVs are paid, depends on the foreign exchange rates at the time the customer transacts the offer. Learn more at ["How we convert currency?"](/azure/marketplace/marketplace-geo-availability-currencies#how-we-convert-currency).
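As a hedged illustration only, a usage event for overage might be posted to the [Marketplace metering service API](marketplace-metering-service-apis.md) along these lines (all identifiers and the token are placeholders):

```python
from datetime import datetime, timezone

import requests

usage_event = {
    "resourceId": "<managed-app-resource-id>",  # placeholder
    "quantity": 5.0,                            # usage above the base fee only
    "dimension": "<dimension-id>",              # placeholder
    "effectiveStartTime": datetime.now(timezone.utc).isoformat(),
    "planId": "<plan-id>",                      # placeholder
}

response = requests.post(
    "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31",
    json=usage_event,
    headers={"Authorization": "Bearer <access-token>"},  # placeholder token
)
print(response.status_code)
```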
## Sample offer As an example, Contoso is a publisher with a managed application service called Contoso Analytics (CoA). CoA allows customers to analyze large amounts of data for reporting and data warehousing. Contoso is registered as a publisher in Partner Center for the commercial marketplace program to publish offers to Azure customers. There are two plans associated with CoA, outlined below:
Follow the instruction in [Support for the commercial marketplace program in Par
**Video tutorial** -- [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310)
+- [Metered Billing for Azure Managed Applications Overview](https://go.microsoft.com/fwlink/?linkid=2196310)
++
marketplace Isv Customer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/isv-customer.md
Create and manage private offers from the **Private offers** dashboard in Partne
Use this page to define private offer terms, notification contacts, and pricing for your customer. -- **Customer Information** – Specify the billing account for the customer receiving this private offer. This will only be available to the configured customer billing account, and the customer will need to be an owner, contributor, or signatory on the billing account to accept the offer.+ > [!NOTE]
- > Customers can find their billing account ID in 2 ways. 1) In the [Azure portal](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** > **Properties** >**ID**. A user in the customer organization should have access to the billing account to see the ID in Azure Portal. 2) If the customer knows the subscription they plan to use for the purchase, click on **Subscriptions**, click on the relevant subscription **Properties** (or Billing Properties) **Billing Account ID**. See [Billing account scopes in the Azure portal](/azure/cost-management-billing/manage/view-all-accounts).
+ > Customers can find their billing account ID in 2 ways. 1) In the [Azure portal](https://aka.ms/PrivateOfferAzurePortal) under **Cost Management + Billing** > **Properties** > **ID**. A user in the customer organization should have access to the billing account to see the ID in Azure Portal. 2) If the customer knows the subscription they plan to use for the purchase, click on **Subscriptions**, click on the relevant subscription > **Properties** (or Billing Properties) > **Billing Account ID**. See [Billing account scopes in the Azure portal](/azure/cost-management-billing/manage/view-all-accounts).
:::image type="content" source="media/isv-customer/customer-properties.png" alt-text="Shows the offer Properties tab in Partner Center.":::
The payout amount and agency fee that Microsoft charges is based on the private
+
mysql Sample Cli Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-audit-logs.md
This sample CLI script enables [audit logs](../concepts-audit-logs.md) on an Azu
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql Sample Cli Change Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-change-server-parameters.md
This sample CLI script lists all available [server parameters](../concepts-serve
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Create Connect Private Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-private-access.md
This sample CLI script creates an Azure Database for MySQL - Flexible Server in
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Create Connect Public Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-create-connect-public-access.md
Once the script runs successfully, the MySQL Flexible Server will be accessible
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Monitor And Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-monitor-and-scale.md
This sample CLI script scales compute, storage and IOPS for a single Azure Datab
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Read Replicas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-read-replicas.md
This sample CLI script creates and manages [read replicas](../concepts-read-repl
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Restart Stop Start https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restart-stop-start.md
Also, see [stop/start limitations](../concepts-limitations.md#stopstart-operatio
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Restore Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-restore-server.md
The new Flexible Server is created with the original server's configuration and
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Same Zone Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-same-zone-ha.md
Currently, Same-Zone high availability is supported only for the General purpose
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Slow Query Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-slow-query-logs.md
This sample CLI script configures [slow query logs](../concepts-slow-query-logs.
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Sample Cli Zone Redundant Ha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/scripts/sample-cli-zone-redundant-ha.md
Currently, Zone-Redundant high availability is supported only for the General pu
[!INCLUDE [quickstarts-free-trial-note](../../includes/flexible-server-free-trial-note.md)] ## Sample script
mysql Tutorial Deploy Wordpress On Aks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/tutorial-deploy-wordpress-on-aks.md
In this quickstart, you deploy a WordPress application on Azure Kubernetes Servi
[!INCLUDE [flexible-server-free-trial-note](../includes/flexible-server-free-trial-note.md)] - This article requires the latest version of Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-change-server-configuration.md
This sample CLI script lists all available configuration parameters as well as t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mysql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-create-server-and-firewall-rule.md
This sample CLI script creates an Azure Database for MySQL server and configures
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mysql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-point-in-time-restore.md
This sample CLI script restores a single Azure Database for MySQL server to a pr
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mysql Sample Scale Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-scale-server.md
This sample CLI script scales compute and storage for a single Azure Database fo
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mysql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/scripts/sample-server-logs.md
This sample CLI script enables and downloads the slow query logs of a single Azu
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
mysql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-auto-grow-storage-cli.md
To complete this how-to guide:
- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql How To Configure Audit Logs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-audit-logs-cli.md
To step through this how-to guide:
- You need an [Azure Database for MySQL server](quickstart-create-mysql-server-database-using-azure-portal.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql How To Configure Private Link Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-configure-private-link-cli.md
A Private Endpoint is the fundamental building block for private link in Azure.
> [!NOTE] > The private link feature is only available for Azure Database for MySQL servers in the General Purpose or Memory Optimized pricing tiers. Ensure the database server is in one of these pricing tiers. - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-manage-vnet-using-cli.md
Virtual Network (VNet) services endpoints and rules extend the private address s
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] > [!NOTE] > Support for VNet service endpoints is only for General Purpose and Memory Optimized servers.
mysql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restart-server-cli.md
To complete this how-to guide:
- You need an [Azure Database for MySQL server](quickstart-create-server-up-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/how-to-restore-server-cli.md
To complete this how-to guide:
- You need an [Azure Database for MySQL server and database](quickstart-create-mysql-server-database-using-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql Quickstart Create Mysql Server Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/quickstart-create-mysql-server-database-using-azure-cli.md
This quickstart shows how to use the [Azure CLI](/cli/azure/get-started-with-azu
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
mysql Tutorial Design Database Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/tutorial-design-database-using-cli.md
Azure Database for MySQL is a relational database service in the Microsoft cloud
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
In this article, you deploy a virtual machine (VM), and then check communication
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
network-watcher Diagnose Vm Network Traffic Filtering Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-cli.md
In this quickstart, you deploy a virtual machine (VM) and then check communicati
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0 or later of the Azure CLI. If you are using Azure Cloud Shell, the latest version is already installed.
notification-hubs Configure Notification Hub Portal Pns Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/configure-notification-hub-portal-pns-settings.md
When you complete these steps, an alert indicates that the notification hub has
You will need the **API Key** for your Google Firebase Cloud Messaging (FCM) project. - This article requires version 2.0.67 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
notification-hubs Create Notification Hub Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/create-notification-hub-azure-cli.md
In this quickstart, you create a notification hub using the Azure CLI. The first
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. > [!IMPORTANT] > Notification Hubs requires version 2.0.67 or later of the Azure CLI. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
partner-solutions Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/create-cli.md
After you've selected the offer for Apache Kafka on Confluent Cloud, you're read
Start by preparing your environment for the Azure CLI: After you sign in, use the [az confluent organization create](/cli/azure/confluent/organization#az-confluent-organization-create) command to create the new organization resource:
partner-solutions Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/manage.md
To delete the resources in Azure:
Start by preparing your environment for the Azure CLI: After you sign in, use the [az confluent organization delete](/cli/azure/confluent#az-confluent-organization-delete) command to delete the organization resource by name:
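For example, where the organization and resource group names are placeholders:

```azurecli-interactive
# Delete the Confluent organization resource by name.
az confluent organization delete --name "myOrganization" --resource-group "myResourceGroup"
```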
payment-hsm Create Different Ip Addresses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-ip-addresses.md
This article describes how to create a payment HSM with the host and management
You can continue with this quick start if all four of these commands return "Registered". - You must have an Azure subscription. You can [create a free account](https://azure.microsoft.com/free/) if you don't have one. ## Review the template
payment-hsm Create Different Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-different-vnet.md
This article describes how to create a payment HSM with the host and management
You can continue with this quick start if all four of these commands return "Registered". - You must have an Azure subscription. You can [create a free account](https://azure.microsoft.com/free/) if you don't have one. ## Review the template
payment-hsm Create Payment Hsm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/create-payment-hsm.md
In this tutorial, you learn how to:
You can continue with this quick start if all four of these commands return "Registered". - You must have an Azure subscription. You can [create a free account](https://azure.microsoft.com/free/) if you don't have one. # [Azure PowerShell](#tab/azure-powershell)
payment-hsm Quickstart Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-cli.md
This article describes how to create, update, and delete an Azure Payment HSM by
az account set --subscription <subscription-id> ``` ## Create a resource group
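A resource group can then be created with [az group create](/cli/azure/group#az-group-create); the group name and location below are placeholders:

```azurecli-interactive
# Create a resource group to hold the payment HSM resources.
az group create --name "myResourceGroup" --location "eastus"
```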
payment-hsm Quickstart Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/payment-hsm/quickstart-template.md
This article describes how to create a payment HSM with the host and management
You can continue with this quick start if all four of these commands return "Registered". - You must have an Azure subscription. You can [create a free account](https://azure.microsoft.com/free/) if you don't have one. ## Review the template
peering-service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/azure-portal.md
Title: Create Azure Peering Service connection - Azure portal
-description: Learn how to create, configure, and delete an Azure Peering Service connection using the Azure portal
+ Title: Create, change, or delete a Peering Service connection - Azure portal
+description: Learn how to create, change, or delete a Peering Service connection using the Azure portal
Previously updated : 01/12/2023 Last updated : 01/13/2023
-# Create Peering Service connection using the Azure portal
+# Create, change, or delete a Peering Service connection using the Azure portal
> [!div class="op_single_selector"] > * [Portal](azure-portal.md) > * [PowerShell](powershell.md) > * [Azure CLI](cli.md)
-Azure Peering Service is a networking service that enhances customer connectivity to Microsoft public cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet.
+Azure Peering Service is a networking service that enhances connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet.
-In this article, you'll learn how to Create a Peering Service connection by using the Azure portal.
+In this article, you'll learn how to create, change, and delete a Peering Service connection using the Azure portal.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. ## Prerequisites -- An Azure subscription
+- An Azure subscription.
-- A connectivity provider. For more information, see [Azure peering service partners](./location-partners.md).
+- A connectivity provider. For more information, see [Peering Service partners](./location-partners.md).
## Sign in to Azure
-Sign in to the [Azure portal](https://portal.azure.com)
+Sign in to the [Azure portal](https://portal.azure.com).
## Create a Peering Service connection
Sign in to the [Azure portal](https://portal.azure.com)
1. Select **+ Create**.
-1. In **Create a peering service connection**, enter or select the following information in the **Basics** tab:
+1. In **Create a peering service connection**, enter or select the following information on the **Basics** page:
| Setting | Value | | - | -- |
Sign in to the [Azure portal](https://portal.azure.com)
:::image type="content" source="./media/azure-portal/peering-service-basics.png" alt-text="Screenshot of the Basics tab of Create a peering service connection in Azure portal."::: > [!NOTE]
- > Once a Peering Service resource is created under a certain subscription and resource group, it cannot be moved to another resource group or subscription.
+ > Once a Peering Service resource is created under a certain subscription and resource group, it cannot be moved to another subscription or resource group.
1. Select **Next: Configuration**.
Sign in to the [Azure portal](https://portal.azure.com)
1. On the **Configuration** page, select your **Country** and **State/Province** where the Peering Service must be enabled.
-1. Select the **Provider** that you're using to enable the Peering Service.
+1. Select the **Provider** that you're using to enable the Peering Service. For more information, see [Peering Service partners](./location-partners.md).
1. Select the **provider primary peering location** closest to your network location. This is the peering service location between Microsoft and the Partner.
peering-service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/cli.md
You can work with an internet service provider or internet exchange partner to o
Make sure that the connectivity providers are partnered with Microsoft. - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
peering-service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/powershell.md
Title: 'Register a Peering Service connection - Azure PowerShell '
-description: In this tutorial learn how to register a Peering Service connection with PowerShell.
+ Title: Create or change a Peering Service connection - Azure PowerShell
+description: Learn how to create or change a Peering Service connection using PowerShell.
- Previously updated : 05/18/2020+ Last updated : 01/13/2023 -
-# Customer intent: Customer wants to measure their connection telemetry per prefix to Microsoft services with Azure Peering Service .
+
-# Tutorial: Register a Peering Service connection using Azure PowerShell
+# Create or change a Peering Service connection using PowerShell
-In this tutorial, you'll learn how to register Peering Service using Azure PowerShell.
+> [!div class="op_single_selector"]
+> * [Portal](azure-portal.md)
+> * [PowerShell](powershell.md)
+> * [Azure CLI](cli.md)
-Azure Peering Service is a networking service that enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. In this article, you'll learn how to register a Peering Service connection by using Azure PowerShell.
+Azure Peering Service is a networking service that enhances connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet.
-If you don't have an Azure subscription, create an [account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) now.
+In this article, you'll learn how to create and change a Peering Service connection using PowerShell.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
Finally, if you're running PowerShell locally, you'll also need to run `Connect-
Use the Azure PowerShell module to register and manage Peering Service. You can register or manage Peering Service from the PowerShell command line or in scripts.
+## Prerequisites
-## Prerequisites
-You must have the following:
-
-### Azure account
-
-You must have a valid and active Microsoft Azure account. This account is required to set up the Peering Service connection. Peering Service is a resource within Azure subscriptions.
+- An Azure subscription.
-### Connectivity provider
+- A connectivity provider. For more information, see [Peering Service partners](./location-partners.md).
-You can work with an internet service provider or internet exchange partner to obtain Peering Service to connect your network with the Microsoft network.
+## Register your subscription with the resource provider and feature flag
-Make sure that the connectivity providers are partnered with Microsoft.
-
-### Register a subscription with the resource provider and feature flag
-
-Before you proceed to the steps of registering Peering Service, register your subscription with the resource provider and feature flag by using Azure PowerShell. The Azure PowerShell commands are specified here:
+Before you create a Peering Service connection, register your subscription with the resource provider and feature flag using [Register-AzResourceProvider](/powershell/module/az.resources/register-azresourceprovider) and [Register-AzProviderFeature](/powershell/module/az.resources/register-azproviderfeature):
```azurepowershell-interactive
-Register-AzProviderFeature -FeatureName AllowPeeringService ProviderNamespace Microsoft.Peering
-
-Register-AzResourceProvider -ProviderNamespace Microsoft.Peering
-
+# Register Microsoft.Peering provider.
+Register-AzResourceProvider -ProviderNamespace Microsoft.Peering
+# Register AllowPeeringService feature.
+Register-AzProviderFeature -FeatureName AllowPeeringService -ProviderNamespace Microsoft.Peering
```
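Feature registration can take a few minutes. You can check its state with [Get-AzProviderFeature](/powershell/module/az.resources/get-azproviderfeature) before continuing:

```azurepowershell-interactive
# Check the registration state of the AllowPeeringService feature.
Get-AzProviderFeature -FeatureName AllowPeeringService -ProviderNamespace Microsoft.Peering
```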
-### Fetch the location and service provider
-
-Run the following commands in Azure PowerShell to acquire the location and service provider to which the Peering Service should be enabled.
+## List Peering Service locations and service providers
-Get Peering Service locations:
+Use [Get-AzPeeringServiceCountry](/powershell/module/az.peering/get-azpeeringservicecountry) to list the countries where Peering Service is available and [Get-AzPeeringServiceLocation](/powershell/module/az.peering/get-azpeeringservicelocation) to list the available metro locations in each country where you can get the Peering Service:
```azurepowershell-interactive
-# Gets a list of available countries
+# List the countries available for Peering Service.
Get-AzPeeringServiceCountry
-# Gets a list of metro locations serviced by country
+# List metro locations serviced in a country.
Get-AzPeeringServiceLocation -Country "United States" ```
-Get Peering Service providers:
+Use [Get-AzPeeringServiceProvider](/powershell/module/az.peering/get-azpeeringserviceprovider) to get a list of available [Peering Service providers](location-partners.md):
```azurepowershell-interactive Get-AzPeeringServiceProvider ```
-### Register the Peering Service connection
+## Create a Peering Service connection
-Register the Peering Service connection by using the following set of commands via Azure PowerShell. This example registers the Peering Service named myPeeringService.
+Create a Peering Service connection using [New-AzPeeringService](/powershell/module/az.peering/new-azpeeringservice):
```azurepowershell-interactive
-$loc = "Washington"
-$provider = "TestPeer1"
-$resourceGroup = "MyResourceGroup"
-$name = “myPeeringService”
-$peeringService = New-AzPeeringService -ResourceGroupName $resourceGroup -Name $name -PeeringLocation $loc -PeeringServiceProvider $provider
+New-AzPeeringService -ResourceGroupName myResourceGroup -Name myPeeringService -PeeringLocation Virginia -PeeringServiceProvider Contoso
```
-### Register the Peering Service prefix
+## Add the Peering Service prefix
-Register the prefix that's provided by the connectivity provider by executing the following commands via Azure PowerShell. This example registers the prefix named myPrefix.
+Use [New-AzPeeringServicePrefix](/powershell/module/az.peering/new-azpeeringserviceprefix) to add the prefix provided to you by the connectivity provider:
```azurepowershell-interactive
-$loc = "Washington"
-$provider = "TestPeer1"
-$resourceGroup = "MyResourceGroup"
-$name = “myPeeringService”
-$peeringService = New-AzPeeringService -ResourceGroupName $resourceGroup -Name $name -PeeringLocation $loc -PeeringServiceProvider $provider
-$prefixName = "myPrefix"
-$prefix = “192.168.1.0/24”
-$serviceKey = "6f48cdd6-2c2e-4722-af89-47e27b2513af"
-$prefixService = $peeringService | New-AzPeeringServicePrefix -Name $prefixName -Prefix $prefix -ServiceKey $serviceKey
+New-AzPeeringServicePrefix -ResourceGroupName myResourceGroup -PeeringServiceName myPeeringService -Name myPrefix -Prefix 240.0.0.0/32 -ServiceKey 00000000-0000-0000-0000-000000000000
```
-### List all the Peering Services connections
+## List all Peering Service connections
-To view the list of all Peering Services, run the following command:
+To view the list of all Peering Service connections, use [Get-AzPeeringService](/powershell/module/az.peering/get-azpeeringservice):
```azurepowershell-interactive
-$peeringService = Get-AzPeeringService
+Get-AzPeeringService | Format-Table Name, PeeringServiceLocation, PeeringServiceProvider, Location
```
-### List all the Peering Service prefixes
-
-To view the list of all Peering Service prefixes, run the following command:
+## List all Peering Service prefixes
-```azurepowershell-interactive
- $prefixName = "myPrefix"
-```
+To view the list of all Peering Service prefixes, use [Get-AzPeeringServicePrefix](/powershell/module/az.peering/get-azpeeringserviceprefix):
```azurepowershell-interactive
-$prefix = Get-AzPeeringServicePrefix -PeeringServiceName "myPeeringService" -ResourceGroupName "MyResourceGroup" -Name "myPrefix"
+Get-AzPeeringServicePrefix -PeeringServiceName myPeeringService -ResourceGroupName myResourceGroup
```
-### Remove the Peering Service prefix
+## Remove the Peering Service prefix
-To remove the Peering Service prefix, run the following command:
+To remove the Peering Service prefix, use [Remove-AzPeeringServicePrefix](/powershell/module/az.peering/remove-azpeeringserviceprefix):
```azurepowershell-interactive
-Remove-AzPeeringServicePrefix -ResourceGroupName "MyResourceGroup" -Name "myPrefix" -PeeringServiceName "myPeeringService"
+Remove-AzPeeringServicePrefix -ResourceGroupName myResourceGroup -PeeringServiceName myPeeringService -Name myPrefix
``` ## Next steps -- To learn about Peering Service connection, see [Peering Service connection](connection.md).-- To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).-- To register a Peering Service connection by using the Azure portal, see [Register a Peering Service connection - Azure portal](azure-portal.md).-- To register a Peering Service connection by using the Azure CLI, see [Register a Peering Service connection - Azure CLI](cli.md).
+- To learn more about Peering Service connection, see [Peering Service connection](connection.md).
+- To learn more about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
+- To measure Peering Service connection telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
postgresql Howto Configure Server Parameters Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/howto-configure-server-parameters-using-portal.md
Last updated 11/30/2021
[!INCLUDE [applies-to-postgresql-flexible-server](../includes/applies-to-postgresql-flexible-server.md)]
-You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server through the Azure portal.
+You can list, show, and update configuration parameters for an Azure Database for PostgreSQL server through the Azure portal. In addition, you can select the **Server Parameter Tabs** to easily view parameters grouped as **Modified**, **Static**, **Dynamic**, and **Read-Only**.
## Prerequisites To step through this how-to guide you need:
To step through this how-to guide you need:
2. Select your Azure Database for PostgreSQL server. 3. Under the **SETTINGS** section, select **Server parameters**. The page shows a list of parameters, their values, and descriptions. 4. Select the **drop down** button to see the possible values for enumerated-type parameters like client_min_messages. 5. Select or hover over the **i** (information) button to see the range of possible values for numeric parameters like cpu_index_tuple_cost. 6. If needed, use the **search box** to narrow down to a specific parameter. The search is on the name and description of the parameters. 7. Change the parameter values you would like to adjust. All changes you make in a session are highlighted in purple. Once you have changed the values, you can select **Save**. Or you can **Discard** your changes.
-8. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**.
+8. To list all the parameters that have been modified from their _default_ values, select the **Modified** tab.
+
+9. If you have saved new values for the parameters, you can always revert everything back to the default values by selecting **Reset all to default**.
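If you prefer scripting over the portal, the same kind of change can be made with the Azure CLI; a minimal sketch, assuming a flexible server named `mydemoserver` and the `log_min_duration_statement` parameter:

```azurecli-interactive
# Show the current value of a server parameter.
az postgres flexible-server parameter show --resource-group myresourcegroup --server-name mydemoserver --name log_min_duration_statement

# Update the parameter value (here, log statements slower than 1000 ms).
az postgres flexible-server parameter set --resource-group myresourcegroup --server-name mydemoserver --name log_min_duration_statement --value 1000
```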
## Working with time zone parameters If you plan to work with date and time data in PostgreSQL, you’ll want to ensure that you’ve set the correct time zone for your location. All timezone-aware dates and times are stored internally in PostgreSQL in UTC. They are converted to local time in the zone specified by the **TimeZone** server parameter before being displayed to the client. This parameter can be edited on the **Server parameters** page as explained above.
postgresql Sample Change Server Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-change-server-configuration.md
This sample CLI script lists all available configuration parameters as well as t
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
postgresql Sample Create Server And Firewall Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-and-firewall-rule.md
This sample CLI script creates an Azure Database for PostgreSQL server and confi
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
postgresql Sample Create Server With Vnet Rule https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-create-server-with-vnet-rule.md
This sample CLI script creates an Azure Database for PostgreSQL server and confi
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
postgresql Sample Point In Time Restore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-point-in-time-restore.md
This sample CLI script restores a single Azure Database for PostgreSQL server to
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
postgresql Sample Scale Server Up Or Down https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-scale-server-up-or-down.md
This sample CLI script scales compute and storage for a single Azure Database fo
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
postgresql Sample Server Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/scripts/sample-server-logs.md
This sample CLI script enables and downloads the slow query logs of a single Azu
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
postgresql How To Auto Grow Storage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-auto-grow-storage-cli.md
The server [reaching the storage limit](./concepts-pricing-tiers.md#reaching-the
- You need an [Azure Database for PostgreSQL server](quickstart-create-server-database-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
postgresql How To Manage Vnet Using Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-manage-vnet-using-cli.md
Virtual Network (VNet) services endpoints and rules extend the private address s
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] > [!NOTE] > Support for VNet service endpoints is only for General Purpose and Memory Optimized servers. In case of VNet peering, if traffic is flowing through a common VNet Gateway with service endpoints and is supposed to flow to the peer, please create an ACL/VNet rule to allow Azure Virtual Machines in the Gateway VNet to access the Azure Database for PostgreSQL server.
postgresql How To Restart Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restart-server-cli.md
The server restart will be blocked if the service is busy. For example, the serv
To complete this how-to guide: - Create an [Azure Database for PostgreSQL server](quickstart-create-server-up-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
postgresql How To Restore Server Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/how-to-restore-server-cli.md
To complete this how-to guide:
- You need an [Azure Database for PostgreSQL server and database](quickstart-create-server-database-azure-cli.md). - This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
postgresql Quickstart Create Server Database Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-database-azure-cli.md
This quickstart shows how to use [Azure CLI](/cli/azure/get-started-with-azure-c
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../../includes/cli-launch-cloud-shell-sign-in.md)]
postgresql Quickstart Create Server Up Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/quickstart-create-server-up-azure-cli.md
Azure Database for PostgreSQL is a managed service that enables you to run, mana
## Create an Azure Database for PostgreSQL server [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../../includes/cli-launch-cloud-shell-sign-in.md)]
postgresql Tutorial Design Database Using Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/tutorial-design-database-using-azure-cli.md
In this tutorial, you use Azure CLI (command-line interface) and other utilities
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../../includes/cli-launch-cloud-shell-sign-in.md)]
private-link Create Private Link Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/create-private-link-service-cli.md
Get started creating a Private Link service that refers to your service. Give P
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
private-link Private Endpoint Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-link/private-endpoint-dns.md
For Azure services, use the recommended zone names as described in the following
| Azure Bot Service (Microsoft.BotService/botServices) / Token | privatelink.token.botframework.com | token.botframework.com </br> europe.token.botframework.com | | Azure Data Health Data Services (Microsoft.HealthcareApis/workspaces) / healthcareworkspace | workspace.privatelink.azurehealthcareapis.com </br> fhir.privatelink.azurehealthcareapis.com </br> dicom.privatelink.azurehealthcareapis.com | workspace.azurehealthcareapis.com </br> fhir.azurehealthcareapis.com </br> dicom.azurehealthcareapis.com |
-<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
+<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
>[!Note] >In the above text, `{region}` refers to the region code (for example, **eus** for East US and **ne** for North Europe). Refer to the following lists for regions codes:
For Azure services, use the recommended zone names as described in the following
| Azure HDInsight (Microsoft.HDInsight) | privatelink.azurehdinsight.cn | azurehdinsight.cn | | Azure Data Explorer (Microsoft.Kusto) | privatelink.{region}.kusto.windows.cn | {region}.kusto.windows.cn |
-<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hub-compatible-endpoint)
+<sup>1</sup>To use with IoT Hub's built-in Event Hub compatible endpoint. To learn more, see [private link support for IoT Hub's built-in endpoint](../iot-hub/virtual-network-support.md#built-in-event-hubs-compatible-endpoint)
## DNS configuration scenarios
public-multi-access-edge-compute-mec Quickstart Create Vm Azure Resource Manager Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/quickstart-create-vm-azure-resource-manager-template.md
In this quickstart, you learn how to use an Azure Resource Manager (ARM) templat
- Add an allowlisted subscription to your Azure account, which allows you to deploy resources in Azure public MEC. If you don't have an active allowed subscription, contact the [Azure public MEC product team](https://aka.ms/azurepublicmec). > [!NOTE] > Azure public MEC deployments are supported in Azure CLI versions 2.26 and later.
public-multi-access-edge-compute-mec Quickstart Create Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/quickstart-create-vm-cli.md
In this quickstart, you learn how to use Azure CLI to deploy a Linux virtual mac
- Add an allowlisted subscription to your Azure account, which allows you to deploy resources in Azure public MEC. If you don't have an active allowed subscription, contact the [Azure public MEC product team](https://aka.ms/azurepublicmec). > [!NOTE] > Azure public MEC deployments are supported in Azure CLI versions 2.26 and later.
public-multi-access-edge-compute-mec Tutorial Create Vm Using Python Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/public-multi-access-edge-compute-mec/tutorial-create-vm-using-python-sdk.md
In this tutorial, you learn how to:
- Set up Python in your local development environment by following the instructions at [Configure your local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment?tabs=cmd). Ensure you create a service principal for local development, and create and activate a virtual environment for this tutorial project. ## Install the required Azure library packages
reliability Reliability Energy Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-energy-data-services.md
Title: Resiliency in Microsoft Energy Data Services #Required; Must be "Resiliency in *your official service name*"
-description: Find out about reliability in Microsoft Energy Data Services #Required;
---- Previously updated : 12/05/2022 #Required; mm/dd/yyyy format.
+ Title: Reliability in Microsoft Energy Data Services
+description: Find out about reliability in Microsoft Energy Data Services
+++++ Last updated : 01/13/2023
-<!--#Customer intent: As a customer, I want to understand reliability support for Microsoft Energy Data Services so that I can respond to and/or avoid failures in order to minimize downtime and data loss. -->
-
-<!--
-
-Template for the main reliability article for Azure services.
-Keep the required sections and add/modify any content for any information specific to your service.
-This article should live in the reliability content area of azure-docs-pr.
-This article should be linked to in your TOC. Under a Reliability node or similar. The name of this page should be *reliability-Microsoft Energy Data Services.md* and the TOC title should be "Reliability in Microsoft Energy Data Services".
-Keep the headings in this order.
-
-This template uses comment pseudo code to indicate where you must choose between two options or more.
-
-Conditions are used in this document in the following manner and can be easily searched for:
>-
-<!-- IF (AZ SUPPORTED) -->
-<!-- some text -->
-<!-- END IF (AZ SUPPORTED)-->
-
-<!-- BEGIN IF (SLA INCREASE) -->
-<!-- some text -->
-<!-- END IF (SLA INCREASE) -->
-
-<!-- IF (SERVICE IS ZONAL) -->
-<!-- some text -->
-<!-- END IF (SERVICE IS ZONAL) -->
-
-<!-- IF (SERVICE IS ZONE REDUNDANT) -->
-<!-- some text -->
-<!-- END IF (SERVICE IS ZONAL) -->
-
-<!--
-
-IMPORTANT:
-- Do a search and replace of TODO-service-name with the name of your service. That will make the template easier to read.-- ALL sections are required unless noted otherwise.-- MAKE SURE YOU REMOVE ALL COMMENTS BEFORE PUBLISH!!!!!!!!->-
-<!-- 1. H1 --
-Required: Uses the format "What is reliability in X?"
-The "X" part should identify the product or service.
> # What is reliability in Microsoft Energy Data Services?
-<!-- 2. Introductory paragraph
-Required: Provide an introduction. Use the following placeholder as a suggestion, but elaborate.
>- This article describes reliability support in Microsoft Energy Data Services, and covers intra-regional resiliency with [availability zones](#availability-zone-support). For a more detailed overview of reliability in Azure, see [Azure reliability](../reliability/overview.md). ## Availability zone support
-<!-- IF (AZ SUPPORTED) -->
-Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If there's a local zone failure, availability zones are designed so that if the one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](/azure/availability-zones/az-overview.md).
+
+Azure availability zones are at least three physically separate groups of datacenters within each Azure region. Datacenters within each zone are equipped with independent power, cooling, and networking infrastructure. If there's a local zone failure, availability zones are designed so that if the one zone is affected, regional services, capacity, and high availability are supported by the remaining two zones. Failures can range from software and hardware failures to events such as earthquakes, floods, and fires. Tolerance to failures is achieved with redundancy and logical isolation of Azure services. For more detailed information on availability zones in Azure, see [Regions and availability zones](availability-zones-overview.md).
Microsoft Energy Data Services Preview supports zone-redundant instances by default, and no setup is required by the customer.
The Microsoft Energy Data Services Preview supports availability zones in the fo
| East US | West Europe | | | | ### Zone down experience
-During a zone-wide outage, no action is required during zone recovery. Customer may however experience brief degradation of performance, until the service self-heals and rebalances underlying capacity to adjust to healthy zones. Customers experiencing failures with Microsoft Energy Data Services APIs may need to be retried for 5XX errors.
+During a zone-wide outage, no action is required during zone recovery. There may be a brief degradation of performance until the service self-heals and re-balances underlying capacity to adjust to healthy zones.
+
+If you're experiencing failures with Microsoft Energy Data Services APIs, you may need to implement a retry mechanism for 5XX errors.
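A minimal retry sketch in PowerShell, assuming `$apiUri` and `$headers` are already set up for an authenticated Microsoft Energy Data Services API call; the retry count and backoff values are illustrative:

```azurepowershell-interactive
$maxRetries = 3
for ($attempt = 1; $attempt -le $maxRetries; $attempt++) {
    try {
        # Assumed variables: $apiUri and $headers carry the request target and auth token.
        $response = Invoke-RestMethod -Uri $apiUri -Headers $headers
        break
    }
    catch {
        $status = $_.Exception.Response.StatusCode.value__
        if ($status -ge 500 -and $attempt -lt $maxRetries) {
            # Back off exponentially before retrying a 5XX failure.
            Start-Sleep -Seconds ([math]::Pow(2, $attempt))
        }
        else {
            throw
        }
    }
}
```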
## Next steps > [!div class="nextstepaction"]
-> [Resiliency in Azure](availability-zones-overview.md))
+> [Reliability in Azure](availability-zones-overview.md)
resource-mover Tutorial Move Region Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/resource-mover/tutorial-move-region-powershell.md
If you don't have an Azure subscription, create a [free account](https://azure.m
Sign in to your Azure subscription with the Connect-AzAccount cmdlet: ```azurepowershell-interactive
-Connect-AzAccount – Subscription "<subscription-id>"
+Connect-AzAccount -Subscription "<subscription-id>"
``` ## Set up the move collection
After committing the move, and verifying that resources work as expected in the
## Next steps
-[Learn more](./tutorial-move-region-virtual-machines.md) about move Azure VMs in the portal.
+[Learn more](./tutorial-move-region-virtual-machines.md) about moving Azure VMs in the portal.
route-server Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/overview.md
Azure Route Server simplifies dynamic routing between your network virtual appli
## How does it work?
-The following diagram illustrates how Azure Route Server works with an SDWAN NVA and a security NVA in a virtual network. Once you’ve established the BGP peering, Azure Route Server will receive an on-premises route (10.250.0.0/16) from the SDWAN appliance and a default route (0.0.0.0/0) from the firewall. These routes are then automatically configured on the VMs in the virtual network. As a result, all traffic destined to the on-premises network will be sent to the SDWAN appliance. While all Internet-bound traffic will be sent to the firewall. In the opposite direction, Azure Route Server will send the virtual network address (10.1.0.0/16) to both NVAs. The SDWAN appliance can propagate it further to the on-premises network.
+The following diagram illustrates how Azure Route Server works with an SDWAN NVA and a security NVA in a virtual network. Once you’ve established the BGP peering, Azure Route Server will receive an on-premises route (10.250.0.0/16) from the SDWAN appliance and a default route (0.0.0.0/0) from the firewall. These routes are then automatically configured on the VMs in the virtual network. As a result, all traffic destined to the on-premises network will be sent to the SDWAN appliance, while all Internet-bound traffic will be sent to the firewall. In the opposite direction, Azure Route Server will send the virtual network address (10.1.0.0/16) to both NVAs. The SDWAN appliance can propagate it further to the on-premises network.
:::image type="content" source="./media/overview/route-server-overview.png" alt-text="Diagram showing Azure Route Server configured in a virtual network.":::
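As a rough illustration of the setup step, the BGP peering between Route Server and an NVA can be created with the Azure CLI; a sketch where the resource names, the NVA's IP address, and its ASN are placeholders:

```azurecli-interactive
# Peer an SDWAN NVA (IP 10.1.1.4, ASN 65001) with the Route Server.
az network routeserver peering create \
    --resource-group "myResourceGroup" \
    --routeserver "myRouteServer" \
    --name "mySDWANPeer" \
    --peer-ip "10.1.1.4" \
    --peer-asn 65001
```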
search Search Howto Aad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-aad.md
In this step, create a [managed identity](../active-directory/managed-identities
Next, you need to grant your managed identity access to your search service. Azure Cognitive Search has various [built-in roles](search-security-rbac.md#built-in-roles-used-in-search). You can also create a [custom role](search-security-rbac.md#create-a-custom-role).
-It's a best practice to grant minimum permissions. If your application only needs to handle queries, you should assign the [Search Index Data Reader (preview)](../role based-access-control/built-in-roles.md#search-index-data-reader) role. Alternatively, if it needs both read and write access on a search index, you should use the [Search Index Data Contributor (preview)](../role-based-access-control/built-in-roles.md#search-index-data-contributor) role.
+It's a best practice to grant minimum permissions. If your application only needs to handle queries, you should assign the [Search Index Data Reader (preview)](/azure/role-based-access-control/built-in-roles#search-index-data-reader) role. Alternatively, if it needs both read and write access on a search index, you should use the [Search Index Data Contributor (preview)](/azure/role-based-access-control/built-in-roles#search-index-data-contributor) role.
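For reference, a role assignment like this can also be scripted; a sketch with the Azure CLI, where the principal ID and search service resource ID are placeholders:

```azurecli-interactive
# Assign the least-privileged query role to the managed identity.
az role assignment create \
    --role "Search Index Data Reader" \
    --assignee-object-id "<managed-identity-principal-id>" \
    --assignee-principal-type ServicePrincipal \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<search-service>"
```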
1. Sign in to the [Azure portal](https://portal.azure.com).
The following instructions reference an existing C# sample to demonstrate the co
1. Instead of using `AzureKeyCredential` in the beginning of `Main()` in [Program.cs](https://github.com/Azure-Samples/azure-search-dotnet-samples/blob/master/quickstart/v11/AzureSearchQuickstart-v11/Program.cs), use `DefaultAzureCredential` like in the code snippet below:
- ```csharp
+ ```csharp
// Create a SearchIndexClient to send create/delete index commands SearchIndexClient adminClient = new SearchIndexClient(serviceEndpoint, new DefaultAzureCredential()); // Create a SearchClient to load and query documents
search Search Manage Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-manage-azure-cli.md
You cannot use tools or APIs to transfer content, such as an index, from one ser
Preview administration features are typically not available in the **az search** module. If you want to use a preview feature, [use the Management REST API](search-manage-rest.md) and a preview API version. <a name="list-search-services"></a>
service-connector Quickstart Cli App Service Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-app-service-connection.md
The [Azure CLI](/cli/azure) is a set of commands used to create and manage Azure
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.30.0 or higher of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
service-connector Quickstart Cli Container Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-container-apps.md
This quickstart shows you how to connect Azure Container Apps to other Cloud res
- At least one application deployed to Container Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [create and deploy a container to Container Apps](../container-apps/quickstart-portal.md). - Version 2.37.0 or higher of the Azure CLI must be installed. To upgrade to the latest version, run `az upgrade`. If using Azure Cloud Shell, the latest version is already installed.
service-connector Quickstart Cli Spring Cloud Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/quickstart-cli-spring-cloud-connection.md
Service Connector lets you quickly connect compute services to cloud services, w
- At least one application hosted by Azure Spring Apps in a [region supported by Service Connector](./concept-region-support.md). If you don't have one, [deploy your first application to Azure Spring Apps](../spring-apps/quickstart.md). - Version 2.37.0 or higher of the Azure CLI must be installed. To upgrade to the latest version, run `az upgrade`. If using Azure Cloud Shell, the latest version is already installed.
service-fabric How To Managed Cluster Application Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-application-secrets.md
Alternatively, we also support [KeyVaultReference](service-fabric-keyvault-refer
To create your own key vault and setup certificates, follow the instructions from Azure Key Vault by using the [Azure CLI, PowerShell, Portal, and more][key-vault-certs]. >[!NOTE]
-> The key vault must be [enabled for template deployment](../key-vault/general/manage-with-cli2.md#bkmk_KVperCLI) to allow the compute resource provider to get certificates from it and install it on cluster nodes.
+> The key vault must be [enabled for template deployment](../key-vault/general/manage-with-cli2.md#setting-key-vault-advanced-access-policies) to allow the compute resource provider to get certificates from it and install it on cluster nodes.
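If the vault doesn't have that access policy yet, it can be enabled with the Azure CLI; a sketch with a placeholder vault name:

```azurecli-interactive
# Allow Azure Resource Manager to retrieve certificates from the vault during template deployment.
az keyvault update --name "myKeyVault" --enabled-for-template-deployment true
```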
## Install the certificate in your cluster This certificate must be installed on each node in the cluster and Service Fabric managed clusters helps make this easy. The managed cluster service can push version-specific secrets to the nodes to help install secrets that won't change often like installing a private root CA to the nodes. For most production workloads we suggest using [KeyVault extension][key-vault-windows]. The Key Vault VM extension provides automatic refresh of certificates stored in an Azure key vault vs a static version.
service-fabric Service Fabric Cluster Creation Via Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-cluster-creation-via-arm.md
az sf cluster create --resource-group $resourceGroupName --location $resourceGro
### Use a pointer to a secret uploaded into a key vault
-To use an existing key vault, the key vault must be [enabled for deployment](../key-vault/general/manage-with-cli2.md#bkmk_KVperCLI) to allow the compute resource provider to get certificates from it and install it on cluster nodes.
+To use an existing key vault, the key vault must be [enabled for deployment](../key-vault/general/manage-with-cli2.md#setting-key-vault-advanced-access-policies) to allow the compute resource provider to get certificates from it and install it on cluster nodes.
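In PowerShell, the equivalent access policy can be set with [Set-AzKeyVaultAccessPolicy](/powershell/module/az.keyvault/set-azkeyvaultaccesspolicy); the vault name is a placeholder:

```azurepowershell-interactive
# Allow the compute resource provider (VMs) to retrieve certificates stored in the vault.
Set-AzKeyVaultAccessPolicy -VaultName "myKeyVault" -EnabledForDeployment
```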
Deploy the cluster using PowerShell:
storage Blob Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-cli.md
Blob storage supports block blobs, append blobs, and page blobs. Block blobs are
[!INCLUDE [storage-quickstart-prereq-include](../../../includes/storage-quickstart-prereq-include.md)] - This article requires version 2.0.46 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
storage Blob Containers Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/blob-containers-cli.md
In this how-to article, you learn to use the Azure CLI with Bash to work with co
[!INCLUDE [storage-quickstart-prereq-include](../../../includes/storage-quickstart-prereq-include.md)] - It's always a good idea to install the latest version of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
storage Storage Blob Event Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-event-quickstart.md
When you complete the steps described in this article, you see that the event da
[!INCLUDE [quickstarts-free-trial-note.md](../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.70 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
The Azure CLI is Azure's command-line experience for managing Azure resources. Y
[!INCLUDE [storage-quickstart-prereq-include](../../../includes/storage-quickstart-prereq-include.md)] - This article requires version 2.0.46 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
storage Storage Quickstart Blobs Php https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-php.md
- Title: Azure Quickstart - Create a blob in object storage using PHP
-description: Quickly learn to transfer objects to/from Azure Blob storage using PHP. Upload, download, and list block blobs in a container in Azure Blob storage.
-- Previously updated : 11/14/2018------
-# Transfer objects to/from Azure Blob storage using PHP
-
-In this quickstart, you learn how to use PHP to upload, download, and list block blobs in a container in Azure Blob storage.
-
-## Prerequisites
--
-Make sure you have the following additional prerequisites installed:
--- [PHP](https://php.net/downloads.php)-- [Azure Storage SDK for PHP](https://github.com/Azure/azure-storage-php)-
-## Download the sample application
-
-The [sample application](https://github.com/Azure-Samples/storage-blobs-php-quickstart.git) used in this quickstart is a basic PHP application.
-
-Use [git](https://git-scm.com/) to download a copy of the application to your development environment.
-
-```bash
-git clone https://github.com/Azure-Samples/storage-blobs-php-quickstart.git
-```
-
-This command clones the repository to your local git folder. To open the PHP sample application, look for the storage-blobs-php-quickstart folder, and open the phpqs.php file.
--
-## Configure your storage connection string
-
-In the application, you must provide your storage account name and account key to create the **BlobRestProxy** instance for your application. It is recommended to store these identifiers within an environment variable on the local machine running the application. Use one of the following examples depending on your Operating System to create the environment variable. Replace the **youraccountname** and **youraccountkey** values with your account name and key.
-
-# [Linux](#tab/linux)
-
-```bash
-export ACCOUNT_NAME=<youraccountname>
-export ACCOUNT_KEY=<youraccountkey>
-```
-
-# [Windows](#tab/windows)
-
-```cmd
-setx ACCOUNT_NAME=<youraccountname>
-setx ACCOUNT_KEY=<youraccountkey>
-```
---
-## Configure your environment
-
-Take the folder from your local git folder and place it in a directory served by your PHP server. Then, open a command prompt scoped to that same directory and enter: `php composer.phar install`
-
-## Run the sample
-
-This sample creates a test file in the '.' folder. The sample program uploads the test file to Blob storage, lists the blobs in the container, and downloads the file with a new name.
-
-Run the sample. The following output is an example of the output returned when running the application:
-
-```
-Uploading BlockBlob: HelloWorld.txt
-These are the blobs present in the container: HelloWorld.txt: https://myexamplesacct.blob.core.windows.net/blockblobsleqvxd/HelloWorld.txt
-
-This is the content of the blob uploaded: Hello Azure!
-```
-
-When you press the button displayed, the sample program deletes the storage container and the files. Before you continue, check your server's folder for the two files. You can open them and see they are identical.
-
-You can also use a tool such as the [Azure Storage Explorer](https://storageexplorer.com) to view the files in Blob storage. Azure Storage Explorer is a free cross-platform tool that allows you to access your storage account information.
-
-After you've verified the files, hit any key to finish the demo and delete the test files. Now that you know what the sample does, open the example.rb file to look at the code.
-
-## Understand the sample code
-
-Next, we walk through the sample code so that you can understand how it works.
-
-### Get references to the storage objects
-
-The first thing to do is create the references to the objects used to access and manage Blob storage. These objects build on each other, and each is used by the next one in the list.
--- Create an instance of the Azure storage **BlobRestProxy** object to set up connection credentials.-- Create the **BlobService** object that points to the Blob service in your storage account.-- Create the **Container** object, which represents the container you are accessing. Containers are used to organize your blobs like you use folders on your computer to organize your files.-
-Once you have the **blobClient** container object, you can create the **Block** blob object that points to the specific blob in which you are interested. Then you can perform operations such as upload, download, and copy.
-
-> [!IMPORTANT]
-> Container names must be lowercase. See [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata) for more information about container and blob names.
-
-In this section, you set up an instance of Azure storage client, instantiate the blob service object, create a new container, and set permissions on the container so the blobs are public. The container is called **quickstartblobs**.
-
-```php
- # Setup a specific instance of an Azure::Storage::Client
- $connectionString = "DefaultEndpointsProtocol=https;AccountName=".getenv('account_name').";AccountKey=".getenv('account_key');
-
- // Create blob client.
- $blobClient = BlobRestProxy::createBlobService($connectionString);
-
- # Create the BlobService that represents the Blob service for the storage account
- $createContainerOptions = new CreateContainerOptions();
-
- $createContainerOptions->setPublicAccess(PublicAccessType::CONTAINER_AND_BLOBS);
-
- // Set container metadata.
- $createContainerOptions->addMetaData("key1", "value1");
- $createContainerOptions->addMetaData("key2", "value2");
-
- $containerName = "blockblobs".generateRandomString();
-
- try {
- // Create container.
- $blobClient->createContainer($containerName, $createContainerOptions);
-```
-
-### Upload blobs to the container
-
-Blob storage supports block blobs, append blobs, and page blobs. Block blobs are the most commonly used, and that is what is used in this quickstart.
-
-To upload a file to a blob, get the full path of the file by joining the directory name and the file name on your local drive. You can then upload the file to the specified path using the **createBlockBlob()** method.
-
-The sample code takes a local file and uploads it to Azure. The file is stored as **myfile** and the name of the blob as **fileToUpload** in the code. The following example uploads the file to your container called **quickstartblobs**.
-
-```php
- $myfile = fopen("HelloWorld.txt", "w") or die("Unable to open file!");
- fclose($myfile);
-
- # Upload file as a block blob
- echo "Uploading BlockBlob: ".PHP_EOL;
- echo $fileToUpload;
- echo "<br />";
-
- $content = fopen($fileToUpload, "r");
-
- //Upload blob
- $blobClient->createBlockBlob($containerName, $fileToUpload, $content);
-```
-
-To perform a partial update of the content of a block blob, use the **createblocklist()** method. Block blobs can be as large as 4.7 TB, and can be anything from Excel spreadsheets to large video files. Page blobs are primarily used for the VHD files used to back IaaS VMs. Append blobs are used for logging, such as when you want to write to a file and then keep adding more information. Append blob should be used in a single writer model. Most objects stored in Blob storage are block blobs.
-
-### List the blobs in a container
-
-You can get a list of files in the container using the **listBlobs()** method. The following code retrieves the list of blobs, then loops through them, showing the names of the blobs found in a container.
-
-```php
- $listBlobsOptions = new ListBlobsOptions();
- $listBlobsOptions->setPrefix("HelloWorld");
-
- echo "These are the blobs present in the container: ";
-
- do{
- $result = $blobClient->listBlobs($containerName, $listBlobsOptions);
- foreach ($result->getBlobs() as $blob)
- {
- echo $blob->getName().": ".$blob->getUrl()."<br />";
- }
-
- $listBlobsOptions->setContinuationToken($result->getContinuationToken());
- } while($result->getContinuationToken());
-```
-
-### Get the content of your blobs
-
-Get the contents of your blobs using the **getBlob()** method. The following code displays the contents of the blob uploaded in a previous section.
-
-```php
- $blob = $blobClient->getBlob($containerName, $fileToUpload);
- fpassthru($blob->getContentStream());
-```
-
-### Clean up resources
-
-If you no longer need the blobs uploaded in this quickstart, you can delete the entire container using the **deleteContainer()** method. If the files created are no longer needed, you use the **deleteBlob()** method to delete the files.
-
-```php
- // Delete blob.
- echo "Deleting Blob".PHP_EOL;
- echo $fileToUpload;
- echo "<br />";
- $blobClient->deleteBlob($_GET["containerName"], $fileToUpload);
-
- // Delete container.
- echo "Deleting Container".PHP_EOL;
- echo $_GET["containerName"].PHP_EOL;
- echo "<br />";
- $blobClient->deleteContainer($_GET["containerName"]);
-
- //Deleting local file
- echo "Deleting file".PHP_EOL;
- echo "<br />";
- unlink($fileToUpload);
-```
-
-## Resources for developing PHP applications with blobs
-
-See these additional resources for PHP development with Blob storage:
--- View, download, and install the [PHP client library source code](https://github.com/Azure/azure-storage-php) for Azure Storage on GitHub.-- Explore [Blob storage samples](https://azure.microsoft.com/resources/samples/?sort=0&service=storage&platform=php&term=blob) written using the PHP client library.-
-## Next steps
-
-In this quickstart, you learned how to transfer files between a local disk and Azure blob storage using PHP. To learn more about working with PHP, continue to our PHP Developer center.
-
-> [!div class="nextstepaction"]
-> [PHP Developer Center](https://azure.microsoft.com/develop/php/)
-
-For more information about the Storage Explorer and Blobs, see [Manage Azure Blob storage resources with Storage Explorer](../../vs-azure-tools-storage-explorer-blobs.md?toc=/azure/storage/blobs/toc.json).
storage Storage Quickstart Blobs Ruby https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-ruby.md
- Title: "Quickstart: Azure Blob Storage client library - Ruby"
-description: Create a storage account and a container in Azure Blob Storage. Use the storage client library for Ruby to create a blob, download a blob, and list the blobs in a container.
-- Previously updated : 12/04/2020------
-# Quickstart: Azure Blob Storage client library for Ruby
-
-Learn how to use Ruby to create, download, and list blobs in a container in Microsoft Azure Blob Storage.
-
-## Prerequisites
--
-Make sure you have the following additional prerequisites installed:
--- [Ruby](https://www.ruby-lang.org/en/downloads/)-- [Azure Storage client library for Ruby](https://github.com/azure/azure-storage-ruby), using the [RubyGem package](https://rubygems.org/gems/azure-storage-blob):-
- ```console
- gem install azure-storage-blob
- ```
-
-## Download the sample application
-
-The [sample application](https://github.com/Azure-Samples/storage-blobs-ruby-quickstart.git) used in this quickstart is a basic Ruby application.
-
-Use [Git](https://git-scm.com/) to download a copy of the application to your development environment. This command clones the repository to your local machine:
-
-```console
-git clone https://github.com/Azure-Samples/storage-blobs-ruby-quickstart.git
-```
-
-Navigate to the *storage-blobs-ruby-quickstart* folder, and open the *example.rb* file in your code editor.
--
-## Configure your storage connection string
-
-Provide your storage account name and account key to create a [BlobService](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/BlobService) instance for your application.
-
-The following code in the *example.rb* file instantiates a new [BlobService](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/BlobService) object. Replace the *accountname* and *accountkey* values with your account name and key.
-
-```ruby
-# Create a BlobService object
-account_name = "accountname"
-account_key = "accountkey"
-
-blob_client = Azure::Storage::Blob::BlobService.create(
- storage_account_name: account_name,
- storage_access_key: account_key
-)
-```
-
-## Run the sample
-
-The sample creates a container in Blob Storage, creates a new blob in the container, lists the blobs in the container, and downloads the blob to a local file.
-
-Run the sample. Here is an example of the output from running the application:
-
-```console
-C:\azure-samples\storage-blobs-ruby-quickstart> ruby example.rb
-
-Creating a container: quickstartblobs18cd9ec0-f4ac-4688-a979-75c31a70503e
-
-Creating blob: QuickStart_6f8f29a8-879a-41fb-9db2-0b8595180728.txt
-
-List blobs in the container following continuation token
- Blob name: QuickStart_6f8f29a8-879a-41fb-9db2-0b8595180728.txt
-
-Downloading blob to C:/Users/azureuser/Documents/QuickStart_6f8f29a8-879a-41fb-9db2-0b8595180728.txt
-
-Paused, press the Enter key to delete resources created by the sample and exit the application
-```
-
-When you press Enter to continue, the sample program deletes the storage container and the local file. Before you continue, check your *Documents* folder for the downloaded file.
-
-You can also use [Azure Storage Explorer](https://storageexplorer.com) to view the files in your storage account. Azure Storage Explorer is a free cross-platform tool that allows you to access your storage account information.
-
-After you've verified the files, press the Enter key to delete the test files and end the demo. Open the *example.rb* file to look at the code.
-
-## Understand the sample code
-
-Next, we walk through the sample code so you can understand how it works.
-
-### Get references to the storage objects
-
-The first thing to do is create instances of the objects used to access and manage Blob Storage. These objects build on each other. Each is used by the next one in the list.
-
-- Create an instance of the Azure storage [BlobService](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/BlobService) object to set up connection credentials.
-- Create the [Container](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/Container/Container) object, which represents the container you're accessing. Containers are used to organize your blobs like you use folders on your computer to organize your files.
-
-Once you have the container object, you can create a [Block](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/Block) blob object that points to a specific blob in which you're interested. Use the [Block](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/Block) object to create, download, and copy blobs.
-
-> [!IMPORTANT]
-> Container names must be lowercase. For more information about container and blob names, see [Naming and Referencing Containers, Blobs, and Metadata](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata).
-
-The following example code:
-
-- Creates a new container called *quickstartblobs* with a unique ID appended
-- Sets permissions on the container so the blobs are public
-
-```ruby
-# Create a container
-container_name = "quickstartblobs" + SecureRandom.uuid
-puts "\nCreating a container: " + container_name
-container = blob_client.create_container(container_name)
-
-# Set the permission so the blobs are public
-blob_client.set_container_acl(container_name, "container")
-```
-
-### Create a blob in the container
-
-Blob Storage supports block blobs, append blobs, and page blobs. To create a blob, call the [create_block_blob](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob#create_block_blob-instance_method) method passing in the data for the blob.
-
-The following example creates a blob called *QuickStart_* with a unique ID and a *.txt* file extension in the container created earlier.
-
-```ruby
-# Create a new block blob containing 'Hello, World!'
-blob_name = "QuickStart_" + SecureRandom.uuid + ".txt"
-blob_data = "Hello, World!"
-puts "\nCreating blob: " + blob_name
-blob_client.create_block_blob(container.name, blob_name, blob_data)
-```
-
-Block blobs can be as large as 4.7 TB, and can be anything from spreadsheets to large video files. Page blobs are primarily used for the VHD files that back IaaS virtual machines. Append blobs are commonly used for logging, such as when you want to write to a file and then keep adding more information.
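As an aside, the append-blob logging pattern looks like this in a minimal sketch with the Python client library (the Ruby library offers equivalent calls; the names and connection string here are hypothetical):

```python
from azure.storage.blob import BlobServiceClient

# Hypothetical values for illustration only.
service = BlobServiceClient.from_connection_string("<connection-string>")
log_blob = service.get_blob_client(container="logs", blob="app.log")

log_blob.create_append_blob()                # start an empty append blob
log_blob.append_block(b"service started\n")  # each call appends at the end
log_blob.append_block(b"request handled\n")
```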
-
-### List the blobs in a container
-
-Get a list of files in the container using the [list_blobs](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/Container#list_blobs-instance_method) method. The following code retrieves the list of blobs, then displays their names.
-
-```ruby
-# List the blobs in the container
-puts "\nList blobs in the container following continuation token"
-nextMarker = nil
-loop do
- blobs = blob_client.list_blobs(container_name, { marker: nextMarker })
- blobs.each do |blob|
- puts "\tBlob name: #{blob.name}"
- end
- nextMarker = blobs.continuation_token
- break unless nextMarker && !nextMarker.empty?
-end
-```
-
-### Download a blob
-
-Download a blob to your local disk using the [get_blob](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob#get_blob-instance_method) method. The following code downloads the blob created in a previous section.
-
-```ruby
-# Download the blob
-
-# Set the path to the local folder for downloading
-# (the is_windows flag is defined earlier in the sample application)
-if(is_windows)
- local_path = File.expand_path("~/Documents")
-else
- local_path = File.expand_path("~/")
-end
-
-# Create the full path to the downloaded file
-full_path_to_file = File.join(local_path, blob_name)
-
-puts "\nDownloading blob to " + full_path_to_file
-blob, content = blob_client.get_blob(container_name, blob_name)
-File.open(full_path_to_file,"wb") {|f| f.write(content)}
-```
-
-### Clean up resources
-
-If a blob is no longer needed, use [delete_blob](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob#delete_blob-instance_method) to remove it. Delete an entire container using the [delete_container](https://www.rubydoc.info/gems/azure-storage-blob/2.0.1/Azure/Storage/Blob/Container#delete_container-instance_method) method. Deleting a container also deletes any blobs stored in the container.
-
-```ruby
-# Clean up resources, including the container and the downloaded file
-blob_client.delete_container(container_name)
-File.delete(full_path_to_file)
-```
-
-## Resources for developing Ruby applications with blobs
-
-See these additional resources for Ruby development:
-
-- View and download the [Ruby client library source code](https://github.com/Azure/azure-storage-ruby) for Azure Storage on GitHub.
-- Explore [Azure samples](/samples/browse/?products=azure&languages=ruby) written using the Ruby client library.
-- [Sample: Getting Started with Azure Storage in Ruby](https://github.com/Azure-Samples/storage-blob-ruby-getting-started)
-
-## Next steps
-
-In this quickstart, you learned how to transfer files between Azure Blob Storage and a local disk by using Ruby. To learn more about working with Blob Storage, continue to the Storage account overview.
-
-> [!div class="nextstepaction"]
-> [Storage account overview](../common/storage-account-overview.md)
-
-For more information about the Storage Explorer and Blobs, see [Manage Azure Blob Storage resources with Storage Explorer](../../vs-azure-tools-storage-explorer-blobs.md?toc=/azure/storage/blobs/toc.json).
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
With Azure File Sync, you will need to account for the following taking up space
We'll use an example to illustrate how to estimate the amount of free space you would need on your local disk. Let's say you installed your Azure File Sync agent on your Azure Windows VM and plan to create a server endpoint on disk F. You have 1 million files, all of which you'd like to tier, 100,000 directories, and a disk cluster size of 4 KiB. The disk size is 1000 GiB. You want to enable cloud tiering and set your volume free space policy to 20%.

1. NTFS allocates a cluster size for each of the tiered files. 1 million files * 4 KiB cluster size = 4,000,000 KiB (4 GiB)
-> [!Note]
-> The space occupied by tiered files is allocated by NTFS. Therefore, it will not show up in any UI.
-3. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KB cluster size = 4,400,000 KiB (4.4 GiB)
-4. Azure File Sync heatstore occupies 1.1 KiB per file. 1 million files * 1.1 KiB = 1,100,000 KiB (1.1 GiB)
-5. Volume free space policy is 20%. 1000 GiB * 0.2 = 200 GiB
+ > [!Note]
+ > The space occupied by tiered files is allocated by NTFS. Therefore, it will not show up in any UI.
+1. Sync metadata occupies a cluster size per item. (1 million files + 100,000 directories) * 4 KiB cluster size = 4,400,000 KiB (4.4 GiB)
+1. Azure File Sync heatstore occupies 1.1 KiB per file. 1 million files * 1.1 KiB = 1,100,000 KiB (1.1 GiB)
+1. Volume free space policy is 20%. 1000 GiB * 0.2 = 200 GiB
In this case, Azure File Sync would need about 209,500,000 KiB (209.5 GiB) of space for this namespace. Add this amount to any additional free space that is desired in order to figure out how much free space is required for this disk.
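If you want to rerun this estimate with your own numbers, here's a minimal Python sketch of the same arithmetic, using the example's figures and the document's rounded KiB-to-GiB conversion:

```python
# Free-space estimate for Azure File Sync cloud tiering (example figures).
files = 1_000_000
directories = 100_000
cluster_kib = 4            # disk cluster size in KiB
disk_gib = 1000            # volume size
free_space_policy = 0.20   # volume free space policy

tiered_gib = files * cluster_kib / 1_000_000                    # NTFS cluster per tiered file (~4 GiB)
metadata_gib = (files + directories) * cluster_kib / 1_000_000  # sync metadata, one cluster per item (~4.4 GiB)
heatstore_gib = files * 1.1 / 1_000_000                         # heatstore, 1.1 KiB per file (~1.1 GiB)
policy_gib = disk_gib * free_space_policy                       # space reserved by the policy (200 GiB)

print(f"~{tiered_gib + metadata_gib + heatstore_gib + policy_gib:.1f} GiB required")  # ~209.5 GiB
```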
storage Storage How To Use Files Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-use-files-portal.md
If you'd like to install and use PowerShell locally, you'll need the Azure Power
# [Azure CLI](#tab/azure-cli) - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
storage Storage Blobs Container Calculate Size Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-calculate-size-cli.md
This script calculates the size of a container in Azure Blob storage by totaling
> > The maximum number of blobs returned with a single listing call is 5000. If you need to return more than 5000 blobs, use a continuation token to request additional sets of results. ## Sample script
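For illustration only (the sample script itself uses the Azure CLI), here's a minimal Python sketch of the same totaling logic with the `azure-storage-blob` package; the connection string and container name are placeholders:

```python
from azure.storage.blob import ContainerClient

# Placeholder values; substitute your own.
container = ContainerClient.from_connection_string(
    "<connection-string>", container_name="<container-name>"
)

total_bytes = 0
# Each listing page returns at most 5,000 blobs; by_page() follows the
# continuation tokens between pages for you.
for page in container.list_blobs(results_per_page=5000).by_page():
    for blob in page:
        total_bytes += blob.size

print(f"Container size: {total_bytes} bytes")
```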
storage Storage Blobs Container Delete By Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-blobs-container-delete-by-prefix-cli.md
This script first creates a few sample containers in Azure Blob storage, then de
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
storage Storage Common Rotate Account Keys Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/scripts/storage-common-rotate-account-keys-cli.md
This script creates an Azure Storage account, displays the new storage account's
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
stream-analytics Custom Deserializer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/custom-deserializer.md
Title: Custom .NET deserializers for Azure Stream Analytics cloud jobs
-description: This doc demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio.
+description: This doc demonstrates how to create a custom .NET deserializer for an Azure Stream Analytics cloud job using Visual Studio (Preview)
Previously updated : 12/17/2020 Last updated : 01/12/2023
-# Custom .NET deserializers for Azure Stream Analytics in Visual Studio
+# Custom .NET deserializers for Azure Stream Analytics in Visual Studio (Preview)
Azure Stream Analytics has [built-in support for three data formats](stream-analytics-parsing-json.md): JSON, CSV, and Avro. With custom .NET deserializers, you can read data from other formats such as [Protocol Buffer](https://developers.google.com/protocol-buffers/), [Bond](https://github.com/Microsoft/bond), and other user-defined formats for both cloud and edge jobs.
When no longer needed, delete the resource group, the streaming job, and all rel
In this tutorial, you learned how to implement a custom .NET deserializer for the protocol buffer input serialization. To learn more about creating custom deserializers, continue to the following article: > [!div class="nextstepaction"]
-> [Create different .NET deserializers for Azure Stream Analytics jobs](custom-deserializer-examples.md)
+> [Create different .NET deserializers for Azure Stream Analytics jobs](custom-deserializer-examples.md)
stream-analytics Quick Create Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/quick-create-azure-cli.md
In this quickstart, you use the Azure CLI to define a Stream Analytics job that
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - Create a resource group. All Azure resources must be deployed into a resource group. Resource groups allow you to organize and manage related Azure resources.
synapse-analytics Apache Spark Advisor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/monitoring/apache-spark-advisor.md
Last updated 06/23/2022
-# Apache Spark Advisor in Azure Synapse Analytics
+# Apache Spark Advisor in Azure Synapse Analytics (Preview)
The Apache Spark advisor analyzes commands and code run by Spark and displays real-time advice for Notebook runs. The Spark advisor has built-in patterns to help users avoid common mistakes, offer recommendations for code optimization, perform error analysis, and locate the root cause of failures.
-## Built-in advices
+## Built-in advice
-### May return inconsistent results when using 'randomSplit'
+#### May return inconsistent results when using 'randomSplit'
Inconsistent or inaccurate results may be returned when working with the results of the 'randomSplit' method. Use Apache Spark (RDD) caching before using the 'randomSplit' method. The randomSplit() method is equivalent to performing sample() on your data frame multiple times, with each sample refetching, partitioning, and sorting your data frame within partitions. The data distribution across partitions and the sorting order are important for both randomSplit() and sample(). If either changes upon data refetch, there may be duplicates or missing values across splits, and the same sample using the same seed may produce different results. These inconsistencies may not happen on every run, but to eliminate them completely, cache your data frame, repartition on one or more columns, or apply aggregate functions such as groupBy.
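A minimal PySpark sketch of that advice (the source path is illustrative; `spark` is the notebook's session):

```python
df = spark.read.parquet("/data/events")  # illustrative input
df = df.cache()
df.count()  # action that materializes the cache

# Subsequent splits now draw from the same cached data and ordering.
train, test = df.randomSplit([0.8, 0.2], seed=42)
```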
-### Table/view name is already in use
+#### Table/view name is already in use
A view already exists with the same name as the created table, or a table already exists with the same name as the created view. When this name is used in queries or applications, only the view will be returned no matter which one was created first. To avoid conflicts, rename either the table or the view.
-### Unable to recognize a hint
+#### Unable to recognize a hint
The selected query contains a hint that isn't recognized. Verify that the hint is spelled correctly. ```scala spark.sql("SELECT /*+ unknownHint */ * FROM t1") ```
-### Unable to find a specified relation name(s)
+#### Unable to find a specified relation name(s)
Unable to find the relation(s) specified in the hint. Verify that the relation(s) are spelled correctly and accessible within the scope of the hint. ```scala spark.sql("SELECT /*+ BROADCAST(unknownTable) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str") ```
-### A hint in the query prevents another hint from being applied
+#### A hint in the query prevents another hint from being applied
The selected query contains a hint that prevents another hint from being applied. ```scala spark.sql("SELECT /*+ BROADCAST(t1), MERGE(t1, t2) */ * FROM t1 INNER JOIN t2 ON t1.str = t2.str") ```
-### Enable 'spark.advise.divisionExprConvertRule.enable' to reduce rounding error propagation
+#### Enable 'spark.advise.divisionExprConvertRule.enable' to reduce rounding error propagation
This query contains an expression with the Double type. We recommend that you enable the configuration 'spark.advise.divisionExprConvertRule.enable', which can help reduce the number of division expressions and the propagation of rounding errors. ```text "t.a/t.b/t.c" convert into "t.a/(t.b * t.c)" ```
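As a sketch, the setting can be enabled from a notebook cell like this (the configuration key is the one named in the advice above):

```python
# Turn on the advisor's division-expression rewrite,
# e.g. "t.a/t.b/t.c" becomes "t.a/(t.b * t.c)".
spark.conf.set("spark.advise.divisionExprConvertRule.enable", "true")
```

The same `spark.conf.set` pattern applies to the 'spark.advise.nonEqJoinConvertRule.enable' setting described in the next section.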
-### Enable 'spark.advise.nonEqJoinConvertRule.enable' to improve query performance
+#### Enable 'spark.advise.nonEqJoinConvertRule.enable' to improve query performance
This query contains a time-consuming join due to an "Or" condition within the query. We recommend that you enable the configuration 'spark.advise.nonEqJoinConvertRule.enable', which can help convert the join triggered by the "Or" condition to a sort-merge join (SMJ) or broadcast hash join (BHJ) to accelerate this query.
-### Optimize delta table with small files compaction
+#### Optimize delta table with small files compaction
This query is on a delta table with many small files. To improve the performance of queries, run the OPTIMIZE command on the delta table. More details can be found in this [article](https://aka.ms/small-file-advise-delta).
-### Optimize Delta table with ZOrder
+#### Optimize Delta table with ZOrder
This query is on a Delta table and contains a highly selective filter. To improve the performance of queries, run the OPTIMIZE ZORDER BY command on the Delta table. More details can be found in this [article](https://aka.ms/small-file-advise-delta).
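A sketch of running both commands from a notebook, assuming a runtime with Delta Lake OPTIMIZE support and a hypothetical table name:

```python
# Compact small files in the Delta table.
spark.sql("OPTIMIZE myDeltaTable")

# Additionally co-locate rows on a highly selective filter column.
spark.sql("OPTIMIZE myDeltaTable ZORDER BY (eventDate)")
```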
The Apache Spark advisor displays the advice, including info, warning and error
## Next steps
-For more information on monitoring pipeline runs, see the [Monitor pipeline runs using Synapse Studio](how-to-monitor-pipeline-runs.md) article.
+For more information on monitoring Apache Spark applications, see the [Monitor Apache Spark applications using Synapse Studio](apache-spark-applications.md) article.
+
+For more information about creating a notebook, see [How to use Synapse notebooks](../spark/apache-spark-development-using-notebooks.md).
synapse-analytics Quickstart Create Workspace Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/quickstart-create-workspace-cli.md
In this quickstart, you learn to create a Synapse workspace by using the Azure C
> [!IMPORTANT] > The Azure Synapse workspace needs to be able to read and write to the selected ADLS Gen2 account. In addition, for any storage account that you link as the primary storage account, you must have enabled **hierarchical namespace** at the creation of the storage account, as described on the [Create a Storage Account](../storage/common/storage-account-create.md?tabs=azure-portal#create-a-storage-account) page. ## Create an Azure Synapse workspace using the Azure CLI
synapse-analytics Create Data Warehouse Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql-data-warehouse/create-data-warehouse-azure-cli.md
Create a Synapse SQL pool (data warehouse) in Azure Synapse Analytics using the Azure CLI. ## Getting started
traffic-manager Quickstart Create Traffic Manager Profile Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-cli.md
In this quickstart, you'll create two instances of a web application. Each of th
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
traffic-manager Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-cli-websites-high-availability.md
This script creates a resource group, two app service plans, two web apps, a tra
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
traffic-manager Traffic Manager Subnet Override Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-cli.md
There are two types of routing profiles that support subnet overrides:
To create a Traffic Manager subnet override, you can use Azure CLI to add the subnets for the override to the Traffic Manager endpoint. - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-desktop Create Host Pools Azure Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-azure-marketplace.md
To start creating your new host pool:
Start by preparing your environment for the Azure CLI: After you sign in, use the [az desktopvirtualization hostpool create](/cli/azure/desktopvirtualization#az-desktopvirtualization-hostpool-create) command to create the new host pool, optionally creating a registration token for session hosts to join the host pool:
virtual-desktop Create Host Pools Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-host-pools-powershell.md
$token = Get-AzWvdRegistrationInfo -ResourceGroupName <resourcegroupname> -HostP
If you haven't already done so, prepare your environment for the Azure CLI: After you sign in, use the [az desktopvirtualization hostpool create](/cli/azure/desktopvirtualization#az-desktopvirtualization-hostpool-create) command to create the new host pool, optionally creating a registration token for session hosts to join the host pool:
virtual-desktop Create Validation Host Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/create-validation-host-pool.md
The results from the cmdlet should look similar to this output:
If you haven't already done so, prepare your environment for the Azure CLI and sign in. To define the new host pool as a validation host pool, use the [az desktopvirtualization hostpool update](/cli/azure/desktopvirtualization#az-desktopvirtualization-hostpool-update) command:
virtual-desktop Manage App Groups Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/manage-app-groups-powershell.md
This article assumes you've followed the instructions in [Set up the PowerShell
This article assumes you've already set up your environment for the Azure CLI, and that you've signed in to your Azure account.
virtual-desktop Prerequisites https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/prerequisites.md
You have a choice of operating systems that you can use for session hosts to pro
> [!IMPORTANT] > - Azure Virtual Desktop doesn't support 32-bit operating systems or SKUs not listed in the previous table.
+>
+> - Support for Windows 7 ended on January 10, 2023.
+>
> - [Ephemeral OS disks for Azure VMs](../virtual-machines/ephemeral-os-disks.md) are not supported. You can use operating system images provided by Microsoft in the [Azure Marketplace](https://azuremarketplace.microsoft.com), or your own custom images stored in an Azure Compute Gallery, as a managed image, or storage blob. To learn more about how to create custom images, see:
virtual-desktop Connect Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-windows.md
Before you can access your resources, you'll need to meet the prerequisites:
- Windows Server 2019 - Windows Server 2016 - Windows Server 2012 R2
+
+ > [!IMPORTANT]
+ > Support for Windows 7 ended on January 10, 2023.
- Download the Remote Desktop client installer, choosing the correct version for your device: - [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2068602) *(most common)*
virtual-desktop Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new.md
Title: What's new in Azure Virtual Desktop? - Azure
description: New features and product updates for Azure Virtual Desktop. Previously updated : 12/08/2022 Last updated : 01/13/2023
Azure Virtual Desktop updates regularly. This article is where you'll find out a
Make sure to check back here often to keep up with new updates.
+## December 2022
+
+Here's what changed in December 2022:
+
+### FSLogix 2210 now generally available
+
+FSLogix version 2210 is now generally available. This version introduces new features like VHD Disk Compaction, an improved user experience with AppX applications like built-in Windows apps (inbox apps), and Recycle Bin roaming. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/announcing-general-availability-of-fslogix-2210/ba-p/3695166) or [What's new in FSLogix](/fslogix/whats-new?context=%2Fazure%2Fvirtual-desktop%2Fcontext%2Fcontext#fslogix-2210-29836152326).
+
+### India metadata service now generally available
+
+The Azure Virtual Desktop metadata database in India is now generally available. Customers can now store their Azure Virtual Desktop objects and metadata within a database located in the India geography. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/azure-virtual-desktop-metadata-database-is-now-available-in/ba-p/3670768).
+
+### Confidential Virtual Machine support for Azure Virtual Desktop now in public preview
+
+Azure Confidential Virtual Machine (VM) support is now in public preview. Azure Confidential VMs increase data privacy and security by protecting data in use. The public preview update also adds support for Windows 11 22H2 to Confidential VMs. For more information, see [our blog post](https://techcommunity.microsoft.com/t5/azure-virtual-desktop-blog/confidential-virtual-machine-support-for-azure-virtual-desktop/ba-p/3686350).
+ ## November 2022 Here's what changed in November 2022:
virtual-machine-scale-sets Disk Encryption Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/disk-encryption-cli.md
The Azure CLI is used to create and manage Azure resources from the command line or in scripts. This quickstart shows you how to use the Azure CLI to create and encrypt a Virtual Machine Scale Set. For more information on applying Azure Disk encryption to a Virtual Machine Scale Set, see [Azure Disk Encryption for Virtual Machine Scale Sets](disk-encryption-overview.md). - This article requires version 2.0.31 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/quick-create-cli.md
A Virtual Machine Scale Set allows you to deploy and manage a set of auto-scalin
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
When you create a scale set, you define the number of VM instances that you wish
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.32 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Tutorial Autoscale Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-template.md
When you create a scale set, you define the number of VM instances that you wish
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Tutorial Connect To Instances Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-connect-to-instances-cli.md
A Virtual Machine Scale Set allows you to deploy and manage a set of virtual mac
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Tutorial Create And Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-create-and-manage-cli.md
A Virtual Machine Scale Set allows you to deploy and manage a set of virtual mac
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Tutorial Install Apps Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-install-apps-cli.md
To run applications on virtual machine (VM) instances in a scale set, you first
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Tutorial Use Custom Image Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-custom-image-cli.md
When you create a scale set, you specify an image to be used when the VM instanc
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.4.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machine-scale-sets Tutorial Use Disks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-use-disks-cli.md
Virtual Machine Scale Sets use disks to store the VM instance's operating system
If you donΓÇÖt have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. - This article requires version 2.0.29 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machines Azure Cli Change Subscription Marketplace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-cli-change-subscription-marketplace.md
This script demonstrates three operations:
- Move the snapshot to a different subscription. - Create a virtual machine based on that snapshot. ## Sample script
virtual-machines Disks Cross Tenant Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-cross-tenant-customer-managed-keys.md
$config = New-AzDiskEncryptionSetConfig `
# [Azure CLI](#tab/azure-cli) In the command below, `myAssignedId` should be the resource ID of the user-assigned managed identity that you created earlier, and `myFederatedClientId` should be the application ID (client ID) of the multi-tenant application.
virtual-machines Image Builder Permissions Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-permissions-cli.md
If you want VM Image Builder to distribute images, you need to create a user-ass
You must set up permissions and privileges prior to building an image. The following sections detail how to configure possible scenarios by using the Azure CLI. ## Create a user-assigned managed identity
virtual-machines Image Builder Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/image-builder-vnet.md
This article shows you how to use Azure VM Image Builder to create a basic, customized Linux image that has access to existing resources on a virtual network. The build virtual machine (VM) you create is deployed to a new or existing virtual network that you specify in your subscription. When you use an existing Azure virtual network, VM Image Builder doesn't require public network connectivity. ## Set variables and permissions
virtual-machines Tutorial Config Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-config-management.md
In this tutorial, you learn how to:
> * Manage Linux updates > * Monitor changes and inventory - This tutorial requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machines Tutorial Elasticsearch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/tutorial-elasticsearch.md
In this tutorial you learn how to:
This deployment is suitable for basic development with the Elastic Stack. For more on the Elastic Stack, including recommendations for a production environment, see the [Elastic documentation](https://www.elastic.co/guide/https://docsupdatetracker.net/index.html) and the [Azure Architecture Center](/azure/architecture/elasticsearch/). - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machines Lsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv3-series.md
The Lsv3-series VMs are available in sizes from 8 to 80 vCPUs. There are 8 GiB o
| Standard_L80s_v3 | 80 | 640 | 800 | 10x1.92TB | 3.8M/20000 | 80000/2160 | 80000/3000 | 32 | 8 | 32000 | 1. **Temp disk**: Lsv3-series VMs have a standard SCSI-based temp resource disk for use by the OS paging or swap file (`D:` on Windows, `/dev/sdb` on Linux). This disk provides 80 GiB of storage, 4,000 IOPS, and 80 MBps transfer rate for every 8 vCPUs. For example, Standard_L80s_v3 provides 800 GiB at 40,000 IOPS and 800 MBps. This configuration ensures the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data is lost on stop or deallocation.
-1. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM. Local NVMe disks aren't encrypted by [Azure Storage encryption](disk-encryption.md), even if you enable [encryption at host](disk-encryption.md#supported-vm-sizes).
-1. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
-1. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
+2. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM.
+3. **NVMe Disk encryption**: Lsv3 VMs created or allocated on or after 1/1/2023 have their local NVMe drives encrypted by default using hardware-based encryption with a platform-managed key, except for the regions listed below.
+> [!NOTE]
+> Central US, East US 2, and Qatar Central do not support local NVMe disk encryption, but support will be added in the future.
+4. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
+5. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
> [!NOTE] > Lsv3-series VMs don't provide host cache for data disk as it doesn't benefit the Lsv3 workloads.
virtual-machines Maintenance Configurations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-cli.md
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets :heavy_check_mark: Uniform scale sets
-Maintenance Configurations lets you decide when to apply platform updates to various Azure resources. This topic covers the Azure CLI options for Dedicated Hosts and Isolated VMs. For more about benefits of using Maintenance Configurations, its limitations, and other management options, see [Managing platform updates with Maintenance Configurations](maintenance-configurations.md).
+Maintenance Configurations lets you decide when to apply platform updates to various Azure resources. This topic covers the Azure CLI options for using this service. For more about benefits of using Maintenance Configurations, its limitations, and other management options, see [Managing platform updates with Maintenance Configurations](maintenance-configurations.md).
> [!IMPORTANT] > There are different **scopes** which support certain machine types and schedules, so please ensure you are selecting the right scope for your virtual machine. ## Create a maintenance configuration
-Use `az maintenance configuration create` to create a maintenance configuration. This example creates a maintenance configuration named *myConfig* scoped to the host.
+The first step in creating a maintenance configuration is to create a resource group as a container for your configuration. In this example, a resource group named *myMaintenanceRG* is created in *eastus*. If you already have a resource group that you want to use, you can skip this part and replace the resource group name with your own in the rest of the examples.
```azurecli-interactive az group create \ --location eastus \ --name myMaintenanceRG
-az maintenance configuration create \
- -g myMaintenanceRG \
- --resource-name myConfig \
- --maintenance-scope host\
- --location eastus
```
-Copy the configuration ID from the output to use later.
+After creating the resource group, use `az maintenance configuration create` to create a maintenance configuration.
+
+### Host
-Using `--maintenance-scope host` ensures that the maintenance configuration is used for controlling updates to the host infrastructure.
+This example creates a maintenance configuration named *myConfig* scoped to host machines with a scheduled window of 5 hours on the fourth Monday of every month.
-If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
+```azurecli-interactive
+az maintenance configuration create \
+ --resource-group myMaintenanceRG \
+ --resource-name myConfig \
+ --maintenance-scope host \
+ --location eastus \
+ --maintenance-window-duration "05:00" \
+ --maintenance-window-recur-every "Month Fourth Monday" \
+ --maintenance-window-start-date-time "2020-12-30 08:00" \
+ --maintenance-window-time-zone "Pacific Standard Time"
+```
+
+Using `--maintenance-scope host` ensures that the maintenance configuration is used for controlling updates to the host infrastructure. If you try to create a configuration with the same name, but in a different location, you will get an error. Configuration names must be unique to your resource group.
-You can query for available maintenance configurations using `az maintenance configuration list`.
+To check that you've created the maintenance configuration successfully, query for available maintenance configurations by using `az maintenance configuration list`.
```azurecli-interactive
-az maintenance configuration list --query "[].{Name:name, ID:id}" -o table
+az maintenance configuration list \
+   --query "[].{Name:name, ID:id}" \
+   --output table
```
-### Create a maintenance configuration with scheduled window
-You can also declare a scheduled window when Azure will apply the updates on your resources. This example creates a maintenance configuration named myConfig with a scheduled window of 5 hours on the fourth Monday of every month. Once you create a scheduled window you no longer have to apply the updates manually.
+> [!NOTE]
+> Maintenance recurrence can be expressed as daily, weekly or monthly. Some examples are:
+> - **daily**- maintenance-window-recur-every: "Day" **or** "3Days"
+> - **weekly**- maintenance-window-recur-every: "3Weeks" **or** "Week Saturday,Sunday"
+> - **monthly**- maintenance-window-recur-every: "Month day23,day24" **or** "Month Last Sunday" **or** "Month Fourth Monday"
+
+### Virtual Machine Scale Sets
+
+This example creates a maintenance configuration named *myConfig* with the osimage scope for virtual machine scale sets with a scheduled window of 5 hours on the fourth Monday of every month.
```azurecli-interactive az maintenance configuration create \
- -g myMaintenanceRG \
+ --resource-group myMaintenanceRG \
--resource-name myConfig \
- --maintenance-scope host \
+ --maintenance-scope osimage \
--location eastus \ --maintenance-window-duration "05:00" \ --maintenance-window-recur-every "Month Fourth Monday" \
az maintenance configuration create \
--maintenance-window-time-zone "Pacific Standard Time" ```
-> [!IMPORTANT]
-> Maintenance **duration** must be *2 hours* or longer.
-
+### Guest VMs
-Maintenance recurrence can be expressed as daily, weekly or monthly. Some examples are:
-- **daily**- maintenance-window-recur-every: "Day" **or** "3Days"
-- **weekly**- maintenance-window-recur-every: "3Weeks" **or** "Week Saturday,Sunday"
-- **monthly**- maintenance-window-recur-every: "Month day23,day24" **or** "Month Last Sunday" **or** "Month Fourth Monday"
+This example creates a maintenance configuration named *myConfig* scoped to guest machines (VMs and Azure Arc-enabled servers) with a scheduled window of 2 hours every 20 days.
+```azurecli-interactive
+az maintenance configuration create \
+ --resource-group myMaintenanceRG \
+ --resource-name myConfig \
+ --maintenance-scope InGuestPatch \
+ --location eastus \
+ --maintenance-window-duration "02:00" \
+ --maintenance-window-recur-every "20days" \
+ --maintenance-window-start-date-time "2022-12-30 07:00" \
+ --maintenance-window-time-zone "Pacific Standard Time" \
+ --install-patches-linux-parameters package-name-masks-to-exclude="ppt" package-name-masks-to-include="apt" classifications-to-include="Other" \
+ --install-patches-windows-parameters kb-numbers-to-exclude="KB123456" kb-numbers-to-include="KB123456" classifications-to-include="FeaturePack" \
+ --reboot-setting "IfRequired" \
+ --extension-properties InGuestPatchMode="User"
+```
## Assign the configuration
Use `az maintenance assignment create` to assign the configuration to your machi
### Isolated VM
-Apply the configuration to a VM using the ID of the configuration. Specify `--resource-type virtualMachines` and supply the name of the VM for `--resource-name`, and the resource group for to the VM in `--resource-group`, and the location of the VM for `--location`.
+Apply the configuration to an isolated VM by using the ID of the configuration. Specify `--resource-type virtualMachines`, supply the name of the VM for `--resource-name`, the resource group of the VM for `--resource-group`, and the location of the VM for `--location`.
```azurecli-interactive az maintenance assignment create \
az maintenance assignment create \
--resource-type virtualMachines \ --provider-name Microsoft.Compute \ --configuration-assignment-name myConfig \
- --maintenance-configuration-id "/subscriptions/1111abcd-1a11-1a2b-1a12-123456789abc/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
+ --maintenance-configuration-id "/subscriptions/{subscription ID}/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
``` ### Dedicated host
The parameter `--resource-id` is the ID of the host. You can use [az-vm-host-get
```azurecli-interactive az maintenance assignment create \
- -g myDHResourceGroup \
+ --resource-group myDHResourceGroup \
--resource-name myHost \ --resource-type hosts \ --provider-name Microsoft.Compute \ --configuration-assignment-name myConfig \
- --maintenance-configuration-id "/subscriptions/1111abcd-1a11-1a2b-1a12-123456789abc/resourcegroups/myDhResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig" \
- -l eastus \
+ --maintenance-configuration-id "/subscriptions/{subscription ID}/resourcegroups/myDhResourceGroup/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig" \
+ --location eastus \
--resource-parent-name myHostGroup \ --resource-parent-type hostGroups ```
+### Virtual Machine Scale Sets
+
+```azurecli-interactive
+az maintenance assignment create \
+ --resource-group myMaintenanceRG \
+ --location eastus \
+ --resource-name myVMSS \
+ --resource-type virtualMachineScaleSets \
+ --provider-name Microsoft.Compute \
+ --configuration-assignment-name myConfig \
+ --maintenance-configuration-id "/subscriptions/{subscription ID}/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
+```
+
+### Guest VMs
+
+```azurecli-interactive
+az maintenance assignment create \
+ --resource-group myMaintenanceRG \
+ --location eastus \
+ --resource-name myVM \
+ --resource-type virtualMachines \
+ --provider-name Microsoft.Compute \
+ --configuration-assignment-name myConfig \
+ --maintenance-configuration-id "/subscriptions/{subscription ID}/resourcegroups/myMaintenanceRG/providers/Microsoft.Maintenance/maintenanceConfigurations/myConfig"
+```
+ ## Check configuration You can verify that the configuration was applied correctly, or check to see what configuration is currently applied using `az maintenance assignment list`.
az maintenance assignment list \
--resource-parent-name myHostGroup \ --resource-parent-type hostGroups --query "[].{ResourceGroup:resourceGroup,configName:name}" \
- -o table
+ --output table
+```
+
+### Virtual Machine Scale Sets
+
+```azurecli-interactive
+az maintenance assignment list \
+ --provider-name Microsoft.Compute \
+ --resource-group myMaintenanceRG \
+ --resource-name myVMSS \
+ --resource-type virtualMachineScaleSets \
+ --query "[].{resource:resourceGroup, configName:name}" \
+ --output table
```
+### Guest VMs
+
+```azurecli-interactive
+az maintenance assignment list \
+ --provider-name Microsoft.Compute \
+ --resource-group myMaintenanceRG \
+ --resource-name myVM \
+ --resource-type virtualMachines \
+ --query "[].{resource:resourceGroup, configName:name}" \
+ --output table
+```
## Check for pending updates
Check for pending updates for an isolated VM. In this example, the output is for
```azurecli-interactive az maintenance update list \
- -g myMaintenanceRg \
+ --subscription {subscription ID} \
+ --resource-group myMaintenanceRg \
--resource-name myVM \ --resource-type virtualMachines \ --provider-name Microsoft.Compute \
- -o table
+ --output table
``` ### Dedicated host
To check for pending updates for a dedicated host. In this example, the output i
```azurecli-interactive az maintenance update list \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
- -g myHostResourceGroup \
+ --subscription {subscription ID} \
+ --resource-group myHostResourceGroup \
--resource-name myHost \ --resource-type hosts \ --provider-name Microsoft.Compute \ --resource-parent-name myHostGroup \ --resource-parent-type hostGroups \
- -o table
+ --output table
``` ## Apply updates
-Use `az maintenance apply update` to apply pending updates. On success, this command will return JSON containing the details of the update. Apply update calls can take upto 2 hours to complete.
+Use `az maintenance apply update` to apply pending updates. On success, this command will return JSON containing the details of the update. Apply update calls can take up to 2 hours to complete.
### Isolated VM
Create a request to apply updates to an isolated VM.
```azurecli-interactive az maintenance applyupdate create \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ --subscription {subscriptionID} \
--resource-group myMaintenanceRG \ --resource-name myVM \ --resource-type virtualMachines \
Apply updates to a dedicated host.
```azurecli-interactive az maintenance applyupdate create \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ --subscription {subscriptionID} \
--resource-group myHostResourceGroup \ --resource-name myHost \ --resource-type hosts \
az maintenance applyupdate create \
--resource-parent-type hostGroups ```
+### Virtual Machine Scale Sets
+
+Apply updates to a scale set.
+
+```azurecli-interactive
+az maintenance applyupdate create \
+ --subscription {subscriptionID} \
+ --resource-group myMaintenanceRG \
+ --resource-name myVMSS \
+ --resource-type virtualMachineScaleSets \
+ --provider-name Microsoft.Compute
+```
+ ## Check the status of applying updates You can check on the progress of the updates using `az maintenance applyupdate get`.
LastUpdateTime will be the time when the update got complete, either initiated b
```azurecli-interactive az maintenance applyupdate get \
+ --subscription {subscriptionID} \
--resource-group myMaintenanceRG \ --resource-name myVM \ --resource-type virtualMachines \ --provider-name Microsoft.Compute \
- --apply-update-name default
+ --apply-update-name myUpdateName \
+ --query "{LastUpdate:lastUpdateTime, Name:name, ResourceGroup:resourceGroup, Status:status}" \
+ --output table
``` ### Dedicated host ```azurecli-interactive az maintenance applyupdate get \
- --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
+ --subscription {subscriptionID} \
--resource-group myMaintenanceRG \ --resource-name myHost \ --resource-type hosts \
az maintenance applyupdate get \
--output table ```
+### Virtual Machine Scale Sets
+
+```azurecli-interactive
+az maintenance applyupdate get \
+ --subscription {subscriptionID} \
+ --resource-group myMaintenanceRG \
+ --resource-name myVMSS \
+ --resource-type virtualMachineScaleSets \
+ --provider-name Microsoft.Compute \
+ --apply-update-name myUpdateName \
+ --query "{LastUpdate:lastUpdateTime, Name:name, ResourceGroup:resourceGroup, Status:status}" \
+ --output table
+```
## Delete a maintenance configuration
Use `az maintenance configuration delete` to delete a maintenance configuration.
```azurecli-interactive az maintenance configuration delete \ --subscription 1111abcd-1a11-1a2b-1a12-123456789abc \
- -g myResourceGroup \
+ --resource-group myResourceGroup \
--resource-name myConfig ```
virtual-machines Maintenance Configurations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations-powershell.md
You may also be asked to confirm that you want to install from an *untrusted rep
## Create a maintenance configuration
-Create a resource group as a container for your configuration. In this example, a resource group named *myMaintenanceRG* is created in *eastus*. If you already have a resource group that you want to use, you can skip this part and replace the resource group name with your own in the rest of the examples.
+The first step in creating a maintenance configuration is to create a resource group as a container for your configuration. In this example, a resource group named *myMaintenanceRG* is created in *eastus*. If you already have a resource group that you want to use, you can skip this part and replace the resource group name with your own in the rest of the examples.
```azurepowershell-interactive New-AzResourceGroup `
After you have created your configuration, you might want to also assign machine
### Isolated VM
-Apply the configuration to a VM using the ID of the configuration. Specify `-ResourceType VirtualMachines` and supply the name of the VM for `-ResourceName`, and the resource group of the VM for `-ResourceGroupName`.
+Assign the configuration to a VM using the ID of the configuration. Specify `-ResourceType VirtualMachines` and supply the name of the VM for `-ResourceName`, and the resource group of the VM for `-ResourceGroupName`.
```azurepowershell-interactive New-AzConfigurationAssignment `
New-AzConfigurationAssignment `
-MaintenanceConfigurationId "configID" ``` +
+### Guest
+
+```azurepowershell-interactive
+New-AzConfigurationAssignment `
+ -ResourceGroupName "myResourceGroup" `
+ -Location "eastus" `
+ -ResourceName "myGuest" `
+ -ResourceType "VirtualMachines" `
+ -ProviderName "Microsoft.Compute" `
+ -ConfigurationAssignmentName "configName" `
+ -MaintenanceConfigurationId "configID"
+```
+ ## Check for pending updates Use [Get-AzMaintenanceUpdate](/powershell/module/az.maintenance/get-azmaintenanceupdate) to see if there are pending updates. Use `-subscription` to specify the Azure subscription of the VM if it is different from the one that you are logged into.
virtual-machines Maintenance Configurations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md
Maintenance Configurations gives you the ability to control and manage updates for many Azure virtual machine resources since Azure frequently updates its infrastructure to improve reliability, performance, security or launch new features. Most updates are transparent to users, but some sensitive workloads, like gaming, media streaming, and financial transactions, can't tolerate even few seconds of a VM freezing or disconnecting for maintenance. Maintenance Configurations is integrated with Azure Resource Graph (ARG) for low latency and high scale customer experience. >[!IMPORTANT]
-> Users are required to have a role of at least contributor in order to use maintenance configurations.
+> Users are required to have at least the Contributor role in order to use maintenance configurations. Users also have to ensure that their subscription is registered with the Maintenance resource provider (Microsoft.Maintenance).
## Scopes
virtual-machines Monitor Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/monitor-vm.md
Using recommended alerts, a separate alert rule is created for each VM. You can
For more information about the various alerts for Azure virtual machines, see the following resources: - See [Monitor virtual machines with Azure Monitor: Alerts](../azure-monitor/vm/monitor-virtual-machine-alerts.md) for common alert rules for virtual machines. -- See [Monitor virtual machines with Azure Monitor: Workloads](../azure-monitor/vm/monitor-virtual-machine-workloads.md) for data you can collect from VM workloads that you can use to create alerts. - See [Create a log query alert for an Azure resource](../azure-monitor/alerts/tutorial-log-alert.md) for a tutorial on creating a log query alert rule. - For common log alert rules, go to the **Queries** pane in Log Analytics. For **Resource type**, enter **Virtual machines**, and for **Type**, enter **Alerts**.
virtual-machines Copy Managed Disks To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-to-same-or-different-subscription.md
This script copies a managed disk to same or different subscription but in the s
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Copy Managed Disks Vhd To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-managed-disks-vhd-to-storage-account.md
This script exports the underlying VHD of a managed disk to a storage account in
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Copy Snapshot To Same Or Different Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-same-or-different-subscription.md
This script copies a snapshot of a managed disk to same or different subscriptio
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Copy Snapshot To Storage Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/copy-snapshot-to-storage-account.md
This script exports a managed snapshot to a storage account in a different region.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Create Managed Disk From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-snapshot.md
This script creates a managed disk from a snapshot. Use it to restore a virtual
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Create Managed Disk From Vhd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-managed-disk-from-vhd.md
This script creates a managed disk from a VHD file in a storage account in the s
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Create Vm From Managed Os Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-managed-os-disks.md
This script creates a virtual machine by attaching an existing managed disk as O
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Create Vm From Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/scripts/create-vm-from-snapshot.md
This script creates a virtual machine from a snapshot of an OS disk.
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-machines Virtual Machines Create Restore Points Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/virtual-machines-create-restore-points-cli.md
In this tutorial, you learn how to:
> * [Track the progress of Copy operation](#step-3-track-the-status-of-the-vm-restore-point-creation)
> * [Restore a VM](#restore-a-vm-from-vm-restore-point)

- Learn more about the [support requirements](concepts-restore-points.md) and [limitations](virtual-machines-create-restore-points.md#limitations) before creating a restore point.

## Step 1: Create a VM restore point collection
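The article's steps use the Azure CLI; as a rough Azure PowerShell equivalent of this first step (the collection, VM, and restore point names are hypothetical):

```azurepowershell-interactive
# Create a restore point collection scoped to an existing VM,
# then create a restore point inside it
$vm = Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM"
New-AzRestorePointCollection `
   -ResourceGroupName "myResourceGroup" `
   -Name "myRpCollection" `
   -VmId $vm.Id `
   -Location $vm.Location
New-AzRestorePoint `
   -ResourceGroupName "myResourceGroup" `
   -RestorePointCollectionName "myRpCollection" `
   -Name "myRestorePoint"
```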
virtual-machines Disk Encryption Cli Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disk-encryption-cli-quickstart.md
The Azure CLI is used to create and manage Azure resources from the command line
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.30 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-machines Oracle Database Backup Azure Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-backup.md
This article demonstrates the use of Azure Backup to take disk snapshots of the
> * Restore and recover the database from a recovery point
> * Restore the VM from a recovery point

- To perform the backup and recovery process, you must first create a Linux VM that has an installed instance of Oracle Database 12.1 or higher.
virtual-machines Oracle Database Backup Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-database-backup-azure-storage.md
This article demonstrates the use of Azure Files as a medium to back up and restore an Oracle database running on an Azure VM. The steps in this article have been tested against Oracle 12.1 and higher. You back up the database with Oracle RMAN to an Azure file share mounted to the VM over the SMB protocol. Using Azure Files as backup media is cost effective and performant. However, for very large databases, Azure Backup provides a better solution.

- To perform the backup and recovery process, you must first create a Linux VM that has an installed instance of Oracle Database. We recommend using Oracle 12.x or higher.
virtual-network Create Peering Different Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/create-peering-different-subscriptions.md
This tutorial peers virtual networks in the same region. You can also peer virtu
- Each user must accept the guest user invitation from the opposite Azure Active Directory tenant. - This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Add Dual Stack Ipv6 Vm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/add-dual-stack-ipv6-vm-cli.md
In this article, you'll add IPv6 support to an existing virtual network. You'll
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Configure Routing Preference Virtual Machine Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/configure-routing-preference-virtual-machine-cli.md
In this tutorial, you learn how to:
> * Create a virtual machine.
> * Verify the public IP address is set to **Internet** routing preference.

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
- This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Create Custom Ip Address Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-custom-ip-address-prefix-cli.md
The steps in this article detail the process to:
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - This tutorial requires version 2.28 or later of the Azure CLI (you can run az version to determine which you have). If using Azure Cloud Shell, the latest version is already installed.
virtual-network Create Public Ip Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-cli.md
In this quickstart, you'll learn how to create an Azure public IP address. Publi
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Create Public Ip Prefix Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-public-ip-prefix-cli.md
When you create a public IP address resource, you can assign a static public IP
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Create Vm Dual Stack Ipv6 Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-cli.md
In this article, you'll create a virtual machine in Azure with the Azure CLI. Th
- An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Public Ip Upgrade Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/public-ip-upgrade-cli.md
In this article, you'll learn how to upgrade a static Basic SKU public IP addres
* An Azure account with an active subscription. [Create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). * A **static** basic SKU public IP address in your subscription. For more information, see [Create a basic public IP address using the Azure CLI](./create-public-ip-cli.md?tabs=create-public-ip-basic%2Ccreate-public-ip-zonal%2Crouting-preference#create-public-ip). - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Routing Preference Azure Kubernetes Service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-azure-kubernetes-service-cli.md
In this tutorial, you learn how to:
> * Create a public IP address with the **Internet** routing preference.
> * Create an Azure Kubernetes cluster with an **Internet** routing preference public IP.

- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
virtual-network Routing Preference Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/routing-preference-cli.md
By default, the routing preference for public IP address is set to the Microsoft
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.49 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Virtual Network Deploy Static Pip Arm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-deploy-static-pip-arm-cli.md
In this article, you'll create a VM with a static public IP address. A public IP
Public IP addresses have a [nominal charge](https://azure.microsoft.com/pricing/details/ip-addresses). There's a [limit](../../azure-resource-manager/management/azure-subscription-service-limits.md?toc=%2fazure%2fvirtual-network%2ftoc.json#azure-resource-manager-virtual-networking-limits) to the number of public IP addresses that you can use per subscription. - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Virtual Network Multiple Ip Addresses Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-network-multiple-ip-addresses-cli.md
This article explains how to add multiple IP addresses to a virtual machine usin
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Virtual Networks Static Private Ip Arm Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/ip-services/virtual-networks-static-private-ip-arm-cli.md
ms.devlang: azurecli
A virtual machine (VM) is automatically assigned a private IP address from a range that you specify. This range is based on the subnet in which the VM is deployed. The VM keeps the address until the VM is deleted. Azure dynamically assigns the next available private IP address from the subnet you create a VM in. Assign a static IP address to the VM if you want a specific IP address in the subnet. - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - This tutorial requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
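The article does this with the Azure CLI; as a hedged Azure PowerShell sketch of the same idea (the NIC name, resource group, and address are hypothetical):

```azurepowershell-interactive
# Pin a NIC's first IP configuration to a static private IP address
$nic = Get-AzNetworkInterface -Name "myNic" -ResourceGroupName "myResourceGroup"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
$nic.IpConfigurations[0].PrivateIpAddress = "10.0.0.4"
Set-AzNetworkInterface -NetworkInterface $nic
```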
virtual-network Manage Subnet Delegation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/manage-subnet-delegation.md
Subnet delegation gives explicit permissions to the service to create service-sp
- If you didn't create the subnet you would like to delegate to an Azure service, you need the following permission: `Microsoft.Network/virtualNetworks/subnets/write`. The built-in [Network Contributor](../role-based-access-control/built-in-roles.md?toc=%2fazure%2fvirtual-network%2ftoc.json#network-contributor) role also contains the necessary permissions. - This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/manage-nat-gateway.md
This article explains how to manage the following aspects of NAT gateway:
- The example nat gateway used in this article is named **myNATgateway**. - This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Quickstart Create Nat Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/quickstart-create-nat-gateway-cli.md
This quickstart shows you how to use the Azure Virtual Network NAT service. You'
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] [!INCLUDE [cli-launch-cloud-shell-sign-in.md](../../../includes/cli-launch-cloud-shell-sign-in.md)]
virtual-network Quick Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/quick-create-cli.md
In this quickstart, you learn how to create a virtual network. After creating a
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This quickstart requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Virtual Network Cli Sample Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-filter-network-traffic.md
This script sample creates a virtual network with front-end and back-end subnets
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack Standard Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack-standard-load-balancer.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application in A
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-network Virtual Network Cli Sample Ipv6 Dual Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-ipv6-dual-stack.md
This article shows you how to deploy a dual stack (IPv4 + IPv6) application in A
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-network Virtual Network Cli Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-multi-tier-application.md
This script sample creates a virtual network with front-end and back-end subnets
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-network Virtual Network Cli Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-peer-two-virtual-networks.md
This script sample creates and connects two virtual networks in the same region
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-network Virtual Network Cli Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/scripts/virtual-network-cli-sample-route-traffic-through-nva.md
This script sample creates a virtual network with front-end and back-end subnets
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] ## Sample script
virtual-network Tutorial Connect Virtual Networks Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-connect-virtual-networks-cli.md
You can connect virtual networks to each other with virtual network peering. Onc
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Tutorial Create Route Table Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-create-route-table-cli.md
Azure automatically routes traffic between all subnets within a virtual network,
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Tutorial Filter Network Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-filter-network-traffic-cli.md
You can filter network traffic inbound to and outbound from a virtual network su
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Tutorial Restrict Network Access To Resources Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/tutorial-restrict-network-access-to-resources-cli.md
Virtual network service endpoints enable you to limit network access to some Azu
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Virtual Network Network Interface https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-network-interface.md
If you need to add, change, or remove IP addresses for a network interface, see
- The example network interface name used in this article is **myNIC**. Replace the example value with the name of your network interface. - This how-to article requires version 2.31.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
virtual-network Virtual Network Service Endpoint Policies Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/virtual-network-service-endpoint-policies-cli.md
In this article, you learn how to:
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
vpn-gateway About Zone Redundant Vnet Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/about-zone-redundant-vnet-gateways.md
Title: 'Create a zone-redundant virtual network gateway in Azure availability zones'
-description: Learn how to deploy zone-redundant virtual network gateways in Azure availability zones.
+ Title: 'About zone-redundant virtual network gateway in Azure availability zones'
+description: Learn about zone-redundant virtual network gateways in Azure availability zones.
vpn-gateway Bgp How To Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/bgp-how-to-cli.md
Each part of this article helps you form a basic building block for enabling BGP
You can combine these sections to build a more complex multihop transit network that meets your needs. ## <a name ="enablebgp"></a>Enable BGP for the VPN gateway
vpn-gateway Create Routebased Vpn Gateway Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/create-routebased-vpn-gateway-cli.md
The steps in this article will create a VNet, a subnet, a gateway subnet, and a
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
vpn-gateway Vpn Gateway Howto Site To Site Resource Manager Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-site-to-site-resource-manager-cli.md
Verify that you have met the following criteria before beginning configuration:
* Make sure you have a compatible VPN device and someone who is able to configure it. For more information about compatible VPN devices and device configuration, see [About VPN Devices](vpn-gateway-about-vpn-devices.md).
* Verify that you have an externally facing public IPv4 address for your VPN device.
* If you are unfamiliar with the IP address ranges located in your on-premises network configuration, you need to coordinate with someone who can provide those details for you. When you create this configuration, you must specify the IP address range prefixes that Azure will route to your on-premises location. None of the subnets of your on-premises network can overlap with the virtual network subnets that you want to connect to.
* This article requires version 2.0 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.

### <a name="example"></a>Example values
web-application-firewall Tutorial Restrict Web Traffic Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/web-application-firewall/ag/tutorial-restrict-web-traffic-cli.md
If you prefer, you can complete this procedure using [Azure PowerShell](tutorial
[!INCLUDE [quickstarts-free-trial-note](../../../includes/quickstarts-free-trial-note.md)] - This article requires version 2.0.4 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.