Updates from: 07/14/2022 01:15:14
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory Howto Sspr Authenticationdata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-sspr-authenticationdata.md
Previously updated : 10/05/2020 Last updated : 07/12/2022
# Pre-populate user authentication contact information for Azure Active Directory self-service password reset (SSPR)
-To use Azure Active Directory (Azure AD) self-service password reset (SSPR), authentication contact information for a user must be present. Some organizations have users register their authentication data themselves. Other organizations prefer to synchronize from authentication data that already exists in Active Directory Domain Services (AD DS). This synchronized data is made available to Azure AD and SSPR without requiring user interaction. When users need to change or reset their password, they can do so even if they haven't previously registered their contact information.
+To use Azure Active Directory (Azure AD) self-service password reset (SSPR), authentication information for a user must be present. Most organizations have users register their authentication data themselves while collecting information for MFA. Some organizations prefer to bootstrap this process through synchronization of authentication data that already exists in Active Directory Domain Services (AD DS). This synchronized data is made available to Azure AD and SSPR without requiring user interaction. When users need to change or reset their password, they can do so even if they haven't previously registered their contact information.
You can pre-populate authentication contact information if you meet the following requirements:
The following fields can be set through PowerShell:
* Can only be set if you're not synchronizing with an on-premises directory.

> [!IMPORTANT]
-> There's a known lack of parity in command features between PowerShell v1 and PowerShell v2. The [Microsoft Graph REST API (beta) for authentication methods](/graph/api/resources/authenticationmethods-overview) is the current engineering focus to provide modern interaction.
+> Azure AD PowerShell is planned for deprecation. You can start using [Microsoft Graph PowerShell](/powershell/microsoftgraph/overview) to interact with Azure AD as you would in Azure AD PowerShell, or use the [Microsoft Graph REST API for managing authentication methods](/graph/api/resources/authenticationmethods-overview).
-### Use PowerShell version 1
+### Use Azure AD PowerShell version 1
To get started, [download and install the Azure AD PowerShell module](/previous-versions/azure/jj151815(v=azure.100)#bkmk_installmodule). After it's installed, use the following steps to configure each field.
-#### Set the authentication data with PowerShell version 1
+#### Set the authentication data with Azure AD PowerShell version 1
```PowerShell
Connect-MsolService

Set-MsolUser -UserPrincipalName user@domain.com -PhoneNumber "+1 4252345678"
Set-MsolUser -UserPrincipalName user@domain.com -AlternateEmailAddresses @("email@domain.com") -MobilePhone "+1 4251234567" -PhoneNumber "+1 4252345678"
```
-#### Read the authentication data with PowerShell version 1
+#### Read the authentication data with Azure AD PowerShell version 1
```PowerShell
Connect-MsolService

Get-MsolUser -UserPrincipalName user@domain.com | select -Expand StrongAuthenticationUserDetails | select PhoneNumber
Get-MsolUser -UserPrincipalName user@domain.com | select -Expand StrongAuthenticationUserDetails | select Email
```
-### Use PowerShell version 2
+### Use Azure AD PowerShell version 2
To get started, [download and install the Azure AD version 2 PowerShell module](/powershell/module/azuread/). To quickly install from recent versions of PowerShell that support `Install-Module`, run the following commands. The first line checks to see if the module is already installed:

```PowerShell
-Get-Module AzureADPreview
-Install-Module AzureADPreview
+Get-Module AzureAD
+Install-Module AzureAD
Connect-AzureAD
```

After the module is installed, use the following steps to configure each field.
-#### Set the authentication data with PowerShell version 2
+#### Set the authentication data with Azure AD PowerShell version 2
```PowerShell
Connect-AzureAD

Set-AzureADUser -ObjectId user@domain.com -TelephoneNumber "+1 4252345678"
Set-AzureADUser -ObjectId user@domain.com -OtherMails @("emails@domain.com") -Mobile "+1 4251234567" -TelephoneNumber "+1 4252345678"
```
-#### Read the authentication data with PowerShell version 2
+#### Read the authentication data with Azure AD PowerShell version 2
```PowerShell
Connect-AzureAD

Get-AzureADUser -ObjectID user@domain.com | select TelephoneNumber
Get-AzureADUser | select DisplayName,UserPrincipalName,otherMails,Mobile,TelephoneNumber | Format-Table
```
+### Use Microsoft Graph PowerShell
+
+To get started, [download and install the Microsoft Graph PowerShell module](/powershell/microsoftgraph/overview).
+
+To quickly install from recent versions of PowerShell that support `Install-Module`, run the following commands. The first line checks to see if the module is already installed:
+
+```PowerShell
+Get-Module Microsoft.Graph
+Install-Module Microsoft.Graph
+Select-MgProfile -Name "beta"
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+```
+
+After the module is installed, use the following steps to configure each field.
+
+#### Set the authentication data with Microsoft Graph PowerShell
+
+```PowerShell
+Connect-MgGraph -Scopes "User.ReadWrite.All"
+
+Update-MgUser -UserId 'user@domain.com' -otherMails @("emails@domain.com")
+Update-MgUser -UserId 'user@domain.com' -mobilePhone "+1 4251234567"
+Update-MgUser -UserId 'user@domain.com' -businessPhones "+1 4252345678"
+
+Update-MgUser -UserId 'user@domain.com' -otherMails @("emails@domain.com") -mobilePhone "+1 4251234567" -businessPhones "+1 4252345678"
+```
+
+#### Read the authentication data with Microsoft Graph PowerShell
+
+```PowerShell
+Connect-MgGraph -Scopes "User.Read.All"
+
+Get-MgUser -UserId 'user@domain.com' | select otherMails
+Get-MgUser -UserId 'user@domain.com' | select mobilePhone
+Get-MgUser -UserId 'user@domain.com' | select businessPhones
+
+Get-MgUser -UserId 'user@domain.com' | Select businessPhones, mobilePhone, otherMails | Format-Table
+```
+ ## Next steps

Once authentication contact information is pre-populated for users, complete the following tutorial to enable self-service password reset:
active-directory Plan Cloud Sync Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/plan-cloud-sync-topologies.md
This article describes various on-premises and Azure Active Directory (Azure AD)
> [!IMPORTANT]
> Microsoft doesn't support modifying or operating Azure AD Connect cloud sync outside of the configurations or actions that are formally documented. Any of these configurations or actions might result in an inconsistent or unsupported state of Azure AD Connect cloud sync. As a result, Microsoft can't provide technical support for such deployments.
-For more information see the following video.
+For more information, see the following video.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWJ8l5]

## Things to remember about all scenarios and topologies
-The following is a list of information to keep in mind when selecting a solution.
+Keep the following information in mind when selecting a solution.
- Users and groups must be uniquely identified across all forests
-- Matching across forests does not occur with cloud sync
+- Matching across forests doesn't occur with cloud sync
- A user or group must be represented only once across all forests
- The source anchor for objects is chosen automatically. It uses ms-DS-ConsistencyGuid if present, otherwise ObjectGUID is used.
-- You cannot change the attribute that is used for source anchor.
+- You can't change the attribute that is used for source anchor.
## Single forest, single Azure AD tenant

![Diagram that shows the topology for a single forest and a single tenant.](media/tutorial-single-forest/diagram-2.png)
The simplest topology is a single on-premises forest, with one or multiple domains, and a single Azure AD tenant.
## Multi-forest, single Azure AD tenant

![Topology for a multi-forest and a single tenant](media/plan-cloud-provisioning-topologies/multi-forest-2.png)
-A common topology is a multiple AD forests, with one or multiple domains, and a single Azure AD tenant.
+A common topology is multiple AD forests, with one or multiple domains, and a single Azure AD tenant.
## Existing forest with Azure AD Connect, new forest with cloud Provisioning

![Diagram that shows the topology for an existing forest and a new forest.](media/tutorial-existing-forest/existing-forest-new-forest-2.png)
The piloting scenario involves the existence of both Azure AD Connect and Azure
For an example of this scenario, see [Tutorial: Pilot Azure AD Connect cloud sync in an existing synced AD forest](tutorial-pilot-aadc-aadccp.md).
+## Merging objects from disconnected sources (Public Preview)
+![Diagram for merging objects from disconnected sources](media/plan-cloud-provisioning-topologies/attributes-multiple-sources.png)
+In this scenario, the attributes of a user are contributed to by two disconnected Active Directory forests.
+An example would be:
+
+ - one forest (1) contains most of the attributes
+ - a second forest (2) contains a few attributes
+
+ Since the second forest doesn't have network connectivity to the Azure AD Connect server, the object can't be merged through Azure AD Connect. Cloud Sync in the second forest allows the attribute value to be retrieved from the second forest. The value can then be merged with the object in Azure AD that is synced by Azure AD Connect.
+
+This configuration is advanced and there are a few caveats to this topology:
+
+ 1. You must use `msdsConsistencyGuid` as the source anchor in the Cloud Sync configuration.
+ 2. The `msdsConsistencyGuid` of the user object in the second forest must match that of the corresponding object in Azure AD.
+ 3. You must populate the `UserPrincipalName` attribute and the `Alias` attribute in the second forest and it must match the ones that are synced from the first forest.
 4. You must remove all attributes from the attribute mapping in the Cloud Sync configuration that don't have a value or may have a different value in the second forest; you can't have overlapping attribute mappings between the first forest and the second one.
+ 5. If there's no matching object in the first forest, for an object that is synced from the second forest, then Cloud Sync will still create the object in Azure AD. The object will only have the attributes that are defined in the mapping configuration of Cloud Sync for the second forest.
+ 6. If you delete the object from the second forest, it will be temporarily soft deleted in Azure AD. It will be restored automatically after the next Azure AD Connect sync cycle.
+ 7. If you delete the object from the first forest, it will be soft deleted from Azure AD. The object won't be restored unless a change is made to the object in the second forest. After 30 days the object will be hard deleted from Azure AD and if a change is made to the object in the second forest it will be created as a new object in Azure AD.
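
Caveat 4 above amounts to requiring that the two forests contribute disjoint attribute sets. As a rough illustration (in Python, with made-up attribute names and values; this is not how Cloud Sync is implemented internally), merging the two contributions can be checked like this:

```python
# Illustration only: the attribute mappings from the two forests must not
# overlap, so the merged object is simply the union of both contributions.
def merge_contributions(forest1: dict, forest2: dict) -> dict:
    overlap = forest1.keys() & forest2.keys()
    if overlap:
        raise ValueError(f"overlapping attribute mappings: {sorted(overlap)}")
    return {**forest1, **forest2}

merged = merge_contributions(
    {"displayName": "Ada Lovelace", "userPrincipalName": "ada@contoso.com"},
    {"employeeId": "E1234"},
)
print(sorted(merged))  # ['displayName', 'employeeId', 'userPrincipalName']
```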
+
+
## Next steps
active-directory Active Directory Configurable Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-configurable-token-lifetimes.md
Refresh and session token configuration are affected by the following properties
|Single-Factor Session Token Max Age |MaxAgeSessionSingleFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
|Multi-Factor Session Token Max Age |MaxAgeSessionMultiFactor |Session tokens (persistent and nonpersistent) |Until-revoked |
+Non-persistent session tokens have a Max Inactive Time of 24 hours, whereas persistent session tokens have a Max Inactive Time of 180 days. Any time the SSO session token is used within its validity period, the validity period is extended by another 24 hours or 180 days. If the SSO session token isn't used within its Max Inactive Time period, it's considered expired and is no longer accepted. Any changes to these default periods should be made by using [Conditional Access](../conditional-access/howto-conditional-access-session-lifetime.md).
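
The sliding-window behavior described above can be sketched in a few lines. This is an illustrative model only (with example timestamps), not the service's implementation; the constants come from the defaults stated in this section.

```python
from datetime import datetime, timedelta

# Illustrative model of the Max Inactive Time defaults described above;
# Azure AD evaluates this server-side, so this is only a sketch.
MAX_INACTIVE = {
    "nonpersistent": timedelta(hours=24),
    "persistent": timedelta(days=180),
}

def is_expired(last_used: datetime, now: datetime, kind: str) -> bool:
    """A session token expires once it has been idle longer than its
    Max Inactive Time; each use within the window resets it."""
    return now - last_used > MAX_INACTIVE[kind]

last_used = datetime(2022, 7, 1, 12, 0)
print(is_expired(last_used, datetime(2022, 7, 2, 11, 0), "nonpersistent"))  # False: used again within 24 hours
print(is_expired(last_used, datetime(2022, 7, 3, 12, 1), "nonpersistent"))  # True: idle longer than 24 hours
```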
+ You can use PowerShell to find the policies that will be affected by the retirement. Use the [PowerShell cmdlets](configure-token-lifetimes.md#get-started) to see all of the policies created in your organization, or to find which apps and service principals are linked to a specific policy.

## Policy evaluation and prioritization
active-directory App Objects And Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/app-objects-and-service-principals.md
A service principal must be created in each tenant where the application is used
### Consequences of modifying and deleting applications
-Any changes that you make to your application object are also reflected in its service principal object in the application's home tenant only (the tenant where it was registered). This means that deleting an application object will also delete its home tenant service principal object. However, restoring that application object will not restore its corresponding service principal. For multi-tenant applications, changes to the application object are not reflected in any consumer tenants' service principal objects until the access is removed through the [Application Access Panel](https://myapps.microsoft.com) and granted again.
+Any changes that you make to your application object are also reflected in its service principal object in the application's home tenant only (the tenant where it was registered). This means that deleting an application object will also delete its home tenant service principal object. However, restoring that application object through the app registrations UI won't restore its corresponding service principal. For more information on deletion and recovery of applications and their service principal objects, see [delete and recover applications and service principal objects](../manage-apps/recover-deleted-apps-faq.md).
## Example
active-directory Howto Remove App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-remove-app.md
In the following sections, you learn how to:
## Remove an application authored by you or your organization
-Applications that you or your organization have registered are represented by both an application object and service principal object in your tenant. For more information, see [Application Objects and Service Principal Objects](./app-objects-and-service-principals.md).
+Applications that you or your organization have registered are represented by both an application object and service principal object in your tenant. For more information, see [Application objects and service principal objects](./app-objects-and-service-principals.md).
> [!NOTE]
> Deleting an application will also delete its service principal object in the application's home directory. For multi-tenant applications, service principal objects in other directories will not be deleted.
To delete an application, be listed as an owner of the application or have admin
If you are viewing **App registrations** in the context of a tenant, a subset of the applications that appear under the **All apps** tab are from another tenant and were registered into your tenant during the consent process. More specifically, they are represented by only a service principal object in your tenant, with no corresponding application object. For more information on the differences between application and service principal objects, see [Application and service principal objects in Azure AD](./app-objects-and-service-principals.md).
-In order to remove an application's access to your directory (after having granted consent), the company administrator must remove its service principal. The administrator must have Global Admininstrator access, and can remove the application through the Azure portal or use the [Azure AD PowerShell Cmdlets](/previous-versions/azure/jj151815(v=azure.100)) to remove access.
+In order to remove an application's access to your directory (after having granted consent), the company administrator must remove its service principal. The administrator must have Global Administrator access, and can remove the application through the Azure portal or use the [Azure AD PowerShell Cmdlets](/previous-versions/azure/jj151815(v=azure.100)) to remove access.
## Next steps
active-directory Msal Net Token Cache Serialization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-net-token-cache-serialization.md
services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
options.DisableL1Cache = false; // Or limit the memory (by default, this is 500 MB)
- options.L1CacheOptions.SizeLimit = 1024 * 1024 * 1024, // 1 GB
+ options.L1CacheOptions.SizeLimit = 1024 * 1024 * 1024; // 1 GB
// You can choose if you encrypt or not encrypt the cache
 options.Encrypt = false;
services.Configure<MsalDistributedTokenCacheAdapterOptions>(options =>
// And you can set eviction policies for the distributed
 // cache.
 options.SlidingExpiration = TimeSpan.FromHours(1);
- }
+ });
// Then, choose your implementation of distributed cache // --
The following samples illustrate token cache serialization.
| | -- | -- |
|[active-directory-dotnet-desktop-msgraph-v2](https://github.com/azure-samples/active-directory-dotnet-desktop-msgraph-v2) | Desktop (WPF) | Windows Desktop .NET (WPF) application that calls the Microsoft Graph API. ![Diagram that shows a topology with a desktop app client flowing to Azure Active Directory by acquiring a token interactively and to Microsoft Graph.](media/msal-net-token-cache-serialization/topology.png)|
|[active-directory-dotnet-v1-to-v2](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2) | Desktop (console) | Set of Visual Studio solutions that illustrate the migration of Azure AD v1.0 applications (using ADAL.NET) to Microsoft identity platform applications (using MSAL.NET). In particular, see [Token cache migration](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/blob/master/TokenCacheMigration/README.md) and [Confidential client token cache](https://github.com/Azure-Samples/active-directory-dotnet-v1-to-v2/tree/master/ConfidentialClientTokenCache). |
-[ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | ASP.NET (net472) | Example of token cache serialization in an ASP.NET MVC application (using MSAL.NET). In particular, see [MsalAppBuilder](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs).
+[ms-identity-aspnet-webapp-openidconnect](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect) | ASP.NET (net472) | Example of token cache serialization in an ASP.NET MVC application (using MSAL.NET). In particular, see [MsalAppBuilder](https://github.com/Azure-Samples/ms-identity-aspnet-webapp-openidconnect/blob/master/WebApp/Utils/MsalAppBuilder.cs).
active-directory V2 Oauth2 Client Creds Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-oauth2-client-creds-grant-flow.md
You can use the OAuth 2.0 client credentials grant specified in [RFC 6749](https://tools.ietf.org/html/rfc6749#section-4.4), sometimes called *two-legged OAuth*, to access web-hosted resources by using the identity of an application. This type of grant is commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. These types of applications are often referred to as *daemons* or *service accounts*.
-This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md).
+This article describes how to program directly against the protocol in your application. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL) instead to [acquire tokens and call secured web APIs](authentication-flows-app-scenarios.md#scenarios-and-supported-authentication-flows). Also take a look at the [sample apps that use MSAL](sample-v2-code.md). As a side note, refresh tokens will never be granted with this flow as `client_id` and `client_secret` (which would be required to obtain a refresh token) can be used to obtain an access token instead.
The OAuth 2.0 client credentials grant flow permits a web service (confidential client) to use its own credentials, instead of impersonating a user, to authenticate when calling another web service. For a higher level of assurance, the Microsoft identity platform also allows the calling service to authenticate using a [certificate](#second-case-access-token-request-with-a-certificate) or federated credential instead of a shared secret. Because the application's own credentials are being used, these credentials must be kept safe - _never_ publish that credential in your source code, embed it in web pages, or use it in a widely distributed native application.
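
As a rough sketch of what a direct token request looks like, the following builds the form-encoded body for the v2.0 token endpoint; the tenant, client ID, and secret values are placeholders, and in practice MSAL handles this for you.

```python
from urllib.parse import urlencode

# Placeholder values; a real app reads these from configuration and
# never embeds the secret in source code.
tenant = "contoso.onmicrosoft.com"
token_endpoint = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"

# Form-encoded body for the shared-secret variant of the flow.
body = urlencode({
    "client_id": "00000000-0000-0000-0000-000000000000",
    "scope": "https://graph.microsoft.com/.default",
    "client_secret": "<placeholder-secret>",
    "grant_type": "client_credentials",
})
```

POSTing `body` to `token_endpoint` with a `Content-Type` of `application/x-www-form-urlencoded` returns an access token; the certificate and federated-credential variants replace `client_secret` with a client assertion.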
active-directory Workload Identity Federation Create Trust Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-github.md
Anyone with permissions to create an app registration and add a secret or certif
After you configure your app to trust a GitHub repo, [configure your GitHub Actions workflow](/azure/developer/github/connect-from-azure) to get an access token from Microsoft identity provider and access Azure AD protected resources. ## Prerequisites
-[Create an app registration](quickstart-register-app.md) in Azure AD. Grant your app access to the Azure resources targeted by your GitHub workflow.
+
+[Create an app registration](quickstart-register-app.md) in Azure AD. [Grant your app access to the Azure resources](howto-create-service-principal-portal.md) targeted by your GitHub workflow.
Find the object ID of the app (not the application (client) ID), which you need in the following steps. You can find the object ID of the app in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal and select your app registration. In **Overview**->**Essentials**, find the **Object ID**.
Get the organization, repository, and environment information for your GitHub re
# [Azure portal](#tab/azure-portal)
-Sign in to the [Azure portal](https://portal.azure.com/). Go to **App registrations** and open the app you want to configure.
+Sign into the [Azure portal](https://portal.azure.com/). Go to **App registrations** and open the app you want to configure.
Go to **Certificates and secrets**. In the **Federated credentials** tab, select **Add credential**. The **Add a credential** blade opens.
Specify an **Entity type** of **Tag** and a **GitHub tag name** of "v2".
For a workflow triggered by a pull request event, specify an **Entity type** of **Pull request**.

# [Microsoft Graph](#tab/microsoft-graph)
+
Launch [Azure Cloud Shell](https://portal.azure.com/#cloudshell/) and sign in to your tenant.

### Create a federated identity credential
az rest --method POST --uri 'https://graph.microsoft.com/beta/applications/f6475
```
And you get the response:
+
```azurecli
{
  "@odata.context": "https://graph.microsoft.com/beta/$metadata#applications('f6475511-fd81-4965-a00e-41e7792b7b9c')/federatedIdentityCredentials/$entity",
And you get the response:
*issuer*: The path to the GitHub OIDC provider: `https://token.actions.githubusercontent.com/`. This issuer will become trusted by your Azure application.

*subject*: Before Azure will grant an access token, the request must match the conditions defined here.
+
- For Jobs tied to an environment: `repo:< Organization/Repository >:environment:< Name >`
- For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`.
- For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull-request`.
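
To make the subject formats above concrete, here's a small hypothetical helper (not part of the article or any SDK) that assembles the subject string for each scenario:

```python
# Hypothetical helper for illustration only; the returned strings follow
# the subject formats documented for GitHub federated credentials.
def build_subject(org_repo: str, *, environment: str = None,
                  ref: str = None, pull_request: bool = False) -> str:
    if environment is not None:
        return f"repo:{org_repo}:environment:{environment}"
    if pull_request:
        return f"repo:{org_repo}:pull-request"
    if ref is not None:
        return f"repo:{org_repo}:ref:{ref}"
    raise ValueError("specify environment, ref, or pull_request")

print(build_subject("contoso/node_express", environment="Production"))
# repo:contoso/node_express:environment:Production
print(build_subject("contoso/node_express", ref="refs/heads/my-branch"))
# repo:contoso/node_express:ref:refs/heads/my-branch
```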
Run the following command to [delete a federated identity credential](/graph/api
```azurecli
az rest -m DELETE -u 'https://graph.microsoft.com/beta/applications/f6475511-fd81-4965-a00e-41e7792b7b9c/federatedIdentityCredentials/1aa3e6a7-464c-4cd2-88d3-90db98132755'
```
+
## Get the application (client) ID and tenant ID from the Azure portal
az rest -m DELETE -u 'https://graph.microsoft.com/beta/applications/f6475511-fd
Before configuring your GitHub Actions workflow, get the *tenant-id* and *client-id* values of your app registration. You can find these values in the Azure portal. Go to the list of [registered applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) and select your app registration. In **Overview**->**Essentials**, find the **Application (client) ID** and **Directory (tenant) ID**. Set these values in your GitHub environment to use in the Azure login action for your workflow.

## Next steps
+
For an end-to-end example, read [Deploy to App Service using GitHub Actions](../../app-service/deploy-github-actions.md?tabs=openid). Read the [GitHub Actions documentation](https://docs.github.com/actions/deployment/security-hardening-your-deployments/configuring-openid-connect-in-azure) to learn more about configuring your GitHub Actions workflow to get an access token from Microsoft identity provider and access Azure resources.
active-directory Device Management Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/device-management-azure-portal.md
You must be assigned one of the following roles to view or manage device setting
- **Users may join devices to Azure AD**: This setting enables you to select the users who can register their devices as Azure AD joined devices. The default is **All**.

> [!NOTE]
- > The **Users may join devices to Azure AD** setting is applicable only to Azure AD join on Windows 10 or newer. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-for-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying) because these methods work in a userless context.
+ > The **Users may join devices to Azure AD** setting is applicable only to Azure AD join on Windows 10 or newer. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying) because these methods work in a userless context.
- **Additional local administrators on Azure AD joined devices**: This setting allows you to select the users who are granted local administrator rights on a device. These users are added to the Device Administrators role in Azure AD. Global Administrators in Azure AD and device owners are granted local administrator rights by default. This option is a premium edition capability available through products like Azure AD Premium and Enterprise Mobility + Security.
This option is a premium edition capability available through products like Azur
- **Require Multi-Factor Authentication to register or join devices with Azure AD**: This setting allows you to specify whether users are required to provide another authentication factor to join or register their devices to Azure AD. The default is **No**. We recommend that you require multifactor authentication when a device is registered or joined. Before you enable multifactor authentication for this service, you must ensure that multifactor authentication is configured for users that register their devices. For more information on Azure AD Multi-Factor Authentication services, see [getting started with Azure AD Multi-Factor Authentication](../authentication/concept-mfa-howitworks.md). This setting may not work with third-party identity providers.

> [!NOTE]
- > The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enabling-azure-ad-login-for-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
+ > The **Require Multi-Factor Authentication to register or join devices with Azure AD** setting applies to devices that are either Azure AD joined (with some exceptions) or Azure AD registered. This setting doesn't apply to hybrid Azure AD joined devices, [Azure AD joined VMs in Azure](./howto-vm-sign-in-azure-ad-windows.md#enable-azure-ad-login-for-a-windows-vm-in-azure), or Azure AD joined devices that use [Windows Autopilot self-deployment mode](/mem/autopilot/self-deploying).
> [!IMPORTANT]
> - We recommend that you use the [Register or join devices user](../conditional-access/concept-conditional-access-cloud-apps.md#user-actions) action in Conditional Access to enforce multifactor authentication for joining or registering a device.
active-directory Howto Vm Sign In Azure Ad Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-linux.md
Title: Login to Linux virtual machine in Azure using Azure Active Directory and openSSH certificate-based authentication
-description: Login with Azure AD using openSSH certificate-based authentication to an Azure VM running Linux
+ Title: Log in to a Linux virtual machine in Azure by using Azure AD and OpenSSH
+description: Learn how to log in to an Azure VM that's running Linux by using Azure Active Directory and OpenSSH certificate-based authentication.
-# Login to a Linux virtual machine in Azure with Azure Active Directory using openSSH certificate-based authentication
+# Log in to a Linux virtual machine in Azure by using Azure AD and OpenSSH
-To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform and a certificate authority to SSH into a Linux VM using Azure AD and openSSH certificate-based authentication. This functionality allows organizations to manage access to VMs with Azure role-based access control (RBAC) and Conditional Access policies. This article shows you how to create and configure a Linux VM and login with Azure AD using openSSH certificate-based authentication.
+To improve the security of Linux virtual machines (VMs) in Azure, you can integrate with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform and a certificate authority to SSH into a Linux VM by using Azure AD and OpenSSH certificate-based authentication. This functionality allows organizations to manage access to VMs with Azure role-based access control (RBAC) and Conditional Access policies.
+
+This article shows you how to create and configure a Linux VM and log in with Azure AD by using OpenSSH certificate-based authentication.
> [!IMPORTANT]
-> This capability is now generally available! [The previous version that made use of device code flow was deprecated August 15, 2021](../../virtual-machines/linux/login-using-aad.md). To migrate from the old version to this version, see the section, [Migration from previous preview](#migration-from-previous-preview).
+> This capability is now generally available. The previous version that made use of device code flow was [deprecated on August 15, 2021](../../virtual-machines/linux/login-using-aad.md). To migrate from the old version to this version, see the section [Migrate from the previous (preview) version](#migrate-from-the-previous-preview-version).
-There are many security benefits of using Azure AD with openSSH certificate-based authentication to log in to Linux VMs in Azure, including:
+There are many security benefits of using Azure AD with OpenSSH certificate-based authentication to log in to Linux VMs in Azure. They include:
- Use your Azure AD credentials to log in to Azure Linux VMs.
-- Get SSH key based authentication without needing to distribute SSH keys to users or provision SSH public keys on any Azure Linux VMs you deploy. This experience is much simpler than having to worry about sprawl of stale SSH public keys that could cause unauthorized access.
+- Get SSH key-based authentication without needing to distribute SSH keys to users or provision SSH public keys on any Azure Linux VMs that you deploy. This experience is much simpler than having to worry about sprawl of stale SSH public keys that could cause unauthorized access.
- Reduce reliance on local administrator accounts, credential theft, and weak credentials.
-- Password complexity and password lifetime policies configured for Azure AD help secure Linux VMs as well.
-- With Azure role-based access control, specify who can login to a VM as a regular user or with administrator privileges. When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.
-- With Conditional Access, configure policies to require multi-factor authentication and or require the client device you're using to SSH be a managed device (for example: compliant device or hybrid Azure AD joined) before you can SSH to Linux VMs.
-- Use Azure deploy and audit policies to require Azure AD login for Linux VMs and flag non-approved local accounts.
-- Login to Linux VMs with Azure Active Directory also works for customers that use Federation Services.
+- Help secure Linux VMs by configuring password complexity and password lifetime policies for Azure AD.
+- With RBAC, specify who can log in to a VM as a regular user or with administrator privileges. When users join your team, you can update the Azure RBAC policy for the VM to grant access as appropriate. When employees leave your organization and their user accounts are disabled or removed from Azure AD, they no longer have access to your resources.
+- With Conditional Access, configure policies to require multifactor authentication or to require that your client device is managed (for example, compliant or hybrid Azure AD joined) before you can use it to SSH into Linux VMs.
+- Use Azure deploy and audit policies to require Azure AD login for Linux VMs and flag unapproved local accounts.
+
+Logging in to Linux VMs with Azure Active Directory also works for customers who use Active Directory Federation Services.
## Supported Linux distributions and Azure regions
-The following Linux distributions are currently supported during the preview of this feature when deployed in a supported region:
+The following Linux distributions are currently supported for deployments in a supported region:
| Distribution | Version |
| --- | --- |
The following Azure regions are currently supported for this feature:
- Azure Government
- Azure China 21Vianet
-It's not supported to use this extension on Azure Kubernetes Service (AKS) clusters. For more information, see [Support policies for AKS](../../aks/support-policies.md).
+Use of the SSH extension for Azure CLI on Azure Kubernetes Service (AKS) clusters is not supported. For more information, see [Support policies for AKS](../../aks/support-policies.md).
-If you choose to install and use the CLI locally, you must be running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli).
+If you choose to install and use the Azure CLI locally, it must be version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
> [!NOTE]
> This functionality is also available for [Azure Arc-enabled servers](../../azure-arc/servers/ssh-arc-overview.md).
-## Requirements for login with Azure AD using openSSH certificate-based authentication
+## Meet requirements for login with Azure AD using OpenSSH certificate-based authentication
-To enable Azure AD login using SSH certificate-based authentication for Linux VMs in Azure, ensure the following network, virtual machine, and client (ssh client) requirements are met.
+To enable Azure AD login through SSH certificate-based authentication for Linux VMs in Azure, be sure to meet the following network, virtual machine, and client (SSH client) requirements.
### Network
-VM network configuration must permit outbound access to the following endpoints over TCP port 443:
+VM network configuration must permit outbound access to the following endpoints over TCP port 443.
-For Azure Global
+Azure Global:
-- `https://packages.microsoft.com` – For package installation and upgrades.
-- `http://169.254.169.254` – Azure Instance Metadata Service endpoint.
-- `https://login.microsoftonline.com` – For PAM (pluggable authentication modules) based authentication flows.
-- `https://pas.windows.net` – For Azure RBAC flows.
+- `https://packages.microsoft.com`: For package installation and upgrades.
+- `http://169.254.169.254`: Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.com`: For PAM-based (pluggable authentication modules) authentication flows.
+- `https://pas.windows.net`: For Azure RBAC flows.
-For Azure Government
+Azure Government:
-- `https://packages.microsoft.com` – For package installation and upgrades.
-- `http://169.254.169.254` – Azure Instance Metadata Service endpoint.
-- `https://login.microsoftonline.us` – For PAM (pluggable authentication modules) based authentication flows.
-- `https://pasff.usgovcloudapi.net` – For Azure RBAC flows.
+- `https://packages.microsoft.com`: For package installation and upgrades.
+- `http://169.254.169.254`: Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.us`: For PAM-based authentication flows.
+- `https://pasff.usgovcloudapi.net`: For Azure RBAC flows.
-For Azure China 21Vianet
+Azure China 21Vianet:
-- `https://packages.microsoft.com` – For package installation and upgrades.
-- `http://169.254.169.254` – Azure Instance Metadata Service endpoint.
-- `https://login.chinacloudapi.cn` – For PAM (pluggable authentication modules) based authentication flows.
-- `https://pas.chinacloudapi.cn` – For Azure RBAC flows.
+- `https://packages.microsoft.com`: For package installation and upgrades.
+- `http://169.254.169.254`: Azure Instance Metadata Service endpoint.
+- `https://login.chinacloudapi.cn`: For PAM-based authentication flows.
+- `https://pas.chinacloudapi.cn`: For Azure RBAC flows.
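As a quick sanity check of the network requirements above, you can probe outbound connectivity from inside the VM. This is a sketch for the Azure Global endpoint set; note that the Instance Metadata Service requires the `Metadata: true` header and answers only from within an Azure VM.

```shell
# Probe the HTTPS endpoints required for Azure AD login (Azure Global).
# A status code of 000 means the endpoint could not be reached.
for url in https://packages.microsoft.com https://login.microsoftonline.com https://pas.windows.net; do
  code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url")
  echo "$url -> $code"
done

# The Instance Metadata Service endpoint; requires the Metadata header.
curl -s -H 'Metadata: true' \
  'http://169.254.169.254/metadata/instance?api-version=2021-02-01' | head -c 200
```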
### Virtual machine
-Ensure your VM is configured with the following functionality:
+Ensure that your VM is configured with the following functionality:
-- System assigned managed identity. This option gets automatically selected when you use Azure portal to create VM and select Azure AD login option. You can also enable system-assigned managed identity on a new or an existing VM using the Azure CLI.
-- `aadsshlogin` and `aadsshlogin-selinux` (as appropriate). These packages get installed with the AADSSHLoginForLinux VM extension. The extension is installed when you use Azure portal to create VM and enable Azure AD login (Management tab) or via the Azure CLI.
+- System-assigned managed identity. This option is automatically selected when you use the Azure portal to create VMs and select the Azure AD login option. You can also enable system-assigned managed identity on a new or existing VM by using the Azure CLI.
+- `aadsshlogin` and `aadsshlogin-selinux` (as appropriate). These packages are installed with the AADSSHLoginForLinux VM extension. The extension is installed when you use the Azure portal or the Azure CLI to create VMs and enable Azure AD login (**Management** tab).
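For an existing VM, both requirements can be met from the Azure CLI. The following is a sketch; the resource group and VM names are placeholders.

```shell
# Enable a system-assigned managed identity on an existing VM.
az vm identity assign --resource-group myResourceGroup --name myVM

# Install the Azure AD login extension, which brings in the
# aadsshlogin packages on the VM.
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADSSHLoginForLinux \
    --resource-group myResourceGroup \
    --vm-name myVM
```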
### Client
-Ensure your client meets the following requirements:
+Ensure that your client meets the following requirements:
-- SSH client must support OpenSSH based certificates for authentication. You can use Azure CLI (2.21.1 or higher) with OpenSSH (included in Windows 10 version 1803 or higher) or Azure Cloud Shell to meet this requirement.
-- SSH extension for Azure CLI. You can install this using `az extension add --name ssh`. You don't need to install this extension when using Azure Cloud Shell as it comes pre-installed.
-- If you're using any SSH client other than Azure CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use Azure CLI with SSH extension to retrieve ephemeral SSH cert and optionally a config file and then use the config file with your SSH client.
-- TCP connectivity from the client to either the public or private IP of the VM (ProxyCommand or SSH forwarding to a machine with connectivity also works).
+- SSH client support for OpenSSH-based certificates for authentication. You can use Azure CLI (2.21.1 or later) with OpenSSH (included in Windows 10 version 1803 or later) or Azure Cloud Shell to meet this requirement.
+- SSH extension for Azure CLI. You can install this extension by using `az extension add --name ssh`. You don't need to install this extension when you're using Azure Cloud Shell, because it comes preinstalled.
+
+ If you're using any SSH client other than the Azure CLI or Azure Cloud Shell that supports OpenSSH certificates, you'll still need to use the Azure CLI with the SSH extension to retrieve ephemeral SSH certificates and optionally a configuration file. You can then use the configuration file with your SSH client.
+- TCP connectivity from the client to either the public or private IP address of the VM. (ProxyCommand or SSH forwarding to a machine with connectivity also works.)
> [!IMPORTANT]
-> SSH clients based on PuTTy do not support openSSH certificates and cannot be used to login with Azure AD openSSH certificate-based authentication.
+> SSH clients based on PuTTY don't support OpenSSH certificates and can't be used to log in with Azure AD OpenSSH certificate-based authentication.
+
+## Enable Azure AD login for a Linux VM in Azure
-## Enabling Azure AD login for Linux VM in Azure
+To use Azure AD login for a Linux VM in Azure, you need to first enable the Azure AD login option for your Linux VM. You then configure Azure role assignments for users who are authorized to log in to the VM. Finally, you use the SSH client that supports OpenSSH, such as the Azure CLI or Azure Cloud Shell, to SSH into your Linux VM.
-To use Azure AD login for Linux VM in Azure, you need to first enable Azure AD login option for your Linux VM, configure Azure role assignments for users who are authorized to login to the VM and then use SSH client that supports OpensSSH such as Azure CLI or Azure Cloud Shell to SSH to your Linux VM. There are multiple ways you can enable Azure AD login for your Linux VM, as an example you can use:
+There are two ways to enable Azure AD login for your Linux VM:
-- Azure portal experience when creating a Linux VM
-- Azure Cloud Shell experience when creating a Windows VM or for an existing Linux VM
+- The Azure portal experience when you're creating a Linux VM
+- The Azure Cloud Shell experience when you're creating a Linux VM or using an existing one
-### Using Azure portal create VM experience to enable Azure AD login
+### Azure portal
-You can enable Azure AD login for any of the [supported Linux distributions mentioned](#supported-linux-distributions-and-azure-regions) using the Azure portal.
+You can enable Azure AD login for any of the [supported Linux distributions](#supported-linux-distributions-and-azure-regions) by using the Azure portal.
-As an example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Azure with Azure AD logon:
+For example, to create an Ubuntu Server 18.04 Long Term Support (LTS) VM in Azure with Azure AD login:
-1. Sign in to the Azure portal, with an account that has access to create VMs, and select **+ Create a resource**.
-1. Click on **Create** under **Ubuntu Server 18.04 LTS** in the **Popular** view.
-1. On the **Management** tab,
- 1. Check the box to enable **Login with Azure Active Directory (Preview)**.
- 1. Ensure **System assigned managed identity** is checked.
-1. Go through the rest of the experience of creating a virtual machine. During this preview, you'll have to create an administrator account with username and password or SSH public key.
+1. Sign in to the Azure portal by using an account that has access to create VMs, and then select **+ Create a resource**.
+1. Select **Create** under **Ubuntu Server 18.04 LTS** in the **Popular** view.
+1. On the **Management** tab:
+ 1. Select the **Login with Azure Active Directory** checkbox.
+ 1. Ensure that the **System assigned managed identity** checkbox is selected.
+1. Go through the rest of the experience of creating a virtual machine. You'll have to create an administrator account with username and password or SSH public key.
-### Using the Azure Cloud Shell experience to enable Azure AD login
+### Azure Cloud Shell
-Azure Cloud Shell is a free, interactive shell that you can use to run the steps in this article. Common Azure tools are preinstalled and configured in Cloud Shell for you to use with your account. Just select the Copy button to copy the code, paste it in Cloud Shell, and then press Enter to run it. There are a few ways to open Cloud Shell:
+Azure Cloud Shell is a free, interactive shell that you can use to run the steps in this article. Common Azure tools are preinstalled and configured in Cloud Shell for you to use with your account. Just select the **Copy** button to copy the code, paste it in Cloud Shell, and then select the Enter key to run it.
-- Select Try It in the upper-right corner of a code block.
+There are a few ways to open Cloud Shell:
+
+- Select **Try It** in the upper-right corner of a code block.
- Open Cloud Shell in your browser.
- Select the Cloud Shell button on the menu in the upper-right corner of the Azure portal.
-If you choose to install and use the CLI locally, this article requires that you're running the Azure CLI version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see the article Install Azure CLI.
-
-1. Create a resource group with [az group create](/cli/azure/group#az-group-create).
-1. Create a VM with [az vm create](/cli/azure/vm#az-vm-create&preserve-view=true) using a supported distribution in a supported region.
-1. Install the Azure AD login VM extension with [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set).
+If you choose to install and use the Azure CLI locally, this article requires you to use version 2.22.1 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install the Azure CLI](/cli/azure/install-azure-cli).
-The following example deploys a VM and then installs the extension to enable Azure AD login for Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines.
+1. Create a resource group by running [az group create](/cli/azure/group#az-group-create).
+1. Create a VM by running [az vm create](/cli/azure/vm#az-vm-create&preserve-view=true). Use a supported distribution in a supported region.
+1. Install the Azure AD login VM extension by using [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set).
-The example can be customized to support your testing requirements as needed.
+The following example deploys a VM and then installs the extension to enable Azure AD login for a Linux VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. Customize the example as needed to support your testing requirements.
```azurecli-interactive
az group create --name AzureADLinuxVM --location southcentralus

az vm extension set \
```

It takes a few minutes to create the VM and supporting resources.
-The AADSSHLoginForLinux extension can be installed on an existing (supported distribution) Linux VM with a running VM agent to enable Azure AD authentication. If deploying this extension to a previously created VM, the VM must have at least 1 GB of memory allocated or the install will fail.
+The AADSSHLoginForLinux extension can be installed on an existing (supported distribution) Linux VM with a running VM agent to enable Azure AD authentication. If you're deploying this extension to a previously created VM, the VM must have at least 1 GB of memory allocated or the installation will fail.
-The provisioningState of Succeeded is shown once the extension is successfully installed on the VM. The VM must have a running [VM agent](../../virtual-machines/extensions/agent-linux.md) to install the extension.
+The `provisioningState` value of `Succeeded` appears when the extension is successfully installed on the VM. The VM must have a running [VM agent](../../virtual-machines/extensions/agent-linux.md) to install the extension.
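You can confirm the installation state from the CLI. This is a sketch; the resource group and VM names are placeholders.

```shell
# Query the extension's provisioning state on the VM.
# A value of "Succeeded" means the extension installed correctly.
az vm extension show \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --name AADSSHLoginForLinux \
    --query provisioningState \
    --output tsv
```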
## Configure role assignments for the VM
-Now that you've created the VM, you need to configure Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
+Now that you've created the VM, you need to configure an Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
-- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with administrator privileges.
-- **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.
+- **Virtual Machine Administrator Login**: Users who have this role assigned can log in to an Azure virtual machine with administrator privileges.
+- **Virtual Machine User Login**: Users who have this role assigned can log in to an Azure virtual machine with regular user privileges.
-To log in to a VM over SSH, you must have the Virtual Machine Administrator Login or Virtual Machine User Login role to the Resource Group containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resources. An Azure user with the Owner or Contributor roles assigned for a VM doesn't automatically have privileges to Azure AD login to the VM over SSH. This separation is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
+To allow a user to log in to a VM over SSH, you must assign the Virtual Machine Administrator Login or Virtual Machine User Login role on the resource group that contains the VM and its associated virtual network, network interface, public IP address, or load balancer resources.
-There are multiple ways you can configure role assignments for VM, as an example you can use:
+An Azure user who has the Owner or Contributor role assigned for a VM doesn't automatically have privileges to Azure AD login to the VM over SSH. There's an intentional (and audited) separation between the set of people who control virtual machines and the set of people who can access virtual machines.
-- Azure AD Portal experience
+There are two ways to configure role assignments for a VM:
+
+- Azure AD portal experience
- Azure Cloud Shell experience

> [!NOTE]
-> The Virtual Machine Administrator Login and Virtual Machine User Login roles use dataActions and can be assigned at the management group, subscription, resource group, or resource scope. It is recommended that the roles be assigned at the management group, subscription or resource level and not at the individual VM level to avoid risk of running out of [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit) per subscription.
-### Using Azure AD Portal experience
+> The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions` and can be assigned at the management group, subscription, resource group, or resource scope. We recommend that you assign the roles at the management group, subscription, or resource level and not at the individual VM level. This practice avoids the risk of reaching the [Azure role assignments limit](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit) per subscription.
+
+### Azure AD portal
-To configure role assignments for your Azure AD enabled Linux VMs:
+To configure role assignments for your Azure AD-enabled Linux VMs:
-1. Select the **Resource Group** containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resource.
+1. For **Resource Group**, select the resource group that contains the VM and its associated virtual network, network interface, public IP address, or load balancer resource.
1. Select **Access control (IAM)**.
-1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. Assign the following role. For detailed steps, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
    | Setting | Value |
    | --- | --- |
    | Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
    | Assign access to | User, group, service principal, or managed identity |
- ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
After a few moments, the security principal is assigned the role at the selected scope.
-### Using the Azure Cloud Shell experience
+### Azure Cloud Shell
-The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. The username of your current Azure account is obtained with [az account show](/cli/azure/account#az-account-show), and the scope is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az-vm-show). The scope could also be assigned at a resource group or subscription level, normal Azure RBAC inheritance permissions apply.
+The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. You obtain the username of your current Azure account by using [az account show](/cli/azure/account#az-account-show), and you set the scope to the VM created in a previous step by using [az vm show](/cli/azure/vm#az-vm-show).
+
+You can also assign the scope at a resource group or subscription level. Normal Azure RBAC inheritance permissions apply.
```azurecli-interactive
username=$(az account show --query user.name --output tsv)

az role assignment create \
```

> [!NOTE]
-> If your Azure AD domain and logon username domain do not match, you must specify the object ID of your user account with the `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az-ad-user-list).
-For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the article [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
+> If your Azure AD domain and login username domain don't match, you must specify the object ID of your user account by using `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account by using [az ad user list](/cli/azure/ad/user#az-ad-user-list).
+
+For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
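To verify that the assignment took effect, you can list role assignments at the VM scope. This is a sketch; the resource group and VM names are placeholders.

```shell
# Resolve the current user and the VM's resource ID.
username=$(az account show --query user.name --output tsv)
vm=$(az vm show --resource-group AzureADLinuxVM --name myVM --query id --output tsv)

# List the role names assigned to that user at the VM scope.
az role assignment list \
    --assignee "$username" \
    --scope "$vm" \
    --query "[].roleDefinitionName" \
    --output tsv
```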
-## Install SSH extension for Azure CLI
+## Install the SSH extension for Azure CLI
-If you're using Azure Cloud Shell, then no other setup is needed as both the minimum required version of Azure CLI and SSH extension for Azure CLI are already included in the Cloud Shell environment.
+If you're using Azure Cloud Shell, no other setup is needed because both the minimum required version of the Azure CLI and the SSH extension for Azure CLI are already included in the Cloud Shell environment.
-Run the following command to add SSH extension for Azure CLI
+Run the following command to add the SSH extension for Azure CLI:
```azurecli
az extension add --name ssh
```
-The minimum version required for the extension is 0.1.4. Check the installed SSH extension version with the following command.
+The minimum version required for the extension is 0.1.4. Check the installed version by using the following command:
```azurecli
az extension show --name ssh
```
-## Using Conditional Access
+## Enforce Conditional Access policies
+
+You can enforce Conditional Access policies that are enabled with Azure AD login, such as:
+
+- Requiring multifactor authentication.
+- Requiring a compliant or hybrid Azure AD-joined device for the device running the SSH client.
+- Checking for risks before authorizing access to Linux VMs in Azure.
-You can enforce Conditional Access policies such as require multi-factor authentication, require compliant or hybrid Azure AD joined device for the device running SSH client, and checking for risk before authorizing access to Linux VMs in Azure that are enabled with Azure AD login. The application that appears in Conditional Access policy is called "Azure Linux VM Sign-In".
+The application that appears in the Conditional Access policy is called *Azure Linux VM Sign-In*.
> [!NOTE]
-> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join on the client device running SSH client only works with Azure CLI running on Windows and macOS. It is not supported when using Azure CLI on Linux or Azure Cloud Shell.
+> Conditional Access policy enforcement that requires device compliance or hybrid Azure AD join on the device that's running the SSH client works only with the Azure CLI that's running on Windows and macOS. It's not supported when you're using the Azure CLI on Linux or Azure Cloud Shell.
### Missing application
-If the Azure Linux VM Sign-In application is missing from Conditional Access, use the following steps to remediate the issue:
+If the Azure Linux VM Sign-In application is missing from Conditional Access, make sure the application isn't in the tenant:
-1. Check to make sure the application isn't in the tenant by:
- 1. Sign in to the **Azure portal**.
- 1. Browse to **Azure Active Directory** > **Enterprise applications**
- 1. Remove the filters to see all applications, and search for "VM". If you don't see Azure Linux VM Sign-In as a result, the service principal is missing from the tenant.
+1. Sign in to the Azure portal.
+1. Browse to **Azure Active Directory** > **Enterprise applications**.
+1. Remove the filters to see all applications, and search for **VM**. If you don't see Azure Linux VM Sign-In as a result, the service principal is missing from the tenant.
Another way to verify it is via Graph PowerShell:

1. [Install the Graph PowerShell SDK](/powershell/microsoftgraph/installation) if you haven't already done so.
-1. `Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"`
-1. Sign-in with a Global Admin account
-1. Consent to permission prompt
-1. `Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Linux VM Sign-In"'`
- 1. If this command results in no output and returns you to the PowerShell prompt, you can create the Service Principal with the following Graph PowerShell command:
- 1. `New-MgServicePrincipal -AppId ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0`
- 1. Successful output will show that the AppID and the Application Name Azure Linux VM Sign-In was created.
-1. Sign out of Graph PowerShell when complete with the following command: `Disconnect-MgGraph`
+1. Enter the command `Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"`.
+1. Sign in with a Global Admin account.
+1. Consent to the prompt that asks for your permission.
+1. Enter the command `Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Linux VM Sign-In"'`.
+
+ If this command results in no output and returns you to the PowerShell prompt, you can create the service principal by using the following Graph PowerShell command: `New-MgServicePrincipal -AppId ce6ff14a-7fdc-4685-bbe0-f6afdfcfa8e0`.
+
+ Successful output will show that the app ID and the application name Azure Linux VM Sign-In were created.
+1. Sign out of Graph PowerShell by using the following command: `Disconnect-MgGraph`.
-## Login using Azure AD user account to SSH into the Linux VM
+## Log in by using an Azure AD user account to SSH into the Linux VM
-### Using Azure CLI
+### Log in by using the Azure CLI
-First do az login and then az ssh vm.
+Enter `az login`. This command opens a browser window, where you can sign in by using your Azure AD account.
```azurecli
az login
```
-This command will launch a browser window and a user can sign in using their Azure AD account.
-
-The following example automatically resolves the appropriate IP address for the VM.
+Then enter `az ssh vm`. The following example automatically resolves the appropriate IP address for the VM.
```azurecli
az ssh vm -n myVM -g AzureADLinuxVM
```
-If prompted, enter your Azure AD login credentials at the login page, perform an MFA, and/or satisfy device checks. You'll only be prompted if your Azure CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
+If you're prompted, enter your Azure AD login credentials at the login page, perform multifactor authentication, and/or satisfy device checks. You'll be prompted only if your Azure CLI session doesn't already meet any required Conditional Access criteria. Close the browser window, return to the SSH prompt, and you'll be automatically connected to the VM.
-You're now signed in to the Azure Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
+You're now signed in to the Linux virtual machine with the role permissions as assigned, such as VM User or VM Administrator. If your user account is assigned the Virtual Machine Administrator Login role, you can use sudo to run commands that require root privileges.
-### Using Azure Cloud Shell
+### Log in by using Azure Cloud Shell
-You can use Azure Cloud Shell to connect to VMs without needing to install anything locally to your client machine. Start Cloud Shell by clicking the shell icon in the upper right corner of the Azure portal.
+You can use Azure Cloud Shell to connect to VMs without needing to install anything locally to your client machine. Start Cloud Shell by selecting the shell icon in the upper-right corner of the Azure portal.
-Azure Cloud Shell will automatically connect to a session in the context of the signed in user. During the Azure AD Login for Linux Preview, **you must run az login again and go through an interactive sign in flow**.
+Cloud Shell automatically connects to a session in the context of the signed-in user. Now run `az login` again and go through the interactive sign-in flow:
```azurecli
az login
```
-Then you can use the normal `az ssh vm` commands to connect using name and resource group or IP address of the VM.
+Then you can use the normal `az ssh vm` commands to connect by using the name and resource group or IP address of the VM:
```azurecli
az ssh vm -n myVM -g AzureADLinuxVM
```

> [!NOTE]
-> Conditional Access policy enforcement requiring device compliance or Hybrid Azure AD join is not supported when using Azure Cloud Shell.
+> Conditional Access policy enforcement that requires device compliance or hybrid Azure AD join is not supported when you're using Azure Cloud Shell.
-### Login using Azure AD service principal to SSH into the Linux VM
+## Log in by using the Azure AD service principal to SSH into the Linux VM
-Azure CLI supports authenticating with a service principal instead of a user account. Since service principals are account not tied to any particular user, customers can use them to SSH to a VM to support any automation scenarios they may have. The service principal must have VM Administrator or VM User rights assigned. Assign permissions at the subscription or resource group level.
+The Azure CLI supports authenticating with a service principal instead of a user account. Because service principals aren't tied to any particular user, customers can use them to SSH into a VM to support any automation scenarios they might have. The service principal must have VM Administrator or VM User rights assigned. Assign permissions at the subscription or resource group level.
-The following example will assign VM Administrator rights to the service principal at the resource group level. Replace the service principal object ID, subscription ID, and resource group name fields.
+The following example will assign VM Administrator rights to the service principal at the resource group level. Replace the placeholders for service principal object ID, subscription ID, and resource group name.
```azurecli
az role assignment create \
    --role "Virtual Machine Administrator Login" \
    --assignee-object-id <service-principal-objectid> \
    --scope "/subscriptions/<subscription-id>/resourceGroups/<resourcegroup-name>"
```
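The `--scope` value follows a fixed path pattern, so it can help to build the string once and reuse it across role assignments. A minimal sketch, using hypothetical placeholder values that you'd replace with your own:

```shell
# Hypothetical IDs for illustration only; substitute your own values.
subscription_id="00000000-0000-0000-0000-000000000000"
resource_group="AzureADLinuxVM"

# Resource-group-level scope, matching the pattern shown above.
scope="/subscriptions/${subscription_id}/resourceGroups/${resource_group}"
echo "${scope}"
```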
-Use the following example to authenticate to Azure CLI using the service principal. To learn more about signing in using a service principal, see the article [Sign in to Azure CLI with a service principal](/cli/azure/authenticate-azure-cli#sign-in-with-a-service-principal).
+Use the following example to authenticate to the Azure CLI by using the service principal. For more information, see the article [Sign in to the Azure CLI with a service principal](/cli/azure/authenticate-azure-cli#sign-in-with-a-service-principal).
```azurecli
az login --service-principal -u <sp-app-id> -p <password-or-cert> --tenant <tenant-id>
```
-Once authentication with a service principal is complete, use the normal Azure CLI SSH commands to connect to the VM.
+When authentication with a service principal is complete, use the normal Azure CLI SSH commands to connect to the VM:
```azurecli
az ssh vm -n myVM -g AzureADLinuxVM
```
-### Exporting SSH Configuration for use with SSH clients that support OpenSSH
+## Export the SSH configuration for use with SSH clients that support OpenSSH
-Login to Azure Linux VMs with Azure AD supports exporting the OpenSSH certificate and configuration, allowing you to use any SSH clients that support OpenSSH based certificates to sign in Azure AD. The following example exports the configuration for all IP addresses assigned to the VM.
+Login to Azure Linux VMs with Azure AD supports exporting the OpenSSH certificate and configuration. That means you can use any SSH clients that support OpenSSH-based certificates to sign in through Azure AD. The following example exports the configuration for all IP addresses assigned to the VM:
```azurecli
az ssh config --file ~/.ssh/config -n myVM -g AzureADLinuxVM
```
-Alternatively, you can export the config by specifying just the IP address. Replace the IP address in the example with the public or private IP address (you must bring your own connectivity for private IPs) for your VM. Type `az ssh config -h` for help on this command.
+Alternatively, you can export the configuration by specifying just the IP address. Replace the IP address in the following example with the public or private IP address for your VM. (You must bring your own connectivity for private IPs.) Enter `az ssh config -h` for help with this command.
```azurecli
az ssh config --file ~/.ssh/config --ip 10.11.123.456
```

You can then connect to the VM through normal OpenSSH usage. Connection can be done through any SSH client that uses OpenSSH.
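Before pointing an OpenSSH client at the exported file, you can sanity-check that it contains a `Host` entry. The following sketch uses a stand-in config file; the real file and its contents are written by `az ssh config`:

```shell
# Create a stand-in config file (the real one is produced by 'az ssh config').
config_file="$(mktemp)"
printf 'Host myVM\n    User example@contoso.com\n' > "${config_file}"

# Confirm at least one Host entry exists before using the file with ssh.
if grep -q '^Host ' "${config_file}"; then
  echo "config has a Host entry"
fi
```

With the real exported file, any OpenSSH-compatible client can then use it, for example `ssh -F ~/.ssh/config <host>`, where `<host>` is whatever host alias or IP address the export wrote into the file.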
-## Sudo and Azure AD login
+## Run sudo with Azure AD login
-Once, users assigned the VM Administrator role successfully SSH into a Linux VM, they'll be able to run sudo with no other interaction or authentication requirement. Users assigned the VM User role won't be able to run sudo.
+After users who are assigned the VM Administrator role successfully SSH into a Linux VM, they'll be able to run sudo with no other interaction or authentication requirement. Users who are assigned the VM User role won't be able to run sudo.
-## Virtual machine scale set support
+## Connect to VMs in virtual machine scale sets
-Virtual machine scale sets are supported, but the steps are slightly different for enabling and connecting to virtual machine scale set VMs.
+Virtual machine scale sets are supported, but the steps are slightly different for enabling and connecting to VMs in a virtual machine scale set:
-1. Create a virtual machine scale set or choose one that already exists. Enable a system assigned managed identity for your virtual machine scale set.
+1. Create a virtual machine scale set or choose one that already exists. Enable a system-assigned managed identity for your virtual machine scale set:
-```azurecli
-az vmss identity assign --name myVMSS --resource-group AzureADLinuxVM
-```
+ ```azurecli
+ az vmss identity assign --name myVMSS --resource-group AzureADLinuxVM
+ ```
-2. Install the Azure AD extension on your virtual machine scale set.
+2. Install the Azure AD extension on your virtual machine scale set:
-```azurecli
-az vmss extension set --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLoginForLinux --resource-group AzureADLinuxVM --vmss-name myVMSS
-```
+ ```azurecli
+ az vmss extension set --publisher Microsoft.Azure.ActiveDirectory --name AADSSHLoginForLinux --resource-group AzureADLinuxVM --vmss-name myVMSS
+ ```
-Virtual machine scale sets usually don't have public IP addresses. You must have connectivity to them from another machine that can reach their Azure virtual network. This example shows how to use the private IP of a virtual machine scale set VM to connect from a machine in the same virtual network.
+Virtual machine scale sets usually don't have public IP addresses. You must have connectivity to them from another machine that can reach their Azure virtual network. This example shows how to use the private IP of a VM in a virtual machine scale set to connect from a machine in the same virtual network:
```azurecli
az ssh vm --ip 10.11.123.456
```

> [!NOTE]
-> You cannot automatically determine the virtual machine scale set VM's IP addresses using the `--resource-group` and `--name` switches.
+> You can't automatically determine the virtual machine scale set VM's IP addresses by using the `--resource-group` and `--name` switches.
-## Migration from previous preview
+## Migrate from the previous (preview) version
-For customers who are using previous version of Azure AD login for Linux that was based on device code flow, complete the following steps using Azure CLI.
+If you're using the previous version of Azure AD login for Linux that was based on device code flow, complete the following steps by using the Azure CLI:
-1. Uninstall the AADLoginForLinux extension on the VM.
+1. Uninstall the AADLoginForLinux extension on the VM:
```azurecli
az vm extension delete -g MyResourceGroup --vm-name MyVm -n AADLoginForLinux
```

> [!NOTE]
- > The extension uninstall can fail if there are any Azure AD users currently logged in on the VM. Make sure all users are logged off first.
-1. Enable system-assigned managed identity on your VM.
+ > Uninstallation of the extension can fail if there are any Azure AD users currently logged in on the VM. Make sure all users are logged out first.
+1. Enable system-assigned managed identity on your VM:
```azurecli
az vm identity assign -g myResourceGroup -n myVm
```
-1. Install the AADSSHLoginForLinux extension on the VM.
+1. Install the AADSSHLoginForLinux extension on the VM:
```azurecli
az vm extension set \
    --publisher Microsoft.Azure.ActiveDirectory \
    --name AADSSHLoginForLinux \
    --resource-group myResourceGroup \
    --vm-name myVM
```
-## Using Azure Policy to ensure standards and assess compliance
+## Use Azure Policy to meet standards and assess compliance
+
+Use Azure Policy to:
-Use Azure Policy to ensure Azure AD login is enabled for your new and existing Linux virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Linux VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Linux VMs that don't have Azure AD login enabled, as well as remediate existing Linux VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Linux VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
+- Ensure that Azure AD login is enabled for your new and existing Linux virtual machines.
+- Assess compliance of your environment at scale on a compliance dashboard.
+
+With this capability, you can use many levels of enforcement. You can flag new and existing Linux VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Linux VMs that don't have Azure AD login enabled, as well as remediate existing Linux VMs to the same standard.
+
+In addition to these capabilities, you can use Azure Policy to detect and flag Linux VMs that have unapproved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
## Troubleshoot sign-in issues
-Some common errors when you try to SSH with Azure AD credentials include no Azure roles assigned, and repeated prompts to sign-in. Use the following sections to correct these issues.
+Use the following sections to correct common errors that can happen when you try to SSH with Azure AD credentials.
-### Couldn't retrieve token from local cache
+### Couldn't retrieve token from local cache
-You must run `az login` again and go through an interactive sign-in flow. Review the section [Using Azure Cloud Shell](#using-azure-cloud-shell).
+If you get a message that says the token couldn't be retrieved from the local cache, you must run `az login` again and go through an interactive sign-in flow. Review the section about [logging in by using Azure Cloud Shell](#log-in-by-using-azure-cloud-shell).
### Access denied: Azure role not assigned
-If you see the following error on your SSH prompt, verify that you have configured Azure RBAC policies for the VM that grants the user either the Virtual Machine Administrator Login or Virtual Machine User Login role. If you're running into issues with Azure role assignments, see the article [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
+If you see an "Azure role not assigned" error on your SSH prompt, verify that you've configured Azure RBAC policies for the VM that grants the user either the Virtual Machine Administrator Login role or the Virtual Machine User Login role. If you're having problems with Azure role assignments, see the article [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
### Problems deleting the old (AADLoginForLinux) extension
-If the uninstall scripts fail, the extension may get stuck in a transitioning state. When this happens, it can leave packages that it's supposed to uninstall during its removal. In such cases, it's better to manually uninstall the old packages and then try to run az vm extension delete command.
+If the uninstallation scripts fail, the extension might get stuck in a transitioning state. When this happens, the extension can leave packages that it's supposed to uninstall during its removal. In such cases, it's better to manually uninstall the old packages and then try to run the `az vm extension delete` command.
-1. Log in as a local user with admin privileges.
-1. Make sure there are no logged in Azure AD users. Call `who -u` command to see who is logged in; then `sudo kill <pid>` for all session processes reported by the previous command.
-1. Run `sudo apt remove --purge aadlogin` (Ubuntu/Debian), `sudo yum erase aadlogin` (RHEL or CentOS), or `sudo zypper remove aadlogin` (OpenSuse or SLES).
-1. If the command fails, try the low-level tools with scripts disabled:
- 1. For Ubuntu/Deian run `sudo dpkg --purge aadlogin`. If it's still failing because of the script, delete `/var/lib/dpkg/info/aadlogin.prerm` file and try again.
- 1. For everything else run `rpm -e –noscripts aadogin`.
+To uninstall old packages:
+
+1. Log in as a local user with admin privileges.
+1. Make sure there are no logged-in Azure AD users. Call the `who -u` command to see who is logged in. Then use `sudo kill <pid>` for all session processes that the previous command reported.
+1. Run `sudo apt remove --purge aadlogin` (Ubuntu/Debian), `sudo yum erase aadlogin` (RHEL or CentOS), or `sudo zypper remove aadlogin` (openSUSE or SLES).
+1. If the command fails, try the low-level tools with scripts disabled:
+ 1. For Ubuntu/Debian, run `sudo dpkg --purge aadlogin`. If it's still failing because of the script, delete the `/var/lib/dpkg/info/aadlogin.prerm` file and try again.
+ 1. For everything else, run `rpm -e --noscripts aadlogin`.
1. Repeat steps 3-4 for package `aadlogin-selinux`.
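The distro-specific removal commands in step 3 can be captured in a small helper for scripting the cleanup. This is an illustrative sketch only; the `remove_cmd_for` function and the distro identifiers are not part of any Azure tooling:

```shell
# Map a distro family to the manual package-removal command (sketch).
remove_cmd_for() {
  case "$1" in
    ubuntu|debian)  echo "sudo apt remove --purge aadlogin" ;;
    rhel|centos)    echo "sudo yum erase aadlogin" ;;
    opensuse|sles)  echo "sudo zypper remove aadlogin" ;;
    *)              echo "unsupported" ; return 1 ;;
  esac
}

# Example: look up the command for a Debian-based VM.
remove_cmd_for debian
```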
-### Extension Install Errors
+### Extension installation errors
-Installation of the AADSSHLoginForLinux VM extension to existing computers fails with one of the following known error codes:
+Installation of the AADSSHLoginForLinux VM extension to existing computers might fail with one of the following known error codes.
-#### Non-zero exit code: 22
+#### Non-zero exit code 22
-The Status of the AADSSHLoginForLinux VM extension shows as Transitioning in the portal.
+If you get exit code 22, the status of the AADSSHLoginForLinux VM extension shows as **Transitioning** in the portal.
-Cause 1: This failure is due to a system-assigned managed identity being required.
+This failure happens because a system-assigned managed identity is required.
-Solution 1: Perform these actions:
+The solution is to:
1. Uninstall the failed extension.
1. Enable a system-assigned managed identity on the Azure VM.
-1. Run the extension install command again.
-
-#### Non-zero exit code: 23
+1. Run the extension installation command again.
-The Status of the AADSSHLoginForLinux VM extension shows as Transitioning in the portal.
+#### Non-zero exit code 23
-Cause 1: This failure is due to the older AADLoginForLinux VM extension is still installed.
+If you get exit code 23, the status of the AADSSHLoginForLinux VM extension shows as **Transitioning** in the portal.
-Solution 1: Perform these actions:
+This failure happens when the older AADLoginForLinux VM extension is still installed.
-1. Uninstall the older AADLoginForLinux VM extension from the VM. The Status of the new AADSSHLoginForLinux VM extension will change to Provisioning succeeded in the portal.
+The solution is to uninstall the older AADLoginForLinux VM extension from the VM. The status of the new AADSSHLoginForLinux VM extension will then change to **Provisioning succeeded** in the portal.
-#### Az ssh vm fails with KeyError: 'access_token'.
+#### The az ssh vm command fails with KeyError access_token
-Cause 1: An outdated version of the Azure CLI client is being used.
+If the `az ssh vm` command fails, you're using an outdated version of the Azure CLI client.
-Solution 1: Upgrade the Azure CLI client to version 2.21.0 or higher.
+The solution is to upgrade the Azure CLI client to version 2.21.0 or later.
-#### SSH Connection closed
+#### SSH connection is closed
-After the user has successfully signed in using az login, connection to the VM using `az ssh vm -ip <addres>` or `az ssh vm --name <vm_name> -g <resource_group>` fails with *Connection closed by <ip_address> port 22*.
+After a user successfully signs in by using `az login`, connection to the VM through `az ssh vm -ip <address>` or `az ssh vm --name <vm_name> -g <resource_group>` might fail with "Connection closed by <ip_address> port 22."
-Cause 1: The user isn't assigned to either of the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
+One cause for this error is that the user isn't assigned to the Virtual Machine Administrator Login or Virtual Machine User Login role within the scope of this VM. In that case, the solution is to add the user to one of those Azure RBAC roles within the scope of this VM.
-Solution 1: Add the user to the either of the Virtual Machine Administrator/User Login Azure RBAC roles within the scope of this VM.
-
-Cause 2: The user is in a required Azure RBAC role but the system-assigned managed identity has been disabled on the VM.
-
-Solution 2: Perform these actions:
+This error can also happen if the user is in a required Azure RBAC role, but the system-assigned managed identity has been disabled on the VM. In that case, perform these actions:
1. Enable the system-assigned managed identity on the VM.
-1. Allow several minutes to pass before trying to connect using `az ssh vm --ip <ip_address>`.
+1. Allow several minutes to pass before the user tries to connect by using `az ssh vm --ip <ip_address>`.
+
+### Connection problems with virtual machine scale sets
-### Virtual machine scale set Connection Issues
+VM connections with virtual machine scale sets can fail if the scale set instances are running an old model.
-Virtual machine scale set VM connections may fail if the virtual machine scale set instances are running an old model. Upgrading virtual machine scale set instances to the latest model may resolve issues, especially if an upgrade hasn't been done since the Azure AD Login extension was installed. Upgrading an instance applies a standard virtual machine scale set configuration to the individual instance.
+Upgrading scale set instances to the latest model might resolve the problem, especially if an upgrade hasn't been done since the Azure AD Login extension was installed. Upgrading an instance applies a standard scale set configuration to the individual instance.
-### AllowGroups / DenyGroups statements in sshd_config cause first login to fail for Azure AD users
+### AllowGroups or DenyGroups statements in sshd_config cause the first login to fail for Azure AD users
-Cause 1: If sshd_config contains either AllowGroups or DenyGroups statements, the very first login fails for Azure AD users. If the statement was added after a user already has a successful login, they can log in.
+If *sshd_config* contains either `AllowGroups` or `DenyGroups` statements, the first login fails for Azure AD users. If the statement was added after users have already had a successful login, they can log in.
-Solution 1: Remove AllowGroups and DenyGroups statements from sshd_config.
+One solution is to remove `AllowGroups` and `DenyGroups` statements from *sshd_config*.
-Solution 2: Move AllowGroups and DenyGroups to a "match user" section in sshd_config. Make sure the match template excludes Azure AD users.
+Another solution is to move `AllowGroups` and `DenyGroups` to a `match user` section in *sshd_config*. Make sure the match template excludes Azure AD users.
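As a rough illustration of the second solution, the restriction can be scoped to local users only. The fragment below is a sketch, not tested guidance: the pattern `*,!*@*` is intended to match users whose names don't contain `@` (Azure AD login names typically do), and `localadmins` is a hypothetical local group. Verify the syntax against the `sshd_config(5)` manual before using it.

```
# Sketch: apply AllowGroups only to local users, so Azure AD users
# (names containing '@') are not blocked on first login.
Match User *,!*@*
    AllowGroups localadmins
```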
## Next steps
-[What is a device identity?](overview.md)
-[Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
+- [What is a device identity?](overview.md)
+- [Common Conditional Access policies](../conditional-access/concept-conditional-access-policy-common.md)
active-directory Howto Vm Sign In Azure Ad Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/devices/howto-vm-sign-in-azure-ad-windows.md
Title: Sign in to Windows virtual machine in Azure using Azure Active Directory
-description: Azure AD sign in to an Azure VM running Windows
+ Title: Log in to a Windows virtual machine in Azure by using Azure AD
+description: Learn how to log in to an Azure VM that's running Windows by using Azure AD authentication.
-# Login to Windows virtual machine in Azure using Azure Active Directory authentication
+# Log in to a Windows virtual machine in Azure by using Azure AD
-Organizations can now improve the security of Windows virtual machines (VMs) in Azure by integrating with Azure Active Directory (AD) authentication. You can now use Azure AD as a core authentication platform to RDP into a **Windows Server 2019 Datacenter edition** and later or **Windows 10 1809** and later. You can then centrally control and enforce Azure RBAC and Conditional Access policies that allow or deny access to the VMs. This article shows you how to create and configure a Windows VM and login with Azure AD based authentication.
+Organizations can improve the security of Windows virtual machines (VMs) in Azure by integrating with Azure Active Directory (Azure AD) authentication. You can now use Azure AD as a core authentication platform to RDP into *Windows Server 2019 Datacenter edition* and later, or *Windows 10 1809* and later. You can then centrally control and enforce Azure role-based access control (RBAC) and Conditional Access policies that allow or deny access to the VMs.
-There are many security benefits of using Azure AD based authentication to login to Windows VMs in Azure, including:
-- Use Azure AD credentials to login to Windows VMs in Azure.
- - Federated and Managed domain users.
+This article shows you how to create and configure a Windows VM and log in by using Azure AD-based authentication.
+
+There are many security benefits of using Azure AD-based authentication to log in to Windows VMs in Azure. They include:
+
+- Use Azure AD credentials to log in to Windows VMs in Azure. The result is federated and managed domain users.
- Reduce reliance on local administrator accounts.
-- Password complexity and password lifetime policies configured for your Azure AD help secure Windows VMs as well.
-- With Azure role-based access control (Azure RBAC)
- - Specify who can login to a VM as a regular user or with administrator privileges.
+- Password complexity and password lifetime policies that you configure for Azure AD also help secure Windows VMs.
+- With Azure RBAC:
+ - Specify who can log in to a VM as a regular user or with administrator privileges.
- When users join or leave your team, you can update the Azure RBAC policy for the VM to grant access as appropriate.
- - When employees leave your organization and their user account is disabled or removed from Azure AD, they no longer have access to your resources.
-- Configure Conditional Access policies to require multi-factor authentication and other signals such as user or sign in risk before you can RDP to Windows VMs. -- Use Azure deploy and audit policies to require Azure AD login for Windows VMs and to flag use of unapproved local accounts on the VMs.-- Automate and scale Azure AD join with MDM auto enrollment with Intune of Azure Windows VMs that are part for your VDI deployments.
- - Auto MDM enrollment requires Azure AD Premium P1 licenses. Windows Server VMs don't support MDM enrollment.
+ - When employees leave your organization and their user accounts are disabled or removed from Azure AD, they no longer have access to your resources.
+- Configure Conditional Access policies to require multifactor authentication (MFA) and other signals, such as user sign-in risk, before you can RDP into Windows VMs.
+- Use Azure deploy and audit policies to require Azure AD login for Windows VMs and to flag the use of unapproved local accounts on the VMs.
+- Use Intune to automate and scale Azure AD join with mobile device management (MDM) auto-enrollment of Azure Windows VMs that are part of your virtual desktop infrastructure (VDI) deployments.
+
+ MDM auto-enrollment requires Azure AD Premium P1 licenses. Windows Server VMs don't support MDM enrollment.
> [!NOTE]
-> Once you enable this capability, your Windows VMs in Azure will be Azure AD joined. You cannot join it to another domain like on-premises AD or Azure AD DS. If you need to do so, you will need to disconnect the VM from Azure AD by uninstalling the extension.
+> After you enable this capability, your Windows VMs in Azure will be Azure AD joined. You cannot join them to another domain, like on-premises Active Directory or Azure Active Directory Domain Services. If you need to do so, disconnect the VM from Azure AD by uninstalling the extension.
## Requirements

### Supported Azure regions and Windows distributions
-The following Windows distributions are currently supported for this feature:
+This feature currently supports the following Windows distributions:
- Windows Server 2019 Datacenter and later
- Windows 10 1809 and later

> [!IMPORTANT]
-> Remote connection to VMs joined to Azure AD is only allowed from Windows 10 or newer PCs that are either Azure AD registered (starting Windows 10 20H1), Azure AD joined or hybrid Azure AD joined to the **same** directory as the VM.
+> Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are Azure AD registered (starting with Windows 10 20H1), Azure AD joined, or hybrid Azure AD joined to the *same* directory as the VM.
This feature is now available in the following Azure clouds:
### Network requirements
-To enable Azure AD authentication for your Windows VMs in Azure, you need to ensure your VMs network configuration permits outbound access to the following endpoints over TCP port 443:
+To enable Azure AD authentication for your Windows VMs in Azure, you need to ensure that your VM's network configuration permits outbound access to the following endpoints over TCP port 443.
-For Azure Global
-- `https://enterpriseregistration.windows.net` - For device registration.
-- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
-- `https://login.microsoftonline.com` - For authentication flows.
-- `https://pas.windows.net` - For Azure RBAC flows.
+Azure Global:
+- `https://enterpriseregistration.windows.net`: For device registration.
+- `http://169.254.169.254`: Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.com`: For authentication flows.
+- `https://pas.windows.net`: For Azure RBAC flows.
-For Azure Government
-- `https://enterpriseregistration.microsoftonline.us` - For device registration.
-- `http://169.254.169.254` - Azure Instance Metadata Service.
-- `https://login.microsoftonline.us` - For authentication flows.
-- `https://pasff.usgovcloudapi.net` - For Azure RBAC flows.
+Azure Government:
+- `https://enterpriseregistration.microsoftonline.us`: For device registration.
+- `http://169.254.169.254`: Azure Instance Metadata Service endpoint.
+- `https://login.microsoftonline.us`: For authentication flows.
+- `https://pasff.usgovcloudapi.net`: For Azure RBAC flows.
-For Azure China 21Vianet
-- `https://enterpriseregistration.partner.microsoftonline.cn` - For device registration.
-- `http://169.254.169.254` - Azure Instance Metadata Service endpoint.
-- `https://login.chinacloudapi.cn` - For authentication flows.
-- `https://pas.chinacloudapi.cn` - For Azure RBAC flows.
+Azure China 21Vianet:
+- `https://enterpriseregistration.partner.microsoftonline.cn`: For device registration.
+- `http://169.254.169.254`: Azure Instance Metadata Service endpoint.
+- `https://login.chinacloudapi.cn`: For authentication flows.
+- `https://pas.chinacloudapi.cn`: For Azure RBAC flows.
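As a quick way to audit these requirements, you can keep the per-cloud endpoint lists in a small helper and feed them to your own connectivity check. The `endpoints_for` function below is an illustrative sketch (not an Azure tool); the URLs are the ones listed above:

```shell
# Print the required outbound endpoints for a given Azure cloud (sketch).
endpoints_for() {
  case "$1" in
    global) printf '%s\n' \
      "https://enterpriseregistration.windows.net" \
      "http://169.254.169.254" \
      "https://login.microsoftonline.com" \
      "https://pas.windows.net" ;;
    usgov) printf '%s\n' \
      "https://enterpriseregistration.microsoftonline.us" \
      "http://169.254.169.254" \
      "https://login.microsoftonline.us" \
      "https://pasff.usgovcloudapi.net" ;;
    china) printf '%s\n' \
      "https://enterpriseregistration.partner.microsoftonline.cn" \
      "http://169.254.169.254" \
      "https://login.chinacloudapi.cn" \
      "https://pas.chinacloudapi.cn" ;;
  esac
}

# Example: list the Azure Global endpoints, one per line.
endpoints_for global
```

You could then pipe each line to a reachability test of your choice (for example, a `curl` probe over port 443) from the VM's virtual network.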
-## Enabling Azure AD login for Windows VM in Azure
+## Enable Azure AD login for a Windows VM in Azure
-To use Azure AD login for Windows VM in Azure, you must:
+To use Azure AD login for a Windows VM in Azure, you must:
-- First enable the Azure AD login option for your Windows VM.
-- Then configure Azure role assignments for users who are authorized to login in to the VM.
+1. Enable the Azure AD login option for the VM.
+1. Configure Azure role assignments for users who are authorized to log in to the VM.
-There are two ways you can enable Azure AD login for your Windows VM:
+There are two ways to enable Azure AD login for your Windows VM:
-- [Using Azure portal create VM experience to enable Azure AD login](#using-azure-portal-create-vm-experience-to-enable-azure-ad-login) when creating a Windows VM.
-- [Using the Azure Cloud Shell experience to enable Azure AD login](#using-the-azure-cloud-shell-experience-to-enable-azure-ad-login) when creating a Windows VM **or for an existing Windows VM**.
+- The Azure portal, when you're creating a Windows VM.
+- Azure Cloud Shell, when you're creating a Windows VM or using an existing Windows VM.
-### Using Azure portal create VM experience to enable Azure AD login
+### Azure portal
-You can enable Azure AD login for Windows Server 2019 Datacenter or Windows 10 1809 and later VM images.
+You can enable Azure AD login for VM images in Windows Server 2019 Datacenter or Windows 10 1809 and later.
-To create a Windows Server 2019 Datacenter VM in Azure with Azure AD logon:
+To create a Windows Server 2019 Datacenter VM in Azure with Azure AD login:
-1. Sign in to the [Azure portal](https://portal.azure.com), with an account that has access to create VMs, and select **+ Create a resource**.
-1. Type **Windows Server** in Search the Marketplace search bar.
- 1. Select **Windows Server** and choose **Windows Server 2019 Datacenter** from Select a software plan dropdown.
- 1. Select **Create**.
-1. On the "Management" tab, check the box to **Login with Azure AD** under the Azure AD section.
-1. Make sure **System assigned managed identity** under the Identity section is checked. This action should happen automatically once you enable Login with Azure AD.
-1. Go through the rest of the experience of creating a virtual machine. You'll have to create an administrator username and password for the VM.
+1. Sign in to the [Azure portal](https://portal.azure.com) by using an account that has access to create VMs, and select **+ Create a resource**.
+1. In the **Search the Marketplace** search bar, type **Windows Server**.
+1. Select **Windows Server**, and then choose **Windows Server 2019 Datacenter** from the **Select a software plan** dropdown list.
+1. Select **Create**.
+1. On the **Management** tab, select the **Login with Azure AD** checkbox in the **Azure AD** section.
-![Login with Azure AD credentials create a VM](./media/howto-vm-sign-in-azure-ad-windows/azure-portal-login-with-azure-ad.png)
+ ![Screenshot that shows the Management tab on the Azure portal page for creating a virtual machine.](./media/howto-vm-sign-in-azure-ad-windows/azure-portal-login-with-azure-ad.png)
+1. Make sure that **System assigned managed identity** in the **Identity** section is selected. This action should happen automatically after you enable login with Azure AD.
+1. Go through the rest of the experience of creating a virtual machine. You'll have to create an administrator username and password for the VM.
> [!NOTE]
-> In order to log in to the VM using your Azure AD credential, you will first need to configure role assignments for the VM as described in one of the sections below.
+> To log in to the VM by using your Azure AD credentials, you first need to [configure role assignments](#configure-role-assignments-for-the-vm) for the VM.
-### Using the Azure Cloud Shell experience to enable Azure AD login
+### Azure Cloud Shell
-Azure Cloud Shell is a free, interactive shell that you can use to run the steps in this article. Common Azure tools are preinstalled and configured in Cloud Shell for you to use with your account. Just select the Copy button to copy the code, paste it in Cloud Shell, and then press Enter to run it. There are a few ways to open Cloud Shell:
+Azure Cloud Shell is a free, interactive shell that you can use to run the steps in this article. Common Azure tools are preinstalled and configured in Cloud Shell for you to use with your account. Just select the **Copy** button to copy the code, paste it in Cloud Shell, and then select the Enter key to run it. There are a few ways to open Cloud Shell:
- Select **Try It** in the upper-right corner of a code block.
- Open Cloud Shell in your browser.
- Select the Cloud Shell button on the menu in the upper-right corner of the [Azure portal](https://portal.azure.com).
-This article requires that you're running Azure CLI version 2.0.31 or later. Run `az --version` to find the version. If you need to install or upgrade, see the article [Install Azure CLI](/cli/azure/install-azure-cli).
+This article requires you to run Azure CLI version 2.0.31 or later. Run `az --version` to find the version. If you need to install or upgrade, see the article [Install the Azure CLI](/cli/azure/install-azure-cli).
-1. Create a resource group with [az group create](/cli/azure/group#az-group-create).
-1. Create a VM with [az vm create](/cli/azure/vm#az-vm-create) using a supported distribution in a supported region.
+1. Create a resource group by running [az group create](/cli/azure/group#az-group-create).
+1. Create a VM by running [az vm create](/cli/azure/vm#az-vm-create). Use a supported distribution in a supported region.
1. Install the Azure AD login VM extension.
-The following example deploys a VM named myVM that uses Win2019Datacenter, into a resource group named myResourceGroup, in the southcentralus region. In the following examples, you can provide your own resource group and VM names as needed.
+The following example deploys a VM named `myVM` (that uses `Win2019Datacenter`) into a resource group named `myResourceGroup`, in the `southcentralus` region. In this example and the next one, you can provide your own resource group and VM names as needed.
```AzureCLI
az group create --name myResourceGroup --location southcentralus
az vm create \
```

> [!NOTE]
-> It is required that you enable System assigned managed identity on your virtual machine before you install the Azure AD login VM extension.
+> You must enable system-assigned managed identity on your virtual machine before you install the Azure AD login VM extension.
It takes a few minutes to create the VM and supporting resources.
-Finally, install the Azure AD login VM extension to enable Azure AD login for Windows VM. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. Use [az vm extension](/cli/azure/vm/extension#az-vm-extension-set) set to install the AADLoginForWindows extension on the VM named `myVM` in the `myResourceGroup` resource group:
+Finally, install the Azure AD login VM extension to enable Azure AD login for Windows VMs. VM extensions are small applications that provide post-deployment configuration and automation tasks on Azure virtual machines. Use [az vm extension set](/cli/azure/vm/extension#az-vm-extension-set) to install the AADLoginForWindows extension on the VM named `myVM` in the `myResourceGroup` resource group.
-> [!NOTE]
-> You can install AADLoginForWindows extension on an existing Windows Server 2019 or Windows 10 1809 and later VM to enable it for Azure AD authentication. An example of AZ CLI is shown below.
+You can install the AADLoginForWindows extension on an existing Windows Server 2019 or Windows 10 1809 and later VM to enable it for Azure AD authentication. The following example uses the Azure CLI to install the extension:
```AzureCLI
az vm extension set \
    --vm-name myVM
```
-The `provisioningState` of `Succeeded` is shown, once the extension is installed on the VM.
+After the extension is installed on the VM, `provisioningState` shows `Succeeded`.
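If you prefer to check the state from the CLI, a minimal sketch (reusing the `myVM` and `myResourceGroup` placeholder names from the earlier example, and assuming an authenticated Azure CLI session) is:

```shell
# Hedged sketch: query the extension's provisioning state directly.
# Resource names follow the earlier example and are placeholders.
az vm extension show \
    --resource-group myResourceGroup \
    --vm-name myVM \
    --name AADLoginForWindows \
    --query provisioningState \
    --output tsv
```

The command prints `Succeeded` after provisioning completes.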
## Configure role assignments for the VM
-Now that you've created the VM, you need to configure Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
+Now that you've created the VM, you need to configure an Azure RBAC policy to determine who can log in to the VM. Two Azure roles are used to authorize VM login:
-- **Virtual Machine Administrator Login**: Users with this role assigned can log in to an Azure virtual machine with administrator privileges.
-- **Virtual Machine User Login**: Users with this role assigned can log in to an Azure virtual machine with regular user privileges.
+- **Virtual Machine Administrator Login**: Users who have this role assigned can log in to an Azure virtual machine with administrator privileges.
+- **Virtual Machine User Login**: Users who have this role assigned can log in to an Azure virtual machine with regular user privileges.
-> [!NOTE]
-> To allow a user to log in to the VM over RDP, you must assign either the Virtual Machine Administrator Login or Virtual Machine User Login role to the Resource Group containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resources. An Azure user with the Owner or Contributor roles assigned for a VM do not automatically have privileges to log in to the VM over RDP. This is to provide audited separation between the set of people who control virtual machines versus the set of people who can access virtual machines.
+To allow a user to log in to the VM over RDP, you must assign the Virtual Machine Administrator Login or Virtual Machine User Login role to the resource group that contains the VM and its associated virtual network, network interface, public IP address, or load balancer resources.
-There are multiple ways you can configure role assignments for VM:
+An Azure user who has the Owner or Contributor role assigned for a VM does not automatically have privileges to log in to the VM over RDP. The reason is to provide audited separation between the set of people who control virtual machines and the set of people who can access virtual machines.
-- Using the Azure AD Portal experience
-- Using the Azure Cloud Shell experience
+There are two ways to configure role assignments for a VM:
+
+- Azure AD portal experience
+- Azure Cloud Shell experience
> [!NOTE]
-> The Virtual Machine Administrator Login and Virtual Machine User Login roles use dataActions and thus cannot be assigned at management group scope. Currently these roles can only be assigned at the subscription, resource group or resource scope.
+> The Virtual Machine Administrator Login and Virtual Machine User Login roles use `dataActions`, so they can't be assigned at the management group scope. Currently, you can assign these roles only at the subscription, resource group, or resource scope.
-### Using Azure AD Portal experience
+### Azure AD portal
-To configure role assignments for your Azure AD enabled Windows Server 2019 Datacenter VMs:
+To configure role assignments for your Azure AD-enabled Windows Server 2019 Datacenter VMs:
-1. Select the **Resource Group** containing the VM and its associated Virtual Network, Network Interface, Public IP Address or Load Balancer resource.
+1. For **Resource Group**, select the resource group that contains the VM and its associated virtual network, network interface, public IP address, or load balancer resource.
1. Select **Access control (IAM)**.
-1. Select **Add** > **Add role assignment** to open the Add role assignment page.
+1. Select **Add** > **Add role assignment** to open the **Add role assignment** page.
-1. Assign the following role. For detailed steps, see [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
+1. Assign the following role. For detailed steps, see [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md).
    | Setting | Value |
    | --- | --- |
    | Role | **Virtual Machine Administrator Login** or **Virtual Machine User Login** |
    | Assign access to | User, group, service principal, or managed identity |
- ![Add role assignment page in Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+ ![Screenshot that shows the page for adding a role assignment in the Azure portal.](../../../includes/role-based-access-control/media/add-role-assignment-page.png)
+
+### Azure Cloud Shell
-### Using the Azure Cloud Shell experience
+The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. You obtain the username of your current Azure account by using [az account show](/cli/azure/account#az-account-show), and you set the scope to the VM created in a previous step by using [az vm show](/cli/azure/vm#az-vm-show).
-The following example uses [az role assignment create](/cli/azure/role/assignment#az-role-assignment-create) to assign the Virtual Machine Administrator Login role to the VM for your current Azure user. The username of your active Azure AD account is obtained with [az account show](/cli/azure/account#az-account-show), and the scope is set to the VM created in a previous step with [az vm show](/cli/azure/vm#az-vm-show). The scope could also be assigned at a resource group or subscription level, and normal Azure RBAC inheritance permissions apply. For more information, see [Log in to a Linux virtual machine in Azure using Azure Active Directory authentication](../../virtual-machines/linux/login-using-aad.md).
+You can also assign the scope at a resource group or subscription level. Normal Azure RBAC inheritance permissions apply. For more information, see [Log in to a Linux virtual machine in Azure by using Azure Active Directory authentication](../../virtual-machines/linux/login-using-aad.md).
```AzureCLI
username=$(az account show --query user.name --output tsv)
az role assignment create \
```

> [!NOTE]
-> If your Azure AD domain and logon username domain do not match, you must specify the object ID of your user account with the `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account with [az ad user list](/cli/azure/ad/user#az-ad-user-list).
+> If your Azure AD domain and login username domain don't match, you must specify the object ID of your user account by using `--assignee-object-id`, not just the username for `--assignee`. You can obtain the object ID for your user account by using [az ad user list](/cli/azure/ad/user#az-ad-user-list).
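As a hedged sketch of that note (the user principal name, VM, and resource group are placeholders, and an authenticated Azure CLI session is assumed), you could resolve the object ID first and then pass it explicitly:

```shell
# Hypothetical example: assign the role by object ID when the login username
# domain doesn't match the Azure AD domain.
upn='jdoe@contoso.com'   # placeholder user principal name

# Look up the user's object ID. On CLI versions that predate the Microsoft
# Graph migration, the property may be named objectId instead of id.
userObjectId=$(az ad user list \
    --filter "userPrincipalName eq '$upn'" \
    --query '[0].id' --output tsv)

vmId=$(az vm show \
    --resource-group myResourceGroup \
    --name myVM \
    --query id --output tsv)

az role assignment create \
    --role "Virtual Machine Administrator Login" \
    --assignee-object-id "$userObjectId" \
    --scope "$vmId"
```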
-For more information on how to use Azure RBAC to manage access to your Azure subscription resources, see the following articles:
+For more information about how to use Azure RBAC to manage access to your Azure subscription resources, see the following articles:
-- [Assign Azure roles using Azure CLI](../../role-based-access-control/role-assignments-cli.md)
-- [Assign Azure roles using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
-- [Assign Azure roles using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md).
+- [Assign Azure roles by using the Azure CLI](../../role-based-access-control/role-assignments-cli.md)
+- [Assign Azure roles by using the Azure portal](../../role-based-access-control/role-assignments-portal.md)
+- [Assign Azure roles by using Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md)
-## Using Conditional Access
+## Enforce Conditional Access policies
-You can enforce Conditional Access policies such as multi-factor authentication or user sign-in risk check before authorizing access to Windows VMs in Azure that are enabled with Azure AD sign in. To apply Conditional Access policy, you must select the "**Azure Windows VM Sign-In**" app from the cloud apps or actions assignment option and then use Sign-in risk as a condition and/or
-require multi-factor authentication as a grant access control.
+You can enforce Conditional Access policies, such as multifactor authentication or user sign-in risk check, before you authorize access to Windows VMs in Azure that are enabled with Azure AD login. To apply a Conditional Access policy, you must select the **Azure Windows VM Sign-In** app from the cloud apps or actions assignment option. Then use sign-in risk as a condition and/or require MFA as a control for granting access.
> [!NOTE]
-> If you use "Require multi-factor authentication" as a grant access control for requesting access to the "Azure Windows VM Sign-In" app, then you must supply multi-factor authentication claim as part of the client that initiates the RDP session to the target Windows VM in Azure. The only way to achieve this on a Windows 10 or newer client is to use Windows Hello for Business PIN or biometric authentication with the RDP client. Support for biometric authentication was added to the RDP client in Windows 10 version 1809. Remote desktop using Windows Hello for Business authentication is only available for deployments that use cert trust model and currently not available for key trust model.
+> If you require MFA as a control for granting access to the Azure Windows VM Sign-In app, then you must supply an MFA claim as part of the client that initiates the RDP session to the target Windows VM in Azure. The only way to achieve this on a Windows 10 or later client is to use a Windows Hello for Business PIN or biometric authentication with the RDP client. Support for biometric authentication was added to the RDP client in Windows 10 version 1809.
+>
+> Remote desktop using Windows Hello for Business authentication is available only for deployments that use a certificate trust model. It's currently not available for a key trust model.
> [!WARNING]
-> Per-user Enabled/Enforced Azure AD Multi-Factor Authentication is not supported for VM Sign-In.
+> The per-user **Enabled/Enforced Azure AD Multi-Factor Authentication** setting is not supported for the Azure Windows VM Sign-In app.
-## Log in using Azure AD credentials to a Windows VM
+## Log in to a Windows VM by using Azure AD credentials
> [!IMPORTANT]
-> Remote connection to VMs joined to Azure AD is only allowed from Windows 10 or newer PCs that are either Azure AD registered (minimum required build is 20H1) or Azure AD joined or hybrid Azure AD joined to the **same** directory as the VM. Additionally, to RDP using Azure AD credentials, the user must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login. If using an Azure AD registered Windows 10 or newer PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, Azure Bastion can be used to log in with Azure AD authentication [using Azure CLI and the native RDP client **mstsc**](../../bastion/connect-native-client-windows.md).
+> Remote connection to VMs that are joined to Azure AD is allowed only from Windows 10 or later PCs that are either Azure AD registered (minimum required build is 20H1) or Azure AD joined or hybrid Azure AD joined to the *same* directory as the VM. Additionally, to RDP by using Azure AD credentials, users must belong to one of the two Azure roles, Virtual Machine Administrator Login or Virtual Machine User Login.
+>
+> If you're using an Azure AD-registered Windows 10 or later PC, you must enter credentials in the `AzureAD\UPN` format (for example, `AzureAD\john@contoso.com`). At this time, you can use Azure Bastion to log in with Azure AD authentication [via the Azure CLI and the native RDP client mstsc](../../bastion/connect-native-client-windows.md).
-To log in to your Windows Server 2019 virtual machine using Azure AD:
+To log in to your Windows Server 2019 virtual machine by using Azure AD:
-1. Navigate to the overview page of the virtual machine that has been enabled with Azure AD logon.
-1. Select **Connect** to open the Connect to virtual machine blade.
+1. Go to the overview page of the virtual machine that has been enabled with Azure AD login.
+1. Select **Connect** to open the **Connect to virtual machine** pane.
1. Select **Download RDP File**.
-1. Select **Open** to launch the Remote Desktop Connection client.
-1. Select **Connect** to launch the Windows logon dialog.
-1. Logon using your Azure AD credentials.
+1. Select **Open** to open the Remote Desktop Connection client.
+1. Select **Connect** to open the Windows login dialog.
+1. Log in by using your Azure AD credentials.
-You're now signed in to the Windows Server 2019 Azure virtual machine with the role permissions as assigned, such as VM User or VM Administrator.
+You're now logged in to the Windows Server 2019 Azure virtual machine with the assigned role permissions, such as VM User or VM Administrator.
> [!NOTE]
-> You can save the .RDP file locally on your computer to launch future remote desktop connections to your virtual machine instead of having to navigate to virtual machine overview page in the Azure portal and using the connect option.
+> You can save the .rdp file locally on your computer to start future remote desktop connections to your virtual machine, instead of going to the virtual machine overview page in the Azure portal and using the connect option.
-## Using Azure Policy to ensure standards and assess compliance
+## Use Azure Policy to meet standards and assess compliance
-Use Azure Policy to ensure Azure AD login is enabled for your new and existing Windows virtual machines and assess compliance of your environment at scale on your Azure Policy compliance dashboard. With this capability, you can use many levels of enforcement: you can flag new and existing Windows VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Windows VMs that don't have Azure AD login enabled, and remediate existing Windows VMs to the same standard. In addition to these capabilities, you can also use Azure Policy to detect and flag Windows VMs that have non-approved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
+Use Azure Policy to:
-## Troubleshoot
+- Ensure that Azure AD login is enabled for your new and existing Windows virtual machines.
+- Assess compliance of your environment at scale on a compliance dashboard.
-### Troubleshoot deployment issues
+With this capability, you can use many levels of enforcement. You can flag new and existing Windows VMs within your environment that don't have Azure AD login enabled. You can also use Azure Policy to deploy the Azure AD extension on new Windows VMs that don't have Azure AD login enabled, and remediate existing Windows VMs to the same standard.
-The AADLoginForWindows extension must install successfully in order for the VM to complete the Azure AD join process. Perform the following steps if the VM extension fails to install correctly.
+In addition to these capabilities, you can use Azure Policy to detect and flag Windows VMs that have unapproved local accounts created on their machines. To learn more, review [Azure Policy](../../governance/policy/overview.md).
-1. RDP to the VM using the local administrator account and examine the `CommandExecution.log` file under:
-
- `C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.ActiveDirectory.AADLoginForWindows\1.0.0.1\`
+
+## Troubleshoot deployment problems
+
+The AADLoginForWindows extension must be installed successfully for the VM to complete the Azure AD join process. If the VM extension fails to be installed correctly, perform the following steps:
+
+1. RDP to the VM by using the local administrator account and examine the *CommandExecution.log* file under *C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.ActiveDirectory.AADLoginForWindows\1.0.0.1*.
> [!NOTE]
- > If the extension restarts after the initial failure, the log with the deployment error will be saved as `CommandExecution_YYYYMMDDHHMMSSSSS.log`.
+ > If the extension restarts after the initial failure, the log with the deployment error will be saved as *CommandExecution_YYYYMMDDHHMMSSSSS.log*.
-1. Open a PowerShell window on the VM and verify these queries against the Instance Metadata Service (IMDS) Endpoint running on the Azure host returns:
+1. Open a PowerShell window on the VM. Verify that the following queries against the Azure Instance Metadata Service endpoint running on the Azure host return the expected output:
| Command to run | Expected output |
| --- | --- |
| `curl -H Metadata:true "http://169.254.169.254/metadata/instance?api-version=2017-08-01"` | Correct information about the Azure VM |
- | `curl -H Metadata:true "http://169.254.169.254/metadata/identity/info?api-version=2018-02-01"` | Valid Tenant ID associated with the Azure Subscription |
+ | `curl -H Metadata:true "http://169.254.169.254/metadata/identity/info?api-version=2018-02-01"` | Valid tenant ID associated with the Azure subscription |
| `curl -H Metadata:true "http://169.254.169.254/metadata/identity/oauth2/token?resource=urn:ms-drs:enterpriseregistration.windows.net&api-version=2018-02-01"` | Valid access token issued by Azure Active Directory for the managed identity that is assigned to this VM |

> [!NOTE]
- > The access token can be decoded using a tool like [calebb.net](http://calebb.net/). Verify the `oid` in the access token matches the managed identity assigned to the VM.
+ > You can decode the access token by using a tool like [calebb.net](http://calebb.net/). Verify that the `oid` value in the access token matches the managed identity that's assigned to the VM.
-1. Ensure the required endpoints are accessible from the VM using PowerShell:
+1. Ensure that the required endpoints are accessible from the VM via PowerShell:
- `curl.exe https://login.microsoftonline.com/ -D -`
- `curl.exe https://login.microsoftonline.com/<TenantID>/ -D -`
- `curl.exe https://pas.windows.net/ -D -`

> [!NOTE]
- > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription.<br/> `login.microsoftonline.com/<TenantID>`, `enterpriseregistration.windows.net`, and `pas.windows.net` should return 404 Not Found, which is expected behavior.
+ > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. `login.microsoftonline.com/<TenantID>`, `enterpriseregistration.windows.net`, and `pas.windows.net` should return 404 Not Found, which is expected behavior.
-1. The Device State can be viewed by running `dsregcmd /status`. The goal is for Device State to show as `AzureAdJoined : YES`.
+1. View the device state by running `dsregcmd /status`. The goal is for the device state to show as `AzureAdJoined : YES`.
> [!NOTE]
- > Azure AD join activity is captured in Event viewer under the `User Device Registration\Admin` log at `Event Viewer (local)\Applications` and `Services Logs\Windows\Microsoft\User Device Registration\Admin`.
-
-If the AADLoginForWindows extension fails with certain error code, you can perform the following steps:
+ > Azure AD join activity is captured in Event Viewer under the *User Device Registration\Admin* log at *Event Viewer (local)\Applications* and *Services Logs\Windows\Microsoft\User Device Registration\Admin*.
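The earlier note suggests decoding the access token with an online tool. As a local alternative, here's a minimal shell sketch (the `decode_jwt_payload` helper is hypothetical, not part of any Azure tooling) that base64url-decodes a JWT's payload so you can inspect its claims:

```shell
# Hypothetical helper: print the claims in a JWT's payload segment.
decode_jwt_payload() {
  # The payload is the second dot-separated segment, base64url-encoded.
  payload=$(printf '%s' "$1" | cut -d '.' -f 2 | tr '_-' '/+')
  # Restore the padding that base64url encoding strips.
  case $(( ${#payload} % 4 )) in
    2) payload="${payload}==" ;;
    3) payload="${payload}=" ;;
  esac
  printf '%s' "$payload" | base64 -d
}

# Usage (token value is a placeholder):
# decode_jwt_payload "$ACCESS_TOKEN"
```

Check that the `oid` value in the decoded output matches the managed identity that's assigned to the VM.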
-#### Issue 1: AADLoginForWindows extension fails to install with terminal error code '1007' and exit code: -2145648574.
+If the AADLoginForWindows extension fails with an error code, you can perform the following steps.
-This exit code translates to `DSREG_E_MSI_TENANTID_UNAVAILABLE` because the extension is unable to query the Azure AD Tenant information.
+### Terminal error code 1007 and exit code -2145648574
-1. Verify the Azure VM can retrieve the TenantID from the Instance Metadata Service.
+Terminal error code 1007 and exit code -2145648574 translate to `DSREG_E_MSI_TENANTID_UNAVAILABLE`. The extension can't query the Azure AD tenant information.
- - Connect to the VM as a local administrator and verify the endpoint returns a valid Tenant ID. Run the following command from an elevated PowerShell window on the VM:
+Connect to the VM as a local administrator and verify that the endpoint returns a valid tenant ID from Azure Instance Metadata Service. Run the following command from an elevated PowerShell window on the VM:
- - `curl -H Metadata:true http://169.254.169.254/metadata/identity/info?api-version=2018-02-01`
+`curl -H Metadata:true http://169.254.169.254/metadata/identity/info?api-version=2018-02-01`
-1. The VM admin attempts to install the AADLoginForWindows extension, but a system assigned managed identity hasn't enabled the VM first. Navigate to the Identity blade of the VM. From the System assigned tab, verify Status is toggled to On.
+This problem can also happen when the VM admin attempts to install the AADLoginForWindows extension, but a system-assigned managed identity hasn't been enabled on the VM first. In that case, go to the **Identity** pane of the VM. On the **System assigned** tab, verify that the **Status** toggle is set to **On**.
-#### Issue 2: AADLoginForWindows extension fails to install with Exit code: -2145648607
+### Exit code -2145648607
-This Exit code translates to `DSREG_AUTOJOIN_DISC_FAILED` because the extension isn't able to reach the `https://enterpriseregistration.windows.net` endpoint.
+Exit code -2145648607 translates to `DSREG_AUTOJOIN_DISC_FAILED`. The extension can't reach the `https://enterpriseregistration.windows.net` endpoint.
-1. Verify the required endpoints are accessible from the VM using PowerShell:
+1. Verify that the required endpoints are accessible from the VM via PowerShell:
- `curl https://login.microsoftonline.com/ -D -`
- `curl https://login.microsoftonline.com/<TenantID>/ -D -`
- `curl https://pas.windows.net/ -D -`

> [!NOTE]
- > Replace `<TenantID>` with the Azure AD Tenant ID that is associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name to get the directory / tenant ID, or select **Azure Active Directory > Properties > Directory ID** in the Azure portal.<br/> Attempt to connect to `enterpriseregistration.windows.net` may return 404 Not Found, which is expected behavior.<br/> Attempt to connect to `pas.windows.net` may prompt for pin credentials (you do not need to enter the pin) or may return 404 Not Found. Either one is sufficient to verify the URL is reachable.
+ > Replace `<TenantID>` with the Azure AD tenant ID that's associated with the Azure subscription. If you need to find the tenant ID, you can hover over your account name or select **Azure Active Directory** > **Properties** > **Directory ID** in the Azure portal.
+ >
+ > Attempts to connect to `enterpriseregistration.windows.net` might return 404 Not Found, which is expected behavior. Attempts to connect to `pas.windows.net` might prompt for PIN credentials or might return 404 Not Found. (You don't need to enter the PIN.) Either one is sufficient to verify that the URL is reachable.
-1. If any of the commands fails with "Could not resolve host `<URL>`", try running this command to determine the DNS server that is being used by the VM.
+1. If any of the commands fails with "Could not resolve host `<URL>`," try running this command to determine which DNS server the VM is using:
`nslookup <URL>`

> [!NOTE]
- > Replace `<URL>` with the fully qualified domain names used by the endpoints, such as `login.microsoftonline.com`.
+ > Replace `<URL>` with the fully qualified domain names that the endpoints use, such as `login.microsoftonline.com`.
-1. Next, see if specifying a public DNS server allows the command to succeed:
+1. See whether specifying a public DNS server allows the command to succeed:
`nslookup <URL> 208.67.222.222`
-1. If necessary, change the DNS server that is assigned to the network security group that the Azure VM belongs to.
-
-#### Issue 3: AADLoginForWindows extension fails to install with Exit code: 51
+1. If necessary, change the DNS server that's assigned to the network security group that the Azure VM belongs to.
-Exit code 51 translates to "This extension is not supported on the VM's operating system".
+### Exit code 51
-The AADLoginForWindows extension is only intended to be installed on Windows Server 2019 or Windows 10 (Build 1809 or later). Ensure the version of Windows is supported. If the build of Windows isn't supported, uninstall the VM Extension.
+Exit code 51 translates to "This extension is not supported on the VM's operating system."
-### Troubleshoot sign-in issues
+The AADLoginForWindows extension is intended to be installed only on Windows Server 2019 or Windows 10 (Build 1809 or later). Ensure that your version or build of Windows is supported. If it isn't supported, uninstall the extension.
-Some common errors when you try to RDP with Azure AD credentials include no Azure roles assigned, unauthorized client, or 2FA sign-in method required. Use the following information to correct these issues.
+## Troubleshoot sign-in problems
-The Device and SSO State can be viewed by running `dsregcmd /status`. The goal is for Device State to show as `AzureAdJoined : YES` and `SSO State` to show `AzureAdPrt : YES`.
+Use the following information to correct sign-in problems.
-RDP Sign-in using Azure AD accounts is captured in Event viewer under the `AAD\Operational` event logs.
+You can view the device and single sign-on (SSO) state by running `dsregcmd /status`. The goal is for the device state to show as `AzureAdJoined : YES` and for the SSO state to show `AzureAdPrt : YES`.
-#### Azure role not assigned
+RDP sign-in via Azure AD accounts is captured in Event Viewer under the *AAD\Operational* event logs.
-If you see the following error message when you initiate a remote desktop connection to your VM:
+### Azure role not assigned
-- Your account is configured to prevent you from using this device. For more info, contact your system administrator.
+You might get the following error message when you initiate a remote desktop connection to your VM: "Your account is configured to prevent you from using this device. For more info, contact your system administrator."
-![Your account is configured to prevent you from using this device.](./media/howto-vm-sign-in-azure-ad-windows/rbac-role-not-assigned.png)
+![Screenshot of the message that says your account is configured to prevent you from using this device.](./media/howto-vm-sign-in-azure-ad-windows/rbac-role-not-assigned.png)
-Verify that you have [configured Azure RBAC policies](../../virtual-machines/linux/login-using-aad.md) for the VM that grants the user either the Virtual Machine Administrator Login or Virtual Machine User Login role:
+Verify that you've [configured Azure RBAC policies](../../virtual-machines/linux/login-using-aad.md) for the VM that grant the user the Virtual Machine Administrator Login or Virtual Machine User Login role.
> [!NOTE]
-> If you are running into issues with Azure role assignments, see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
+> If you're having problems with Azure role assignments, see [Troubleshoot Azure RBAC](../../role-based-access-control/troubleshooting.md#azure-role-assignments-limit).
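As an illustration of what to verify, the Azure Resource Manager request that lists a user's role assignments at the VM scope can be sketched as follows (the subscription, resource group, VM name, and object ID below are placeholders, and only the request URL is built, nothing is sent):

```python
# Hypothetical helper: build the ARM request that lists role assignments for a
# user at a VM scope, so you can confirm the Virtual Machine Administrator
# Login or Virtual Machine User Login role is present.

def role_assignments_url(subscription, resource_group, vm_name, user_object_id):
    scope = (
        f"/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Compute/virtualMachines/{vm_name}"
    )
    return (
        "https://management.azure.com"
        + scope
        + "/providers/Microsoft.Authorization/roleAssignments"
        + "?api-version=2022-04-01"
        + f"&$filter=assignedTo('{user_object_id}')"
    )

url = role_assignments_url("00000000-0000-0000-0000-000000000000",
                           "myResourceGroup", "myVM",
                           "11111111-1111-1111-1111-111111111111")
print(url)
```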
-#### Unauthorized client
-
-If you see the following error message when you initiate a remote desktop connection to your VM:
--- Your credentials did not work.
+### Unauthorized client or password change required
-![Your credentials did not work](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
+You might get the following error message when you initiate a remote desktop connection to your VM: "Your credentials did not work."
-The Windows 10 or newer PC you're using to initiate the remote desktop connection must be Azure AD joined, or hybrid Azure AD joined to the same Azure AD directory. For more information about device identity, see the article [What is a device identity](./overview.md).
+![Screenshot of the message that says your credentials did not work.](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
-> [!NOTE]
-> Windows 10 Build 20H1 added support for an Azure AD registered PC to initiate RDP connection to your VM. When using an Azure AD registered (not Azure AD joined or hybrid Azure AD joined) PC as the RDP client to initiate connections to your VM, you must enter credentials in the format `AzureAD\UPN` (for example, `AzureAD\john@contoso.com`).
-
-Verify that the AADLoginForWindows extension wasn't uninstalled after the Azure AD join finished.
-
-Also, make sure that the security policy "Network security: Allow PKU2U authentication requests to this computer to use online identities" is enabled on both the server **and** the client.
+Try these solutions:
-#### Password change required
+- The Windows 10 or later PC that you're using to initiate the remote desktop connection must be Azure AD joined, or hybrid Azure AD joined to the same Azure AD directory. For more information about device identity, see the article [What is a device identity?](./overview.md).
-If you see the following error message when you initiate a remote desktop connection to your VM:
+ > [!NOTE]
+ > Windows 10 Build 20H1 added support for an Azure AD-registered PC to initiate an RDP connection to your VM. When you're using a PC that's Azure AD registered (not Azure AD joined or hybrid Azure AD joined) as the RDP client to initiate connections to your VM, you must enter credentials in the format `AzureAD\UPN` (for example, `AzureAD\john@contoso.com`).
-- Your credentials did not work.
+ Verify that the AADLoginForWindows extension wasn't uninstalled after the Azure AD join finished.
-![Your credentials did not work](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
+ Also, make sure that the security policy **Network security: Allow PKU2U authentication requests to this computer to use online identities** is enabled on both the server *and* the client.
-Verify that the user doesn't have a temporary password. Temporary passwords can't be used to log in to a remote desktop connection.
+- Verify that the user doesn't have a temporary password. Temporary passwords can't be used to log in to a remote desktop connection.
-To resolve the issue, sign in with the user account in a web browser, for instance by opening the [Azure portal](https://portal.azure.com) in a private browsing window. If you're prompted to change the password, set a new password, then try connecting again.
+ Sign in with the user account in a web browser. For instance, open the [Azure portal](https://portal.azure.com) in a private browsing window. If you're prompted to change the password, set a new password. Then try connecting again.
-#### MFA sign-in method required
+### MFA sign-in method required
-If you see the following error message when you initiate a remote desktop connection to your VM:
+You might see the following error message when you initiate a remote desktop connection to your VM: "The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator."
-- The sign-in method you're trying to use isn't allowed. Try a different sign-in method or contact your system administrator.
+![Screenshot of the message that says the sign-in method you're trying to use isn't allowed.](./media/howto-vm-sign-in-azure-ad-windows/mfa-sign-in-method-required.png)
-![The sign-in method you're trying to use isn't allowed.](./media/howto-vm-sign-in-azure-ad-windows/mfa-sign-in-method-required.png)
+If you've configured a Conditional Access policy that requires MFA before you can access the resource, you need to ensure that the Windows 10 or later PC that's initiating the remote desktop connection to your VM signs in by using a strong authentication method such as Windows Hello. If you don't use a strong authentication method for your remote desktop connection, you'll see the error.
-If you've configured a Conditional Access policy that requires multi-factor authentication (MFA) before you can access the resource, then you need to ensure that the Windows 10 or newer PC initiating the remote desktop connection to your VM signs in using a strong authentication method such as Windows Hello. If you don't use a strong authentication method for your remote desktop connection, you'll see the previous error.
+Another MFA-related error message is the one described previously: "Your credentials did not work."
-- Your credentials did not work.
+![Screenshot of the message that says your credentials didn't work.](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
> [!WARNING]
-> Legacy per-user Enabled/Enforced Azure AD Multi-Factor Authentication is not supported for VM Sign-In. This setting causes Sign-in to fail with "Your credentials do not work." error message.
+> The legacy per-user **Enabled/Enforced Azure AD Multi-Factor Authentication** setting is not supported for the Azure Windows VM Sign-In app. This setting causes sign-in to fail with the "Your credentials did not work" error message.
-![Your credentials did not work](./media/howto-vm-sign-in-azure-ad-windows/your-credentials-did-not-work.png)
-
-You can resolve the above issue by removing the per-user MFA setting, by following these steps:
+You can resolve the problem by removing the per-user MFA setting with the following command:
```
Set-MsolUser -UserPrincipalName username@contoso.com -StrongAuthenticationRequirements @()
```
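Per-user MFA state can also be managed through Microsoft Graph. A sketch that only builds the request (the `authentication/requirements` resource was in beta at the time of writing, so treat the endpoint and the `perUserMfaState` property as assumptions and verify them against current Graph documentation before relying on them):

```python
# Sketch only: the beta Graph route for per-user MFA state. The endpoint and
# property name are assumptions to verify; nothing is sent over the network.
import json

GRAPH_BETA = "https://graph.microsoft.com/beta"

def build_disable_per_user_mfa_request(user_upn: str):
    """Return the (method, url, body) for turning off per-user MFA."""
    url = f"{GRAPH_BETA}/users/{user_upn}/authentication/requirements"
    body = json.dumps({"perUserMfaState": "disabled"})
    return "PATCH", url, body

method, url, body = build_disable_per_user_mfa_request("username@contoso.com")
print(method, url, body)
```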
-If you haven't deployed Windows Hello for Business and if that isn't an option for now, you can exclude MFA requirement by configuring Conditional Access policy that excludes "**Azure Windows VM Sign-In**" app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business Overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification).
+If you haven't deployed Windows Hello for Business and if that isn't an option for now, you can configure a Conditional Access policy that excludes the Azure Windows VM Sign-In app from the list of cloud apps that require MFA. To learn more about Windows Hello for Business, see [Windows Hello for Business overview](/windows/security/identity-protection/hello-for-business/hello-identity-verification).
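The exclusion described above can be sketched as a Conditional Access policy body for the Microsoft Graph `/identity/conditionalAccess/policies` endpoint. The app ID is the Azure Windows VM Sign-In ID used elsewhere in this article; the display name and the `All` selectors are illustrative:

```python
# Sketch of a Conditional Access policy that requires MFA for cloud apps but
# excludes the Azure Windows VM Sign-In app. Display name and selectors are
# illustrative; the app ID comes from this article.

VM_SIGN_IN_APP_ID = "372140e0-b3b7-4226-8ef9-d57986796201"

policy = {
    "displayName": "Require MFA for cloud apps, except VM sign-in",
    "state": "enabledForReportingButNotEnforced",  # start in report-only mode
    "conditions": {
        "users": {"includeUsers": ["All"]},
        "applications": {
            "includeApplications": ["All"],
            "excludeApplications": [VM_SIGN_IN_APP_ID],
        },
    },
    "grantControls": {"operator": "OR", "builtInControls": ["mfa"]},
}

print(policy["conditions"]["applications"]["excludeApplications"])
```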
> [!NOTE]
-> Windows Hello for Business PIN authentication with RDP has been supported by Windows 10 for several versions, however support for Biometric authentication with RDP was added in Windows 10 version 1809. Using Windows Hello for Business authentication during RDP is only available for deployments that use cert trust model and currently not available for key trust model.
+> Windows Hello for Business PIN authentication with RDP has been supported for several versions of Windows 10. Support for biometric authentication with RDP was added in Windows 10 version 1809. Using Windows Hello for Business authentication during RDP is available only for deployments that use a certificate trust model. It's currently not available for a key trust model.
-Share your feedback about this feature or report issues using it on the [Azure AD feedback forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+Share your feedback about this feature or report problems with using it on the [Azure AD feedback forum](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
### Missing application
-If the Azure Windows VM Sign-In application is missing from Conditional Access, use the following steps to remediate the issue:
+If the Azure Windows VM Sign-In application is missing from Conditional Access, make sure that the application isn't in the tenant:
-1. Check to make sure the application isn't in the tenant by:
- 1. Sign in to the **Azure portal**.
- 1. Browse to **Azure Active Directory** > **Enterprise applications**
- 1. Remove the filters to see all applications, and search for "VM". If you don't see Azure Windows VM Sign-In as a result, the service principal is missing from the tenant.
+1. Sign in to the Azure portal.
+1. Browse to **Azure Active Directory** > **Enterprise applications**.
+1. Remove the filters to see all applications, and search for **VM**. If you don't see **Azure Windows VM Sign-In** as a result, the service principal is missing from the tenant.
Another way to verify it is via Graph PowerShell:

1. [Install the Graph PowerShell SDK](/powershell/microsoftgraph/installation) if you haven't already done so.
-1. `Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"`
-1. Sign-in with a Global Admin account
-1. Consent to permission prompt
-1. `Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Windows VM Sign-In"'`
- 1. If this command results in no output and returns you to the PowerShell prompt, you can create the Service Principal with the following Graph PowerShell command:
- 1. `New-MgServicePrincipal -AppId 372140e0-b3b7-4226-8ef9-d57986796201`
- 1. Successful output will show that the AppID and the Application Name Azure Windows VM Sign-In was created.
-1. Sign out of Graph PowerShell when complete with the following command: `Disconnect-MgGraph`
+1. Run `Connect-MgGraph -Scopes "ServicePrincipalEndpoint.ReadWrite.All","Application.ReadWrite.All"`.
+1. Sign in with a Global Admin account.
+1. Consent to the permission prompt.
+1. Run `Get-MgServicePrincipal -ConsistencyLevel eventual -Search '"DisplayName:Azure Windows VM Sign-In"'`.
+ - If this command results in no output and returns you to the PowerShell prompt, you can create the service principal with the following Graph PowerShell command:
+
+ `New-MgServicePrincipal -AppId 372140e0-b3b7-4226-8ef9-d57986796201`
+ - Successful output will show that the Azure Windows VM Sign-In app and its ID were created.
+1. Sign out of Graph PowerShell by using the `Disconnect-MgGraph` command.
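The service principal creation step can also be done as a raw Microsoft Graph request. A sketch (it would need a token with the `Application.ReadWrite.All` scope; only the request is built here, nothing is sent):

```python
# Equivalent of the New-MgServicePrincipal step as a raw Microsoft Graph
# request. The app ID is the Azure Windows VM Sign-In ID from this article.
import json

def build_create_sp_request(app_id: str):
    url = "https://graph.microsoft.com/v1.0/servicePrincipals"
    body = json.dumps({"appId": app_id})
    return "POST", url, body

method, url, body = build_create_sp_request("372140e0-b3b7-4226-8ef9-d57986796201")
print(method, url, body)
```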
## Next steps
-For more information on Azure Active Directory, see [What is Azure Active Directory](../fundamentals/active-directory-whatis.md).
+For more information about Azure AD, see [What is Azure Active Directory?](../fundamentals/active-directory-whatis.md).
active-directory Cross Cloud Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-cloud-settings.md
After each organization has completed these steps, Azure AD B2B collaboration be
In your Microsoft cloud settings, enable the Microsoft Azure cloud you want to collaborate with.
-> [!NOTE]
-> The admin experience is currently still deploying to national clouds. To access the admin experience in Microsoft Azure Government or Microsoft Azure China, you can use these links:
->
->Microsoft Azure Government - https://aka.ms/cloudsettingsusgov
->
->Microsoft Azure China - https://aka.ms/cloudsettingschina
-
1. Sign in to the [Azure portal](https://portal.azure.com) using a Global administrator or Security administrator account. Then open the **Azure Active Directory** service.
1. Select **External Identities**, and then select **Cross-tenant access settings**.
1. Select **Microsoft cloud settings (Preview)**.
active-directory Cross Tenant Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-overview.md
To set up B2B collaboration, both organizations configure their Microsoft cloud
For configuration steps, see [Configure Microsoft cloud settings for B2B collaboration (Preview)](cross-cloud-settings.md).
-> [!NOTE]
-> The admin experience is currently still deploying to national clouds. To access the admin experience in Microsoft Azure Government or Microsoft Azure China, you can use these links:
->
->Microsoft Azure Government - https://aka.ms/cloudsettingsusgov
->
->Microsoft Azure China - https://aka.ms/cloudsettingschina
-
### Default settings in cross-cloud scenarios

To collaborate with a partner tenant in a different Microsoft Azure cloud, both organizations need to mutually enable B2B collaboration with each other. The first step is to enable the partner's cloud in your cross-tenant settings. When you first enable another cloud, B2B collaboration is blocked for all tenants in that cloud. You need to add the tenant you want to collaborate with to your Organizational settings, and at that point your default settings go into effect for that tenant only. You can allow the default settings to remain in effect, or you can modify the organizational settings for the tenant.
active-directory Cross Tenant Access Settings B2b Collaboration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-collaboration.md
With outbound settings, you select which of your users and groups will be able t
- Select the user or group in the search results. - When you're done selecting the users and groups you want to add, choose **Select**.
+ > [!NOTE]
+ > When targeting your users and groups, you won't be able to select users who have configured [SMS-based authentication](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-sms-signin). This is because users who have a "federated credential" on their user object are blocked to prevent external users from being added to outbound access settings. As a workaround, you can use the [Microsoft Graph API](https://docs.microsoft.com/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-1.0) to add the user's object ID directly or target a group the user belongs to.
+
1. Select the **External applications** tab.
1. Under **Access status**, select one of the following:
active-directory Cross Tenant Access Settings B2b Direct Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/cross-tenant-access-settings-b2b-direct-connect.md
With outbound settings, you select which of your users and groups will be able t
- In the **Select** pane, type the user name or the group name in the search box. - When you're done selecting users and groups, choose **Select**.
+ > [!NOTE]
+ > When targeting your users and groups, you won't be able to select users who have configured [SMS-based authentication](https://docs.microsoft.com/azure/active-directory/authentication/howto-authentication-sms-signin). This is because users who have a "federated credential" on their user object are blocked to prevent external users from being added to outbound access settings. As a workaround, you can use the [Microsoft Graph API](https://docs.microsoft.com/graph/api/resources/crosstenantaccesspolicy-overview?view=graph-rest-1.0) to add the user's object ID directly or target a group the user belongs to.
+
1. Select **Save**.
1. Select the **External applications** tab.
1. Under **Access status**, select one of the following:
active-directory Auth Ssh https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/auth-ssh.md
Title: SSH authentication with Azure Active Directory
-description: Architectural guidance on achieving SSH integration with Azure Active Directory
+description: Get architectural guidance on achieving SSH integration with Azure Active Directory.
-# SSH
+# SSH authentication with Azure Active Directory
-Secure Shell (SSH) is a network protocol that provides encryption for operating network services securely over an unsecured network. SSH also provides a command-line sign-in, executes remote commands, and securely transfer files. It's commonly used in Unix-based systems such as Linux®. SSH replaces the Telnet protocol, which doesn't provide encryption in an unsecured network.
+Secure Shell (SSH) is a network protocol that provides encryption for operating network services securely over an unsecured network. It's commonly used in Unix-based systems such as Linux. SSH replaces the Telnet protocol, which doesn't provide encryption in an unsecured network.
-Azure Active Directory (Azure AD) provides a Virtual Machine (VM) extension for Linux®-based systems running on Azure, and a client extension that integrates with [Azure CLI](/cli/azure/) and the OpenSSH client.
+Azure Active Directory (Azure AD) provides a virtual machine (VM) extension for Linux-based systems that run on Azure. It also provides a client extension that integrates with the [Azure CLI](/cli/azure/) and the OpenSSH client.
-## Use when 
+You can use SSH authentication with Azure AD when you're:
-* Working with Linux®-based VMs that require remote sign-in
+* Working with Linux-based VMs that require remote command-line sign-in.
-* Executing remote commands in Linux®-based systems
+* Running remote commands in Linux-based systems.
-* Securely transfer files in an unsecured network
+* Securely transferring files in an unsecured network.
-![diagram of Azure AD with SSH protocol](./media/authentication-patterns/ssh-auth.png)
+## Components of the system 
-## Components of system 
+The following diagram shows the process of SSH authentication with Azure AD:
-* **User**: Starts Azure CLI and SSH client to set up a connection with the Linux® VMs and provides credentials for authentication.
+![Diagram of Azure AD with the SSH protocol.](./media/authentication-patterns/ssh-auth.png)
-* **Azure CLI**: The component that the user interacts with to initiate their session with Azure AD, request short-lived OpenSSH user certificates from Azure AD, and initiate the SSH session.
+The system includes the following components:
-* **Web browser**: The component that the user interacts with to authenticate their Azure CLI session. It communicates with the Identity Provider (Azure AD) to securely authenticate and authorize the user.
+* **User**: The user starts the Azure CLI and the SSH client to set up a connection with the Linux VMs. The user also provides credentials for authentication.
-* **OpenSSH Client**: This client is used by Azure CLI, or (optionally) directly by the end user, to initiate a connection to the Linux VM.
+* **Azure CLI**: The user interacts with the Azure CLI to start a session with Azure AD, request short-lived OpenSSH user certificates from Azure AD, and start the SSH session.
-* **Azure AD**: Authenticates the identity of the user and issues short-lived OpenSSH user certificates to their Azure CLI client.
+* **Web browser**: The user opens a browser to authenticate the Azure CLI session. The browser communicates with the identity provider (Azure AD) to securely authenticate and authorize the user.
-* **Linux VM**: Accepts OpenSSH user certificate and provides successful connection.
+* **OpenSSH client**: The Azure CLI (or the user) uses the OpenSSH client to start a connection to the Linux VM.
-## Implement SSH with Azure AD 
+* **Azure AD**: Azure AD authenticates the identity of the user and issues short-lived OpenSSH user certificates to the Azure CLI client.
-* [Log in to a Linux® VM with Azure Active Directory credentials - Azure Virtual Machines ](../devices/howto-vm-sign-in-azure-ad-linux.md)
+* **Linux VM**: The Linux VM accepts the OpenSSH user certificate and provides a successful connection.
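To illustrate the "short-lived certificate" idea in the flow above, here's a toy validity-window check (this is not the extension's or OpenSSH's actual logic, and the lifetime used is a made-up example):

```python
# Toy illustration of a short-lived credential: valid only inside a narrow
# time window starting at issuance. Not the real certificate format or logic.
from datetime import datetime, timedelta, timezone

def is_certificate_valid(issued_at, lifetime_minutes, now=None):
    now = now or datetime.now(timezone.utc)
    return issued_at <= now < issued_at + timedelta(minutes=lifetime_minutes)

issued = datetime(2022, 7, 12, 12, 0, tzinfo=timezone.utc)
print(is_certificate_valid(issued, 60, now=issued + timedelta(minutes=30)))  # True
print(is_certificate_valid(issued, 60, now=issued + timedelta(minutes=90)))  # False
```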
+
+## Next steps
+
+* To implement SSH with Azure AD, see [Log in to a Linux VM by using Azure AD credentials](../devices/howto-vm-sign-in-azure-ad-linux.md).
active-directory Recover From Deletions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recover-from-deletions.md
To restore an application from the Azure portal, select **App registrations** >
[![Screenshot that shows the app registration restore process in the azure portal.](./media/recoverability/deletion-restore-application.png)](./media/recoverability/deletion-restore-application.png#lightbox)
+To restore applications using Microsoft Graph, see [Restore deleted item - Microsoft Graph v1.0.](/graph/api/directory-deleteditems-restore?tabs=http)
+
## Hard deletions

A hard deletion is the permanent removal of an object from your Azure AD tenant. Objects that don't support soft delete are removed in this way. Similarly, soft-deleted objects are hard deleted after a deletion time of 30 days. The only object types that support a soft delete are:
Ensure you have a process to frequently review items in the soft-delete state an
* Ensure that you have specific roles or users assigned to evaluate and restore items as appropriate.
* Develop and test a continuity management plan. For more information, see [Considerations for your Enterprise Business Continuity Management Plan](/compliance/assurance/assurance-developing-your-ebcm-plan).
-For more information on how to avoid unwanted deletions, see the following topics in [Recoverability best practices](recoverability-overview.md):
+For more information on how to avoid unwanted deletions, see the following articles in [Recoverability best practices](recoverability-overview.md):
* Business continuity and disaster planning * Document known good states
active-directory Recoverability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/recoverability-overview.md
The Audit log always records a "Delete \<object\>" event when an object in the t
:::image type="content" source="media/recoverability/deletions-audit-log.png" alt-text="Screenshot that shows Audit log detail." lightbox="media/recoverability/deletions-audit-log.png":::
-A Delete event for applications, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete.
+A Delete event for applications, service principals, users, and Microsoft 365 Groups is a soft delete. For any other object type, it's a hard delete.
| Object type | Activity in log| Result |
| - | - | - |
-| Application| Delete application| Soft deleted |
-| Application| Hard delete application| Hard deleted |
+| Application| Delete application and service principal| Soft deleted |
+| Application| Hard delete application | Hard deleted |
+| Service principal| Delete service principal| Soft deleted |
+| Service principal| Hard delete service principal| Hard deleted |
| User| Delete user| Soft deleted |
| User| Hard delete user| Hard deleted |
| Microsoft 365 Groups| Delete group| Soft deleted |
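The soft-delete/hard-delete mapping can be expressed as a small lookup (a sketch limited to the object types this article lists):

```python
# The object types this article says support soft delete; everything else is
# hard deleted immediately, and a "Hard delete" activity is always permanent.
SOFT_DELETE_TYPES = {"Application", "Service principal", "User", "Microsoft 365 Groups"}

def delete_kind(object_type: str, activity: str) -> str:
    if activity.startswith("Hard delete"):
        return "Hard deleted"
    if object_type in SOFT_DELETE_TYPES:
        return "Soft deleted"
    return "Hard deleted"

print(delete_kind("User", "Delete user"))       # Soft deleted
print(delete_kind("User", "Hard delete user"))  # Hard deleted
print(delete_kind("Device", "Delete device"))   # Hard deleted
```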
active-directory Road To The Cloud Establish https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-establish.md
# Establish an Azure AD footprint
+## Required tasks
+
If you're using Microsoft Office 365, Exchange Online, or Teams, you're already using Azure AD. If so, your next step is to establish more Azure AD capabilities.

* Establish hybrid identity synchronization between AD and Azure AD using [Azure AD Connect](../hybrid/whatis-azure-ad-connect.md) or [Azure AD Connect Cloud Sync](../cloud-sync/what-is-cloud-sync.md).
active-directory Road To The Cloud Implement https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-implement.md
These links provide additional information on this topic but are not specific to
* [Provide optional claims to Azure AD apps - Microsoft identity platform](../develop/active-directory-optional-claims.md)
+These links provide additional information relevant to groups:
+
* [Create or edit a dynamic group and get status - Azure AD](../enterprise-users/groups-create-rule.md)
* Use dynamic groups for automated group management
The organization has a process to evaluate Azure AD alternatives when considerin
* [Azure Files](../../storage/files/storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the industry standard SMB or NFS protocol. Customers can use native [Azure AD authentication to Azure Files](../../virtual-desktop/create-profile-container-azure-ad.md) over the internet without line of sight to a DC.
- * Azure AD also works with third party applications in our [Application Gallery](/security/business/identity-access-management/integrated-apps-azure-ad)
+ * Azure AD also works with third party applications in our [Application Gallery](/microsoft-365/enterprise/integrated-apps-and-azure-ads)
* Print Servers
active-directory Road To The Cloud Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-migrate.md
To enable self-service capabilities, your authentication methods must be updated
### To scale out
-Gradually register and enable SSPR. For example, roll out by region, subsidiary, department, etc. for all users. This enables both MFA and SSPR. Refer to [Sample SSPR rollout materials](/download/details.aspx?id=56768) to assist with required end-user communications and evangelizing.
+Gradually register and enable SSPR. For example, roll out by region, subsidiary, department, etc. for all users. This enables both MFA and SSPR. Refer to [Sample SSPR rollout materials](https://www.microsoft.com/download/details.aspx?id=56768) to assist with required end-user communications and evangelizing.
**Key points:**
To transform groups and distribution lists:
* Upgrade your [distribution lists to Microsoft 365 groups in Outlook](https://support.microsoft.com/office/7fb3d880-593b-4909-aafa-950dd50ce188) and [decommission your on-premises Exchange server](/exchange/decommission-on-premises-exchange).
-### Move application provisioning
+### Move provisioning of users and groups to applications
This workstream will help you to simplify your environment by removing application provisioning flows from on-premises IDM systems such as Microsoft Identity Manager. Based on your application discovery, categorize your application based on the following:
This project has two primary initiatives. The first is to plan and implement a V
For more information, see:
-* [Deploy Azure AD joined VMs in Azure Virtual Desktop - Azure](/virtual-desktop/deploy-azure-ad-joined-vm)
+* [Deploy Azure AD joined VMs in Azure Virtual Desktop - Azure](/azure/virtual-desktop/deploy-azure-ad-joined-vm)
* [Windows 365 planning guide](/windows-365/enterprise/planning-guide)
The following tools can help you to discover applications that use LDAP.
* [Event1644Reader](/troubleshoot/windows-server/identity/event1644reader-analyze-ldap-query-performance) : Sample tool for collecting data on LDAP Queries made to Domain Controllers using Field Engineering Logs.
-* [Microsoft Microsoft 365 Defender for Identity](/ATPDocs/monitored-activities.md): Utilize the sign in Operations monitoring capability (note captures binds using LDAP, but not Secure LDAP.
+* [Microsoft Microsoft 365 Defender for Identity](/defender-for-identity/monitored-activities): Utilize the sign in Operations monitoring capability (note captures binds using LDAP, but not Secure LDAP.
* [PSLDAPQueryLogging](https://github.com/RamblingCookieMonster/PSLDAPQueryLogging) : GitHub tool for reporting on LDAP queries.
Legacy applications have different areas of dependencies to AD:
To reduce or eliminate the dependencies above, there are three main approaches, listed below in order of preference:
-**Approach 1** Replace with SaaS alternatives that use modern authentication. In this approach, undertake projects to migrate from legacy applications to SaaS alternatives that use modern authentication. Have the SaaS alternatives authenticate to Azure AD directly.
+* **Approach 1** Replace with SaaS alternatives that use modern authentication. In this approach, undertake projects to migrate from legacy applications to SaaS alternatives that use modern authentication. Have the SaaS alternatives authenticate to Azure AD directly.
-**Approach 2** Replatform (for example, adopt serverless/PaaS) to support modern hosting without servers and/or update the code to support modern authentication. In this approach, undertake projects to update authentication code for applications that will be modernized or replatform on serverless/PaaS to eliminate the need for underlying server management. Enable the app to use modern authentication and integrate to Azure AD directly. [Learn about MSAL - Microsoft identity platform](../develop/msal-overview.md).
+* **Approach 2** Replatform (for example, adopt serverless/PaaS) to support modern hosting without servers and/or update the code to support modern authentication. In this approach, undertake projects to update authentication code for applications that will be modernized or replatform on serverless/PaaS to eliminate the need for underlying server management. Enable the app to use modern authentication and integrate to Azure AD directly. [Learn about MSAL - Microsoft identity platform](../develop/msal-overview.md).
-**Approach 3** Leave the applications as legacy applications for the foreseeable future or sunset the applications and opportunity arises. We recommend that this is considered as a last resort.
+* **Approach 3** Leave the applications as legacy applications for the foreseeable future, or sunset them as the opportunity arises. We recommend considering this only as a last resort.
Based on the app dependencies, you have three migration options:
-#### Migration option #1
-
-* Utilize Azure AD Domain Services if the dependencies are aligned with [Common deployment scenarios for Azure AD Domain Services](../../active-directory-domain-services/scenarios.md).
-
-* To validate if Azure AD DS is a good fit, you might use tools like Service Map [Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [Automatic Dependency Mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867).
-
-* Validate your SQL server instantiations can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
-
-##### Option 1 steps
+#### Implement approach #1
1. Deploy Azure AD Domain Services into an Azure virtual network
Based on the app dependencies, you have three migration options:
4. As legacy apps retire through attrition, eventually decommission Azure AD Domain Services running in the Azure virtual network
-#### Migration option #2
+>[!NOTE]
+>* Utilize Azure AD Domain Services if the dependencies are aligned with [Common deployment scenarios for Azure AD Domain Services](../../active-directory-domain-services/scenarios.md).
+>* To validate whether Azure AD DS is a good fit, you might use tools like [Service Map in the Microsoft Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/Microsoft.ServiceMapOMS?tab=Overview) and [Automatic Dependency Mapping with Service Map and Live Maps](https://techcommunity.microsoft.com/t5/system-center-blog/automatic-dependency-mapping-with-service-map-and-live-maps/ba-p/351867).
+>* Validate that your SQL Server instances can be [migrated to a different domain](https://social.technet.microsoft.com/wiki/contents/articles/24960.migrating-sql-server-to-new-domain.aspx). If your SQL service is running in virtual machines, [use this guidance](/azure-sql/migration-guides/virtual-machines/sql-server-to-sql-on-azure-vm-individual-databases-guide).
-Extend on-premises AD to Azure IaaS. If #1 isn't possible and an application has a strong dependency on AD
+#### Implement approach #2
-##### Option 2 steps
+Extend on-premises AD to Azure IaaS if approach #1 isn't possible and an application has a strong dependency on AD.
1. Connect an Azure virtual network to the on-premises network via VPN or ExpressRoute
Extend on-premises AD to Azure IaaS. If #1 isn't possible and an application has
6. As legacy apps retire through attrition, eventually decommission the Active Directory running in the Azure virtual network
-#### Migration option #3
+#### Implement approach #3
Deploy a new AD in Azure IaaS if approach #1 isn't possible and an application has a strong dependency on AD. This approach enables you to decouple the app from the existing AD to reduce the attack surface.
-##### Option 3 steps
- 1. Deploy a new Active Directory as virtual machines into an Azure virtual network 2. Lift and shift legacy apps to VMs on the Azure virtual network that are domain-joined to the new Active Directory
active-directory Road To The Cloud Posture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/road-to-the-cloud-posture.md
In enterprise-sized organizations, IAM transformation, or even transformation fr
[ ![Diagram that shows five elements, each depicting a possible network architecture. Options include cloud attached, hybrid, cloud first, AD minimized, and 100% cloud.](media/road-to-cloud-posture/road-to-the-cloud-five-states.png) ](media/road-to-cloud-posture/road-to-the-cloud-five-states.png#lightbox) >[!NOTE]
-> The states in this diagram represent a logical progression of cloud transformation.
+> The states in this diagram represent a logical progression of cloud transformation. Your ability to move from one state to the next depends on the functionality you've implemented and whether that functionality can move to the cloud.
**State 1 Cloud attached** - In this state, organizations have created an Azure AD tenant to enable user productivity and collaboration tools and the tenant is fully operational. Most companies that use Microsoft products and services in their IT environment are already in or beyond this state. In this state operational costs may be higher because there's an on-premises environment and cloud environment to maintain and make interactive. Also, people must have expertise in both environments to support their users and the organization. In this state:
active-directory Recover Deleted Apps Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/recover-deleted-apps-faq.md
+
+ Title: Frequently asked questions about recovering deleted apps
+
+description: Find answers to frequently asked questions (FAQs) about recovering deleted apps and service principals.
+++++++ Last updated : 05/24/2022++++++
+# Recover deleted applications in Azure Active Directory FAQs
+
+This page answers frequently asked questions about deleting and restoring deleted application registrations and service principals.
+
+## When I create applications, I'm getting Directory_QuotaExceeded error. How can I avoid this problem?
+A non-admin user can create no more than 250 Azure AD resources, including applications and service principals. Both active resources and deleted resources that are available to restore count toward this quota. Applications that you delete but don't permanently remove still count toward the quota. To free up quota, you need to [permanently delete](/graph/api/directory-deleteditems-delete?tabs=http) objects in the deleted items container. For more information about the service limits, see [Azure AD service limits and restrictions](/azure/azure-resource-manager/management/azure-subscription-service-limits#active-directory-limits).
+
+The quota limit for Azure AD resources applies when you create applications or service principals by using a delegated flow, such as the Azure AD App registrations or Enterprise applications portal. Applications created programmatically through the Microsoft Graph API by using the application flow aren't subject to this restriction.
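As an illustrative sketch of the quota accounting described above (the 250 limit comes from the documented Active Directory limits; the counts below are hypothetical):

```python
QUOTA = 250  # maximum Azure AD resources a non-admin user can create

def remaining_quota(active: int, soft_deleted: int) -> int:
    """Both active objects and soft-deleted (restorable) objects count toward the quota."""
    return QUOTA - (active + soft_deleted)

# Soft-deleting apps alone doesn't free quota; permanently deleting them does.
print(remaining_quota(200, 50))  # 0: the next create fails with Directory_QuotaExceeded
print(remaining_quota(200, 0))   # 50, once the 50 soft-deleted apps are permanently deleted
```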
+
+## Where can I find all the deleted applications and service principals?
+
+Soft-deleted application and service principal objects go into the [deleted items](/graph/api/resources/directory?tabs=http) container and remain available to restore for up to 30 days. After 30 days, they're permanently deleted, and this frees up the quota.
+You can find deleted applications by using one of the following approaches:
+
+- Using the Azure portal
+
+Recently deleted application objects can be found under the **Deleted applications** tab on the **App registrations** blade of the Azure portal.
+
+ :::image type="content" source="media/delete-application-portal/recover-deleted-apps.png" alt-text="Screenshot shows list of deleted items.":::
+
+- Using the Microsoft Graph API
+
+Recently deleted application and service principal objects can be found using the [List deletedItems](/graph/api/directory-deleteditems-list?tabs=http) API.
+
+- Using PowerShell
+
+Recently deleted application and service principal objects can be found using the
+[Get-AzureADMSDeletedDirectoryObject](/powershell/module/azuread/get-azureadmsdeleteddirectoryobject?tabs=http) cmdlet.
+
+## How do I restore deleted applications or service principals?
+
+- Using Microsoft Graph API
+
+Deleted objects can be restored using the [Restore deleted item](/graph/api/directory-deleteditems-restore?tabs=http) API.
+
+- Using PowerShell
+
+Deleted objects can be restored using the [Restore-AzureADMSDeletedDirectoryObject](/powershell/module/azuread/restore-azureadmsdeleteddirectoryobject?tabs=http) cmdlet.
+
+## How do I permanently delete soft deleted applications or service principals?
+
+- Using the Microsoft Graph API
+
+Soft deleted objects can be permanently deleted by using the [Permanently delete an item from deleted items](/graph/api/directory-deleteditems-delete?tabs=http) API.
+
+- Using PowerShell
+
+Soft deleted objects can be permanently deleted using the [Remove-AzureADMSDeletedDirectoryObject](/powershell/module/azuread/remove-azureadmsdeleteddirectoryobject?tabs=http) cmdlet.
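The list, restore, and permanent-delete operations above map to plain Microsoft Graph REST calls. As a non-authoritative sketch (not official SDK code), the request method and URL for each operation can be composed like this; the object ID below is a placeholder, and the request still needs an authorized HTTP client to send:

```python
GRAPH = "https://graph.microsoft.com/v1.0"

def deleted_item_request(operation: str, object_id: str = "") -> tuple[str, str]:
    """Build the (HTTP method, URL) pair for a directory deletedItems operation.

    operation: 'list' (deleted applications), 'restore', or 'delete' (permanent).
    """
    if operation == "list":
        # Listing requires a type cast segment, e.g. microsoft.graph.application
        return ("GET", f"{GRAPH}/directory/deletedItems/microsoft.graph.application")
    if operation == "restore":
        return ("POST", f"{GRAPH}/directory/deletedItems/{object_id}/restore")
    if operation == "delete":
        return ("DELETE", f"{GRAPH}/directory/deletedItems/{object_id}")
    raise ValueError(f"unknown operation: {operation}")

# Example: permanently delete a soft-deleted object to free up quota.
method, url = deleted_item_request("delete", "11111111-2222-3333-4444-555555555555")
```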
+
+## Can I configure the interval in which applications and service principals are permanently deleted by Azure AD?
+
+No. You can't configure the periodicity of hard deletion.
+
+## I restored a deleted application using the App registrations portal experience. I don't see the SAML SSO configurations I made to the app prior to deletion.
+
+The SAML SSO configurations are stored on the service principal object. When you restore an application from the App registrations UI, it recovers the app object but creates a new service principal. As a result, the SAML SSO configurations made to the app before deletion are lost when you restore it through the App registrations UI.
+
+To correct this problem, delete the new service principal the app registrations experience created and restore the original service principal using the [Microsoft Graph API](/graph/api/directory-deleteditems-restore?tabs=http) or the [Microsoft Graph PowerShell cmdlet](/powershell/module/azuread/restore-azureadmsdeleteddirectoryobject?tabs=http).
+
+If you recorded the object ID of the service principal before deleting the application, use the [Restore deleted item](/graph/api/directory-deleteditems-restore?tabs=http) API to recover the service principal. Otherwise, use the [list deleted items](/graph/api/directory-deleteditems-list?tabs=http) API to fetch the deleted service principal and filter the results by the client's application ID (**appId**) property using the following syntax:
+
+`https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal?$filter=appId eq '{appId}'`
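As a sketch, the filtered lookup above could be composed in Python; the application ID below is a placeholder, and the resulting URL still needs to be sent with an authorized Graph client:

```python
from urllib.parse import quote

GRAPH = "https://graph.microsoft.com/v1.0"

def deleted_service_principal_url(app_id: str) -> str:
    """Build the deletedItems lookup URL, filtering service principals by appId."""
    filter_expr = quote(f"appId eq '{app_id}'")  # URL-encode spaces and quotes
    return f"{GRAPH}/directory/deletedItems/microsoft.graph.servicePrincipal?$filter={filter_expr}"

url = deleted_service_principal_url("00000000-0000-0000-0000-000000000000")
```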
+
+## Why can't I recover managed identities?
+
+[Managed identities](../managed-identities-azure-resources/overview.md) are a special type of service principal. Deleted managed identities can't currently be recovered.
+
+## I can't see the provisioning data from a recovered service principal. How can I recover it?
+
+After recovering a service principal, you may initially see a provisioning error like the one in the following screenshot. This issue resolves itself within 40 minutes to 1 day. If you want the provisioning job to start immediately, select **Restart** to force the provisioning service to run again. Restarting triggers an initial cycle, which can take time for customers with more than 100,000 users or group memberships.
+
+
+## I recovered my application that was configured for application proxy. I can't see app proxy configurations after the recovery. How can I recover them?
+
+App proxy configurations can't be recovered through the portal UI. Use the API to recover app proxy settings. Expect a delay of up to 24 hours as the app proxy data gets synced back.
+
+## I can't see the policies I set on the service principal object after the recovery. How can I recover them?
+
+Policies can't be recovered currently. When you restore a service principal, you'll have to configure the policies again.
+
+## Next steps
+
+- [Delete a service principal](delete-application-portal.md)
+- [Delete an application registration](../develop/howto-restore-app.md)
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
In addition to in-tree driver features, Azure disk CSI driver supports the follo
- Performance improvements during concurrent disk attach and detach - In-tree drivers attach or detach disks in serial, while CSI drivers attach or detach disks in batch. There is significant improvement when there are multiple disks attaching to one node. - Zone-redundant storage (ZRS) disk support
- - `Premium_ZRS`, `StandardSSD_ZRS` disk types are supported, check more details about [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md)
+ - `Premium_ZRS` and `StandardSSD_ZRS` disk types are supported. A ZRS disk can be scheduled on a zonal or non-zonal node, without the restriction that the disk volume be co-located in the same zone as a given node. For more information, see [Zone-redundant storage for managed disks](../virtual-machines/disks-redundancy.md)
- [Snapshot](#volume-snapshots) - [Volume clone](#clone-volumes) - [Resize disk PV without downtime(Preview)](#resize-a-persistent-volume-without-downtime-preview)
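A ZRS disk can be requested through a custom storage class. The following is a minimal sketch, not a definitive configuration; the class name is a placeholder, and the `skuName` parameter values come from the ZRS support described above:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: managed-csi-zrs    # hypothetical class name
provisioner: disk.csi.azure.com
parameters:
  skuName: Premium_ZRS     # or StandardSSD_ZRS
reclaimPolicy: Delete
allowVolumeExpansion: true
```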
$ kubectl exec -it busybox-azuredisk-0 -- cat c:\mnt\azuredisk\data.txt # on Win
[az-provider-register]: /cli/azure/provider#az_provider_register [az-on-demand-bursting]: ../virtual-machines/disk-bursting.md#on-demand-bursting [enable-on-demand-bursting]: ../virtual-machines/disks-enable-bursting.md?tabs=azure-cli
-[az-premium-ssd]: ../virtual-machines/disks-types.md#premium-ssds
+[az-premium-ssd]: ../virtual-machines/disks-types.md#premium-ssds
aks Azure Disk Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-customer-managed-keys.md
$keyVaultKeyUrl=az keyvault key show --vault-name myKeyVaultName --name myKeyNa
az disk-encryption-set create -n myDiskEncryptionSetName -l myAzureRegionName -g myResourceGroup --source-vault $keyVaultId --key-url $keyVaultKeyUrl ```
+> [!IMPORTANT]
+> Ensure your AKS cluster identity has read permission on the DiskEncryptionSet resource
+ ## Grant the DiskEncryptionSet access to key vault Use the DiskEncryptionSet and resource groups you created on the prior steps, and grant the DiskEncryptionSet resource access to the Azure Key Vault.
aks Csi Secrets Store Driver https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-secrets-store-driver.md
aks-secrets-store-provider-azure-6pqmv 1/1 Running 0 4m24s
aks-secrets-store-provider-azure-f5qlm 1/1 Running 0 4m25s ```
-Be sure that a Secrets Store CSI Driver pod and an Azure Key Vault Provider pod are running on each node in your cluster's node pools.
+Be sure that a Secrets Store CSI Driver pod and a Secrets Store Provider Azure pod are running on each node in your cluster's node pools.
## Create or use an existing Azure key vault
api-management Api Management Access Restriction Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-access-restriction-policies.md
This policy can be used in the following policy [sections](./api-management-howt
## <a name="ValidateJWT"></a> Validate JWT
-The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from either a specified HTTP header or a specified query parameter.
+The `validate-jwt` policy enforces existence and validity of a JSON web token (JWT) extracted from a specified HTTP header, extracted from a specified query parameter, or provided by a policy expression.
> [!IMPORTANT] > The `validate-jwt` policy requires that the `exp` registered claim is included in the JWT token, unless `require-expiration-time` attribute is specified and set to `false`.
The `validate-jwt` policy enforces existence and validity of a JSON web token (J
```xml <validate-jwt
- header-name="name of http header containing the token (use query-parameter-name attribute if the token is passed in the URL)"
+ header-name="name of HTTP header containing the token (alternatively, use query-parameter-name or token-value attribute to specify token)"
+    query-parameter-name="name of query parameter used to pass the token (alternatively, use header-name or token-value attribute to specify token)"
+    token-value="expression returning the token as a string (alternatively, use header-name or query-parameter-name attribute to specify token)"
failed-validation-httpcode="http status code to return on failure" failed-validation-error-message="error message to return on failure"
- token-value="expression returning JWT token as a string"
require-expiration-time="true|false" require-scheme="scheme" require-signed-tokens="true|false"
This example shows how to use the [Validate JWT](api-management-access-restricti
| failed-validation-httpcode | HTTP Status code to return if the JWT doesn't pass validation. | No | 401 | | header-name | The name of the HTTP header holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A | | query-parameter-name | The name of the query parameter holding the token. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
-| token-value | Expression returning a string containing JWT token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
+| token-value | Expression returning a string containing the token. You must not return `Bearer ` as part of the token value. | One of `header-name`, `query-parameter-name` or `token-value` must be specified. | N/A |
| id | The `id` attribute on the `key` element allows you to specify the string that will be matched against `kid` claim in the token (if present) to find out the appropriate key to use for signature validation. | No | N/A | | match | The `match` attribute on the `claim` element specifies whether every claim value in the policy must be present in the token for validation to succeed. Possible values are:<br /><br /> - `all` - every claim value in the policy must be present in the token for validation to succeed.<br /><br /> - `any` - at least one claim value must be present in the token for validation to succeed. | No | all | | require-expiration-time | Boolean. Specifies whether an expiration claim is required in the token. | No | true |
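As an illustrative sketch of the `token-value` attribute described above (the header name here is hypothetical, and `{tenant-id}` is a placeholder), a policy expression can extract the token and strip the `Bearer ` prefix before validation:

```xml
<validate-jwt token-value='@(context.Request.Headers.GetValueOrDefault("X-Api-Token", "").Replace("Bearer ", ""))'
    failed-validation-httpcode="401"
    failed-validation-error-message="Unauthorized">
    <openid-config url="https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration" />
</validate-jwt>
```

Note the single-quoted attribute value, which keeps the XML well-formed while the expression itself uses double quotes.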
api-management How To Configure Service Fabric Backend https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-configure-service-fabric-backend.md
To test the integration of API Management with the cluster, add the correspondin
:::image type="content" source="media/backends/configure-get-operation.png" alt-text="Add GET operation to API":::
-### Configure `set-backend` policy
+### Configure `set-backend-service` policy
Add the [`set-backend-service`](api-management-transformation-policies.md#SetBackendService) policy to the test API.
api-management Validation Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/validation-policies.md
To add a schema to your API Management instance using the Azure portal:
1. In the [portal](https://portal.azure.com), navigate to your API Management instance. 1. In the **APIs** section of the left-hand menu, select **Schemas** > **+ Add**. 1. In the **Create schema** window, do the following:
- 1. Enter a **Name** for the schema.
+ 1. Enter a **Name** (Id) for the schema.
1. In **Schema type**, select **JSON** or **XML**. 1. Enter a **Description**. 1. In **Create method**, do one of the following:
To add a schema to your API Management instance using the Azure portal:
:::image type="content" source="media/validation-policies/add-schema.png" alt-text="Create schema":::
-After the schema is created, it appears in the list on the **Schemas** page. Select a schema to view its properties or to edit in a schema editor.
+API Management adds the schema resource at the relative URI `/schemas/<schemaId>`, and the schema appears in the list on the **Schemas** page. Select a schema to view its properties or to edit in a schema editor.
> [!NOTE]
-> * A schema may cross-reference another schema that is added to the API Management instance.
-> * Open-source tools to resolve WSDL and XSD schema references and to batch-import generated schemas to API Management are available on [GitHub](https://github.com/Azure-Samples/api-management-schema-import).
+> A schema may cross-reference another schema that is added to the API Management instance. For example, include an XML schema added to API Management by using an element similar to:<br/><br/>`<xs:include schemaLocation="/schemas/myschema" />`
++
+> [!TIP]
+> Open-source tools to resolve WSDL and XSD schema references and to batch-import generated schemas to API Management are available on [GitHub](https://github.com/Azure-Samples/api-management-schema-import).
### Usage
application-gateway Application Gateway Ssl Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-ssl-policy-overview.md
Application Gateway supports the following cipher suites from which you can choo
- The connections to backend servers are always with minimum protocol TLS v1.0 and up to TLS v1.2. Therefore, only TLS versions 1.0, 1.1 and 1.2 are supported to establish a secured connection with backend servers. - As of now, the TLS 1.3 implementation is not enabled with &#34;Zero Round Trip Time (0-RTT)&#34; feature.-- The Portal support for the new policies and TLS 1.3 is currently unavailable. - Application Gateway v2 does not support the following DHE ciphers. These won't be used for the TLS connections with clients even though they are mentioned in the predefined policies. Instead of DHE ciphers, secure and faster ECDHE ciphers are recommended. - TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 - TLS_DHE_RSA_WITH_AES_128_CBC_SHA
application-gateway Redirect Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/redirect-overview.md
A common redirection scenario for many web applications is to support automatic
A redirect type sets the response status code for the clients to understand the purpose of the redirect. The following types of redirection are supported: - 301 (Moved permanently): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource will use one of the enclosed URIs. Use 301 status code for HTTP to HTTPS redirection.
+- 303 (See Other): Indicates that the target resource is available at a different URI and should be retrieved by using a GET request to that URI.
- 302 (Found): Indicates that the target resource is temporarily under a different URI. Since the redirection can change on occasion, the client should continue to use the effective request URI for future requests. - 307 (Temporary redirect): Indicates that the target resource is temporarily under a different URI. The user agent MUST NOT change the request method if it does an automatic redirection to that URI. Since the redirection can change over time, the client ought to continue using the original effective request URI for future requests.-- 308 (Permanent redirect): Indicates that the target resource has been assigned a new permanent URI. Any future references to this resource should use one of the enclosed URIs. ## Redirection capabilities
application-gateway Tutorial Ingress Controller Add On New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-ingress-controller-add-on-new.md
Title: 'Tutorial: Enable the Ingress Controller add-on for a new AKS cluster with a new Azure Application Gateway'
-description: Use this tutorial to learn how to enable the Ingress Controller add-on for your new AKS cluster with a new Application Gateway instance.
+ Title: 'Tutorial: Enable the Ingress Controller add-on for a new AKS cluster with a new Azure application gateway'
+description: Use this tutorial to learn how to enable the Ingress Controller add-on for your new AKS cluster with a new application gateway instance.
Previously updated : 03/02/2021 Last updated : 07/12/2022 +
-# Tutorial: Enable the Ingress Controller add-on for a new AKS cluster with a new Application Gateway instance
+# Tutorial: Enable the ingress controller add-on for a new AKS cluster with a new application gateway instance
-You can use the Azure CLI to enable the [Application Gateway Ingress Controller (AGIC)](ingress-controller-overview.md) add-on for a new [Azure Kubernetes Services (AKS)](https://azure.microsoft.com/services/kubernetes-service/) cluster.
+You can use the Azure CLI to enable the [application gateway ingress controller (AGIC)](ingress-controller-overview.md) add-on for a new [Azure Kubernetes Services (AKS)](https://azure.microsoft.com/services/kubernetes-service/) cluster.
-In this tutorial, you'll create an AKS cluster with the AGIC add-on enabled. Creating the cluster will automatically create an Azure Application Gateway instance to use. You'll then deploy a sample application that will use the add-on to expose the application through Application Gateway.
+In this tutorial, you'll create an AKS cluster with the AGIC add-on enabled. Creating the cluster will automatically create an Azure application gateway instance to use. You'll then deploy a sample application that will use the add-on to expose the application through application gateway.
The add-on provides a much faster way to deploy AGIC for your AKS cluster than [previously through Helm](ingress-controller-overview.md#difference-between-helm-deployment-and-aks-add-on). It also offers a fully managed experience.
In this tutorial, you learn how to:
> * Create a resource group. > * Create a new AKS cluster with the AGIC add-on enabled. > * Deploy a sample application by using AGIC for ingress on the AKS cluster.
-> * Check that the application is reachable through Application Gateway.
+> * Check that the application is reachable through application gateway.
[!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)]
In this tutorial, you learn how to:
## Create a resource group
-In Azure, you allocate related resources to a resource group. Create a resource group by using [az group create](/cli/azure/group#az-group-create). The following example creates a resource group named *myResourceGroup* in the *canadacentral* location (region):
+In Azure, you allocate related resources to a resource group. Create a resource group by using [az group create](/cli/azure/group#az-group-create). The following example creates a resource group named **myResourceGroup** in the **East US** location (region):
```azurecli-interactive
-az group create --name myResourceGroup --location canadacentral
+az group create --name myResourceGroup --location eastus
``` ## Deploy an AKS cluster with the add-on enabled
-You'll now deploy a new AKS cluster with the AGIC add-on enabled. If you don't provide an existing Application Gateway instance to use in this process, we'll automatically create and set up a new Application Gateway instance to serve traffic to the AKS cluster.
+You'll now deploy a new AKS cluster with the AGIC add-on enabled. If you don't provide an existing application gateway instance to use in this process, a new application gateway instance is automatically created and set up to serve traffic to the AKS cluster.
> [!NOTE]
-> The Application Gateway Ingress Controller add-on supports *only* Application Gateway v2 SKUs (Standard and WAF), and *not* the Application Gateway v1 SKUs. When you're deploying a new Application Gateway instance through the AGIC add-on, you can deploy only an Application Gateway Standard_v2 SKU. If you want to enable the add-on for an Application Gateway WAF_v2 SKU, use either of these methods:
+> The application gateway ingress controller add-on supports *only* application gateway v2 SKUs (Standard and WAF), and *not* the application gateway v1 SKUs. When you're deploying a new application gateway instance through the AGIC add-on, you can deploy only an application gateway Standard_v2 SKU. If you want to enable the add-on for an application gateway WAF_v2 SKU, use either of these methods:
>
-> - Enable WAF on Application Gateway through the portal.
-> - Create the WAF_v2 Application Gateway instance first, and then follow instructions on how to [enable the AGIC add-on with an existing AKS cluster and existing Application Gateway instance](tutorial-ingress-controller-add-on-existing.md).
+> - Enable WAF on application gateway through the portal.
+> - Create the WAF_v2 application gateway instance first, and then follow instructions on how to [enable the AGIC add-on with an existing AKS cluster and existing application gateway instance](tutorial-ingress-controller-add-on-existing.md).
-In the following example, you'll deploy a new AKS cluster named *myCluster* by using [Azure CNI](../aks/concepts-network.md#azure-cni-advanced-networking) and [managed identities](../aks/use-managed-identity.md). The AGIC add-on will be enabled in the resource group that you created, *myResourceGroup*.
+In the following example, you'll deploy a new AKS cluster named *myCluster* by using [Azure CNI](../aks/concepts-network.md#azure-cni-advanced-networking) and [managed identities](../aks/use-managed-identity.md). The AGIC add-on will be enabled in the resource group that you created, **myResourceGroup**.
-Deploying a new AKS cluster with the AGIC add-on enabled without specifying an existing Application Gateway instance will mean an automatic creation of a Standard_v2 SKU Application Gateway instance. So, you'll also specify the name and subnet address space of the Application Gateway instance. The name of the Application Gateway instance will be *myApplicationGateway*, and the subnet address space we're using is 10.2.0.0/16.
+Deploying a new AKS cluster with the AGIC add-on enabled without specifying an existing application gateway instance will mean an automatic creation of a Standard_v2 SKU application gateway instance. So, you'll also specify the name and subnet address space of the application gateway instance. The name of the application gateway instance will be **myApplicationGateway**, and the subnet address space will be **10.225.0.0/16**.
```azurecli-interactive
-az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-cidr "10.2.0.0/16" --generate-ssh-keys
+az aks create -n myCluster -g myResourceGroup --network-plugin azure --enable-managed-identity -a ingress-appgw --appgw-name myApplicationGateway --appgw-subnet-cidr "10.225.0.0/16" --generate-ssh-keys
```
-To configure additional parameters for the `az aks create` command, see [these references](/cli/azure/aks#az-aks-create).
+To configure more parameters for the preceding command, see [az aks create](/cli/azure/aks#az-aks-create).
> [!NOTE]
-> The AKS cluster that you created will appear in the resource group that you created, *myResourceGroup*. However, the automatically created Application Gateway instance will be in the node resource group, where the agent pools are. The node resource group by is named *MC_resource-group-name_cluster-name_location* by default, but can be modified.
+> The AKS cluster that you created will appear in the resource group that you created, **myResourceGroup**. However, the automatically created application gateway instance will be in the node resource group, where the agent pools are. The node resource group is named **MC_resource-group-name_cluster-name_location** by default, but can be modified.
## Deploy a sample application by using AGIC
-You'll now deploy a sample application to the AKS cluster that you created. The application will use the AGIC add-on for ingress and connect the Application Gateway instance to the AKS cluster.
+You'll now deploy a sample application to the AKS cluster that you created. The application will use the AGIC add-on for ingress and connect the application gateway instance to the AKS cluster.
First, get credentials to the AKS cluster by running the `az aks get-credentials` command:
First, get credentials to the AKS cluster by running the `az aks get-credentials
az aks get-credentials -n myCluster -g myResourceGroup ```
-Now that you have credentials, run the following command to set up a sample application that uses AGIC for ingress to the cluster. AGIC will update the Application Gateway instance that you set up earlier with corresponding routing rules to the new sample application that you deployed.
+Now that you have credentials, run the following command to set up a sample application that uses AGIC for ingress to the cluster. AGIC will update the application gateway instance that you set up earlier with corresponding routing rules to the sample application you're deploying.
```azurecli-interactive kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kubernetes-ingress/master/docs/examples/aspnetapp.yaml
kubectl apply -f https://raw.githubusercontent.com/Azure/application-gateway-kub
## Check that the application is reachable
-Now that the Application Gateway instance is set up to serve traffic to the AKS cluster, let's verify that your application is reachable. First, get the IP address of the ingress:
+Now that the application gateway instance is set up to serve traffic to the AKS cluster, let's verify that your application is reachable. First, get the IP address of the ingress:
```azurecli-interactive
kubectl get ingress
```
Check that the sample application that you created is running by either:
-- Visiting the IP address of the Application Gateway instance that you got from running the preceding command.
+- Visiting the IP address of the application gateway instance that you got from running the preceding command.
- Using `curl`.
-Application Gateway might take a minute to get the update. If Application Gateway is still in an **Updating** state on the portal, let it finish before you try to reach the IP address.
+The application gateway might take a minute to apply the update. If the application gateway is still in an **Updating** state in the portal, let it finish before you try to reach the IP address.
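If you use `curl`, you first need the ingress IP from the `kubectl get ingress` output. A rough shell sketch of extracting it (the sample table below is illustrative, not this tutorial's actual output; in practice, pipe real `kubectl get ingress` output instead of the here-string):

```shell
# Sketch: pull the ADDRESS column out of sample `kubectl get ingress` output.
sample='NAME        CLASS    HOSTS   ADDRESS        PORTS   AGE
aspnetapp   <none>   *       203.0.113.10   80      2m'
ip=$(printf '%s\n' "$sample" | awk 'NR==2 {print $4}')
echo "$ip"   # prints 203.0.113.10
```

You could then run `curl http://$ip/` to verify the application responds.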
## Clean up resources
-When you no longer need them, remove the resource group, the Application Gateway instance, and all related resources:
+When you no longer need them, delete all resources created in this tutorial by deleting the **myResourceGroup** and **MC_myResourceGroup_myCluster_eastus** resource groups:
```azurecli-interactive
az group delete --name myResourceGroup
+az group delete --name MC_myResourceGroup_myCluster_eastus
```

## Next steps
+In this tutorial, you:
+
+- Created a new AKS cluster with the AGIC add-on enabled
+- Deployed a sample application by using AGIC for ingress on the AKS cluster
+
+To learn more about AGIC, see [What is Application Gateway Ingress Controller?](ingress-controller-overview.md) and [Disable and re-enable AGIC add-on for your AKS cluster](ingress-controller-disable-addon.md).
+
+To learn how to enable the Application Gateway Ingress Controller add-on for an existing AKS cluster with an existing application gateway, advance to the next tutorial.
+ > [!div class="nextstepaction"]
-> [Learn about disabling the AGIC add-on](./ingress-controller-disable-addon.md)
+> [Enable AGIC for existing AKS and application gateway](tutorial-ingress-controller-add-on-existing.md)
applied-ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-custom-neural.md
Tabular fields are also useful when extracting repeating information within a do
## Supported regions
-As of August 01 2022, Form Recognizer custom neural model training will only be available in the following Azure regions until further notice:
+Starting August 1, 2022, Form Recognizer custom neural model training will be available only in the following Azure regions until further notice:
* Brazil South
* Canada Central
* Central India
* Japan East
-* North Europe
+* West Europe
* South Central US
* Southeast Asia
automation Add User Assigned Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/add-user-assigned-identity.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- An Azure Automation account. For instructions, see [Create an Azure Automation account](./quickstarts/create-account-portal.md). -- A system-assigned managed identity. For instructions, see [Using a system-assigned managed identity for an Azure Automation account](enable-managed-identity-for-automation.md).--- A user-assigned managed identity. For instructions, see [Create a user-assigned managed identity](../active-directory/managed-identities-azure-resources/how-to-manage-ua-identity-portal.md#create-a-user-assigned-managed-identity).- - The user-assigned managed identity and the target Azure resources that your runbook manages using that identity can be in different Azure subscriptions. - The latest version of Azure Account modules. Currently this is 2.2.8. (See [Az.Accounts](https://www.powershellgallery.com/packages/Az.Accounts/) for details about this version.) - An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the user-assigned managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant. -- To assign an Azure role, you must have ```Microsoft.Authorization/roleAssignments/write``` permissions, such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+- To add the user-assigned managed identity, you must have the ```Microsoft.ManagedIdentity/userAssignedIdentities/*/read``` and ```Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action``` permissions on that identity. These permissions are granted to the [Managed Identity Operator](/azure/role-based-access-control/built-in-roles#managed-identity-operator) and [Managed Identity Contributor](/azure/role-based-access-control/built-in-roles#managed-identity-contributor) roles.
+
+- To assign an Azure role to the managed identity, you must have the ```Microsoft.Authorization/roleAssignments/write``` permission, which is granted to the [User Access Administrator](/azure/role-based-access-control/built-in-roles#user-access-administrator) and [Owner](/azure/role-based-access-control/built-in-roles#owner) roles.
## Add user-assigned managed identity for Azure Automation account
automation Automation Child Runbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-child-runbooks.md
The parameters of a child runbook called inline can be of any data type, includi
### Runbook types
-Currently, PowerShell 5.1 and 7.1 (preview) are supported and only certain runbook types can call each other:
+Currently, PowerShell 5.1 is supported and only certain runbook types can call each other:
* A [PowerShell runbook](automation-runbook-types.md#powershell-runbooks) and a [graphical runbook](automation-runbook-types.md#graphical-runbooks) can call each other inline, because both are PowerShell based.
* A [PowerShell Workflow runbook](automation-runbook-types.md#powershell-workflow-runbooks) and a graphical PowerShell Workflow runbook can call each other inline, because both are PowerShell Workflow based.
* The PowerShell types and the PowerShell Workflow types can't call each other inline. They must use `Start-AzAutomationRunbook`.
+> [!IMPORTANT]
+> Executing child scripts using `.\child-runbook.ps1` is not supported in PowerShell 7.1 preview.
+ **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from the *Az.Automation* module) to start another runbook from the parent runbook.
+ The publish order of runbooks matters only for PowerShell Workflow and graphical PowerShell Workflow runbooks. When your runbook calls a graphical or PowerShell Workflow child runbook by using inline execution, it uses the name of the runbook. The name must start with `.\\` to specify that the script is in the local directory.
automation Automation Create Standalone Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-create-standalone-account.md
Review your new Automation account.
When the Automation account is successfully created, several resources are automatically created for you. After creation, these runbooks can be safely deleted if you do not wish to keep them. The managed identities can be used to authenticate to your account in a runbook, and should be left unless you create another one or do not require them. The Automation access keys are also created during Automation account creation. The following table summarizes resources for the account.
-|Resource |Description |
-||||
-|AzureAutomationTutorial Runbook |An example graphical runbook that demonstrates how to authenticate by using a Run As account. The runbook gets all Resource Manager resources. |
-|AzureAutomationTutorialScript |An example PowerShell runbook that demonstrates how to authenticate by using a Run As account. The runbook gets all Resource Manager resources.|
-|AzureAutomationTutorialPython2Runbook |An example Python runbook that demonstrates how to authenticate by using a Run As account. The runbook lists all resource groups present in the subscription.|
+| **Resource** | **Description** |
+|||
+|AzureAutomationTutorialWithIdentityGraphical |An example graphical runbook that demonstrates how to authenticate by using the managed identity. The runbook gets all Resource Manager resources. |
+|AzureAutomationTutorialWithIdentity |An example PowerShell runbook that demonstrates how to authenticate by using the managed identity. The runbook gets all Resource Manager resources. |
> [!NOTE]
> The tutorial runbooks have not been updated to authenticate using a managed identity. Review [Using a system-assigned identity](enable-managed-identity-for-automation.md#assign-role-to-a-system-assigned-managed-identity) or [Using a user-assigned identity](add-user-assigned-identity.md#assign-a-role-to-a-user-assigned-managed-identity) to learn how to grant the managed identity access to resources and configure your runbooks to authenticate using either type of managed identity.
The DSC node registers with the State Configuration service using the registrati
* To get started with PowerShell runbooks, see [Tutorial: Create a PowerShell runbook](./learn/powershell-runbook-managed-identity.md). * To get started with PowerShell Workflow runbooks, see [Tutorial: Create a PowerShell workflow runbook](learn/automation-tutorial-runbook-textual.md). * To get started with Python 3 runbooks, see [Tutorial: Create a Python 3 runbook](learn/automation-tutorial-runbook-textual-python-3.md).
-* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
+* For a PowerShell cmdlet reference, see [Az.Automation](/powershell/module/az.automation).
automation Enable Managed Identity For Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/enable-managed-identity-for-automation.md
If you don't have an Azure subscription, create a [free account](https://azure.m
- An Azure resource that you want to access from your Automation runbook. This resource needs to have a role defined for the managed identity, which helps the Automation runbook authenticate access to the resource. To add roles, you need to be an owner for the resource in the corresponding Azure AD tenant. -- If you want to execute hybrid jobs using a managed identity, update the Hybrid Runbook Worker to the latest version. The minimum required versions are:
+- If you want to execute hybrid jobs using a managed identity, update the agent-based Hybrid Runbook Worker to the latest version. There's no minimum version requirement for the extension-based Hybrid Runbook Worker; all versions are supported. The minimum required versions for the agent-based Hybrid Runbook Worker are:
- - Windows Hybrid Runbook Worker: version 7.3.1125.0
- - Linux Hybrid Runbook Worker: version 1.7.4.0
+ - Windows Hybrid Runbook Worker: version 7.3.1125.0
+ - Linux Hybrid Runbook Worker: version 1.7.4.0
+
+ To check the versions:
+ - Windows Hybrid Runbook Worker: Go to the installation path `C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation`. The *Azure Automation* folder contains a sub-folder whose name is the version number.
+ - Linux Hybrid Runbook Worker: Go to the path `/opt/microsoft/omsconfig/modules/nxOMSAutomationWorker/VERSION`. The *VERSION* file contains the version number of the Hybrid Worker.
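A small shell sketch of the comparison step, not part of the official tooling; the installed version shown is a hypothetical value you would read from the paths above:

```shell
# Sketch: check whether an installed Hybrid Worker version meets the minimum.
# version_ge succeeds when $1 >= $2 in dotted-version (sort -V) order.
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

installed="7.3.1300.0"   # hypothetical value read from the install path
if version_ge "$installed" "7.3.1125.0"; then
  echo "meets minimum"
else
  echo "update required"
fi
```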
-- To assign an Azure role, you must have ```Microsoft.Authorization/roleAssignments/write``` permissions, such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
+- To assign an Azure role, you must have the ```Microsoft.Authorization/roleAssignments/write``` permission, which is granted to roles such as [User Access Administrator](../role-based-access-control/built-in-roles.md#user-access-administrator) or [Owner](../role-based-access-control/built-in-roles.md#owner).
## Enable a system-assigned managed identity for an Azure Automation account
automation Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/troubleshoot/managed-identity.md
This article discusses solutions to problems that you might encounter when you use a managed identity with your Automation account. For general information about using managed identity with Automation accounts, see [Azure Automation account authentication overview](../automation-security-overview.md#managed-identities).
+## Scenario: Managed identity in a runbook can't authenticate against Azure
+
+### Issue
+
+When you use a managed identity in your runbook, you receive an error similar to:
+`connect-azaccount : ManagedIdentityCredential authentication failed: Failed to get MSI token for account d94c0db6-5540-438c-9eb3-aa20e02e1226 and resource https://management.core.windows.net/. Status: 500 (Internal Server Error)`
+
+### Cause
+
+This can happen when:
+
+- **Cause 1**: The Automation account's system-assigned managed identity hasn't been created yet, and the code `Connect-AzAccount -Identity` tries to authenticate to Azure while running a runbook in Azure or on a Hybrid Runbook Worker.
+
+- **Cause 2**: The Automation account has a user-assigned managed identity but no system-assigned managed identity, and the code `Connect-AzAccount -Identity` tries to authenticate to Azure while running a runbook on an Azure virtual machine Hybrid Runbook Worker by using the Azure VM's system-assigned managed identity.
++
+### Resolution
+
+- **Resolution 1**: Create the Automation account's system-assigned managed identity and grant it access to the Azure resources.
+
+- **Resolution 2**: As appropriate for your requirements, you can:
+
+ - Create the Automation account's system-assigned managed identity and use it to authenticate.</br>
+ Or </br>
+ - Delete the Automation account's user-assigned managed identity.
+
+## Scenario: Unable to find the user assigned managed identity to add it to the Automation account
+
+### Issue
+
+You want to add a user-assigned managed identity to the Automation account, but you can't find that identity in the Automation blade.
+
+### Cause
+
+This issue occurs when you don't have the following permissions on the user-assigned managed identity, which are required to view it in the Automation blade.
+
+- `Microsoft.ManagedIdentity/userAssignedIdentities/*/read`
+- `Microsoft.ManagedIdentity/userAssignedIdentities/*/assign/action`
+
+>[!NOTE]
+> The above permissions are granted by default to the Managed Identity Operator and Managed Identity Contributor roles.
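The `*` segment in these action strings stands for any identity name. A rough shell illustration of the wildcard semantics (the identity name `myIdentity` is hypothetical, not from this article):

```shell
# Sketch: the '*' in an RBAC action string matches any identity name segment.
pattern='Microsoft.ManagedIdentity/userAssignedIdentities/*/read'
action='Microsoft.ManagedIdentity/userAssignedIdentities/myIdentity/read'
case "$action" in
  $pattern) result="allowed" ;;
  *)        result="denied" ;;
esac
echo "$result"   # prints allowed
```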
+
+### Resolution
+Ensure that you have the [Managed Identity Operator role](/azure/role-based-access-control/built-in-roles#managed-identity-operator) so that you can add the user-assigned managed identity to your Automation account.
++
## Scenario: Runbook fails with "this.Client.SubscriptionId cannot be null." error message

### Issue
availability-zones Az Region https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/az-region.md
In the Product Catalog, always-available services are listed as "non-regional" s
| **Products** | **Resiliency** |
| | |
| [Azure Active Directory Domain Services](../active-directory-domain-services/overview.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
-| [Azure API Management](../api-management/zone-redundancy.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
+| [Azure API Management](migrate-api-mgt.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure App Configuration](../azure-app-configuration/faq.yml#how-does-app-configuration-ensure-high-data-availability) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure App Service](migrate-app-service.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) |
| [Azure App Service: App Service Environment](migrate-app-service-environment.md) | ![An icon that signifies this service is zone redundant.](media/icon-zone-redundant.svg) ![An icon that signifies this service is zonal](media/icon-zonal.svg) |
availability-zones Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/availability-zones/migrate-api-mgt.md
+
+ Title: Migrate Azure API Management to availability zone support
+description: Learn how to migrate your Azure API Management instances to availability zone support.
+++ Last updated : 07/07/2022+++++
+# Migrate Azure API Management to availability zone support
+
+This guide describes how to enable availability zone support for your API Management instance. The API Management service supports [Zone redundancy](../availability-zones/az-overview.md#availability-zones), which provides resiliency and high availability to a service instance in a specific Azure region. With zone redundancy, the gateway and the control plane of your API Management instance (Management API, developer portal, Git configuration) are replicated across datacenters in physically separated zones, making it resilient to a zone failure.
+
+In this article, we'll take you through the different options for availability zone migration.
+
+## Prerequisites
+
+* To configure API Management for zone redundancy, your instance must be in one of the following regions:
+
+ * Australia East
+ * Brazil South
+ * Canada Central
+ * Central India
+ * Central US
+ * East Asia
+ * East US
+ * East US 2
+ * France Central
+ * Germany West Central
+ * Japan East
+ * Korea Central (*)
+ * North Europe
+ * Norway East (*)
+ * South Africa North (*)
+ * South Central US
+ * Southeast Asia
+ * Switzerland North
+ * UK South
+ * West Europe
+ * West US 2
+ * West US 3
+
+ > [!IMPORTANT]
+ > Regions marked with * have restricted access for enabling availability zone support in an Azure subscription. To enable support in these regions, work with your Microsoft sales or customer representative.
+
+* If you haven't yet created an API Management service instance, see [Create an API Management service instance](../api-management/get-started-create-service-instance.md). Select the Premium service tier.
+
+* API Management service must be in the Premium tier. If it isn't, you can [upgrade](../api-management/upgrade-and-scale.md#change-your-api-management-service-tier) to the Premium tier.
+
+* If your API Management instance is deployed (injected) in an [Azure virtual network (VNet)](../api-management/api-management-using-with-vnet.md), check the version of the [compute platform](../api-management/compute-infrastructure.md) (stv1 or stv2) that hosts the service.
+
+## Downtime requirements
+
+There are no downtime requirements for any of the migration options.
+
+## Considerations
+
+* Changes can take from 15 to 45 minutes to apply. The API Management gateway can continue to handle API requests during this time.
+
+* Migrating to availability zones or changing the availability zone configuration will trigger a public IP address change.
+
+* If you've configured autoscaling for your API Management instance in the primary location, you might need to adjust your autoscale settings after enabling zone redundancy. The number of API Management units in autoscale rules and limits must be a multiple of the number of zones.
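For example, with 3 zones, unit counts of 3, 6, 9, and so on are valid. A quick shell sketch of rounding an autoscale limit up to the nearest valid value (the numbers are illustrative, not from this article):

```shell
# Sketch: round an autoscale unit limit up to a multiple of the zone count.
units=7    # hypothetical current autoscale maximum
zones=3    # number of availability zones selected
adjusted=$(( (units + zones - 1) / zones * zones ))
echo "$adjusted"   # prints 9
```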
++
+## Option 1: Migrate existing location of API Management instance, not injected in VNet
+
+Use this option to migrate an existing location of your API Management instance to availability zones when it's not injected (deployed) in a virtual network.
+
+### How to migrate API Management not injected in a VNet
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **Locations** in the menu, and then select the location to be migrated. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. Select **Apply**, and then select **Save**.
+
+ :::image type="content" alt-text="Screenshot of how to migrate existing location of API Management instance not injected in VNet." source ="media/migrate-api-mgt/option-one-not-injected-in-vnet.png":::
+
++
+## Option 2: Migrate existing location of API Management instance (stv1 platform), injected in VNet
+
+Use this option to migrate an existing location of your API Management instance to availability zones when it is currently injected (deployed) in a virtual network. The following steps are needed when the API Management instance is currently hosted on the stv1 platform. Migrating to availability zones will also migrate the instance to the stv2 platform.
+
+1. Create a new subnet and public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **Locations** in the menu, and then select the location to be migrated. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. Select the new subnet and new public IP address in the location.
+
+1. Select **Apply**, and then select **Save**.
++
+ :::image type="content" alt-text="Screenshot of how to migrate existing location of API Management instance injected in VNet." source ="media/migrate-api-mgt/option-two-injected-in-vnet.png":::
+
+## Option 3: Migrate existing location of API Management instance (stv2 platform), injected in VNet
+
+Use this option to migrate an existing location of your API Management instance to availability zones when it is currently injected (deployed) in a virtual network. The following steps are used when the API Management instance is already hosted on the stv2 platform.
+
+1. Create a new subnet and public IP address in the location to migrate to availability zones. Detailed requirements are in the [virtual networking guidance](../api-management/api-management-using-with-vnet.md?tabs=stv2#prerequisites).
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **Locations** in the menu, and then select the location to be migrated. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. Select the new public IP address in the location.
+
+1. Select **Apply**, and then select **Save**.
+
+ :::image type="content" alt-text="Screenshot of how to migrate existing location of API Management instance (stv2 platform) injected in VNet." source ="media/migrate-api-mgt/option-three-stv2-injected-in-vnet.png":::
+
+## Option 4: Add new location for API Management instance (with or without VNet) with availability zones
+
+Use this option to add a new location to your API Management instance and enable availability zones in that location.
+
+If your API Management instance is deployed in a virtual network in the primary location, ensure that you set up a [virtual network](../api-management/api-management-using-with-vnet.md?tabs=stv2), subnet, and public IP address in any new location where you plan to enable zone redundancy.
+
+1. In the Azure portal, navigate to your API Management service.
+
+1. Select **+ Add** in the top bar to add a new location. The location must [support availability zones](#prerequisites).
+
+1. Select the number of scale [Units](../api-management/upgrade-and-scale.md) desired in the location.
+
+1. In **Availability zones**, select one or more zones. The number of units selected must be distributed evenly across the availability zones. For example, if you selected 3 units, select 3 zones so that each zone hosts one unit.
+
+1. If your API Management instance is deployed in a [virtual network](../api-management/api-management-using-with-vnet.md?tabs=stv2), select the virtual network, subnet, and public IP address that are available in the location.
+
+1. Select **Add**, and then select **Save**.
+
+ :::image type="content" alt-text="Screenshot of how to add new location for API Management instance with or without VNet." source ="media/migrate-api-mgt/option-four-add-new-location.png":::
+
+## Next steps
+
+Learn more about:
+
+> [!div class="nextstepaction"]
+> [Deploying an Azure API Management service instance to multiple Azure regions](../api-management/api-management-howto-deploy-multi-region.md)
+
+> [!div class="nextstepaction"]
+> [Building for reliability](/azure/architecture/framework/resiliency/app-design) in Azure
+
+> [!div class="nextstepaction"]
+> [Regions and Availability Zones in Azure](az-overview.md)
+
+> [!div class="nextstepaction"]
+> [Azure Services that support Availability Zones](az-region.md)
azure-arc Create Data Controller Direct Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-direct-cli.md
You can create them individually or in a unified experience.
In the unified experience, you can create the Arc data controller extension, custom location, and Arc data controller all in one command as follows:
-```az
-az arcdata dc create -n <name> -g <resource-group> --custom-location <custom-location> --cluster-name <cluster> --connectivity-mode direct --profile <the-deployment-profile>
+
+##### [Linux](#tab/linux)
+
+```console
+## variables for the resource group, cluster name, and custom location
+export resourceGroup=<Your resource group>
+export clusterName=<name of your connected Kubernetes cluster>
+export customLocationName=<name of your custom location>
+
+## variables for logs and metrics dashboard credentials
+export AZDATA_LOGSUI_USERNAME=<username for Kibana dashboard>
+export AZDATA_LOGSUI_PASSWORD=<password for Kibana dashboard>
+export AZDATA_METRICSUI_USERNAME=<username for Grafana dashboard>
+export AZDATA_METRICSUI_PASSWORD=<password for Grafana dashboard>
+```
+
+##### [Windows (PowerShell)](#tab/windows)
+
```powershell
+## variables for the resource group, cluster name, and custom location
+$ENV:resourceGroup="<Your resource group>"
+$ENV:clusterName="<name of your connected Kubernetes cluster>"
+$ENV:customLocationName="<name of your custom location>"
+
+## variables for Metrics and Monitoring dashboard credentials
+$ENV:AZDATA_LOGSUI_USERNAME="<username for Kibana dashboard>"
+$ENV:AZDATA_LOGSUI_PASSWORD="<password for Kibana dashboard>"
+$ENV:AZDATA_METRICSUI_USERNAME="<username for Grafana dashboard>"
+$ENV:AZDATA_METRICSUI_PASSWORD="<password for Grafana dashboard>"
```
+
+
+Deploy the Azure Arc data controller using a released profile:
+##### [Linux](#tab/linux)
+
+```azurecli
+az arcdata dc create --name <name> -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass>
+
+# Example
+az arcdata dc create --name arc-dc1 --resource-group my-resource-group --custom-location cl-name --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass
+```
+
+##### [Windows (PowerShell)](#tab/windows)
+
+```azurecli
+az arcdata dc create --name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name <the-deployment-profile> --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass>
+
+# Example
+az arcdata dc create --name arc-dc1 -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --profile-name azure-arc-aks-premium-storage --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass
+
+```
++
+If you want to create the Azure Arc data controller using a custom configuration template, follow the steps described in [Create custom configuration profile](create-custom-configuration-template.md) and provide the path to the file as follows:
+##### [Linux](#tab/linux)
+
+```azurecli
+az arcdata dc create --name <name> -g ${resourceGroup} --custom-location ${customLocationName} --cluster-name ${clusterName} --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true
+
+# Example
+az arcdata dc create --name arc-dc1 --resource-group my-resource-group --custom-location cl-name --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true
+```
+
+##### [Windows (PowerShell)](#tab/windows)
+
+```azurecli
+az arcdata dc create --name <name> -g $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --storage-class <storageclass>
+
+# Example
+az arcdata dc create --name arc-dc1 --resource-group $ENV:resourceGroup --custom-location $ENV:customLocationName --cluster-name $ENV:clusterName --connectivity-mode direct --path ./azure-arc-custom --auto-upload-metrics true --auto-upload-logs true --storage-class mystorageclass
+
+```
+++
## Deploy - individual experience

### Step 1: Create an Azure Arc-enabled data services extension
azure-arc Create Data Controller Using Kubernetes Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/create-data-controller-using-kubernetes-native-tools.md
To create the data controller using Kubernetes tools you will need to have the K
The bootstrapper service handles incoming requests for creating, editing, and deleting custom resources such as a data controller.
-Save a copy of [arcdata-deployer.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper-unified.yaml), and replace the placeholder `{{NAMESPACE}}` in *all the places* in the file with the desired namespace name, for example: `arc`.
+Save a copy of [bootstrapper-unified.yaml](https://raw.githubusercontent.com/microsoft/azure_arc/main/arc_data_services/deploy/yaml/bootstrapper-unified.yaml), and replace the placeholder `{{NAMESPACE}}` in *all the places* in the file with the desired namespace name, for example: `arc`.
> [!IMPORTANT]
> The bootstrapper-unified.yaml template file defaults to pulling the bootstrapper container image from the Microsoft Container Registry (MCR). If your environment can't directly access the Microsoft Container Registry, you can do the following:
azure-arc Azure Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/azure-rbac.md
A conceptual overview of this feature is available in the [Azure RBAC on Azure A
az ad app update --id "${SERVER_APP_ID}" --set groupMembershipClaims=All
```
-1. Create a service principal and get its `password` field value. This value is required later as `serverApplicationSecret` when you're enabling this feature on the cluster.
+1. Create a service principal and get its `password` field value. This value is required later as `serverApplicationSecret` when you're enabling this feature on the cluster. This secret is valid for one year by default, and you'll need to [rotate it after that](./azure-rbac.md#refresh-the-secret-of-the-server-application). To set a custom expiry duration, see [az ad sp credential reset](/cli/azure/ad/sp/credential?view=azure-cli-latest&preserve-view=true#az-ad-sp-credential-reset).
```azurecli az ad sp create --id "${SERVER_APP_ID}"
node-2 Ready agent 6m42s v1.18.14
node-3 Ready agent 6m33s v1.18.14 ```
+## Refresh the secret of the server application
+
+If the secret for the server application's service principal has expired, you will need to rotate it.
+
+```azurecli
+SERVER_APP_SECRET=$(az ad sp credential reset --name "${SERVER_APP_ID}" --credential-description "ArcSecret" --query password -o tsv)
+```
+
+Update the secret on the cluster. Include any optional parameters you configured when this command was originally run.
+```azurecli
+az connectedk8s enable-features -n <clusterName> -g <resourceGroupName> --features azure-rbac --app-id "${SERVER_APP_ID}" --app-secret "${SERVER_APP_SECRET}"
+```
+ ## Next steps > [!div class="nextstepaction"]
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
A conceptual overview of this feature is available in [Cluster connect - Azure A
kubectl create serviceaccount demo-user ```
-1. Create ClusterRoleBinding or RoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:
+1. Create ClusterRoleBinding to grant this [service account the appropriate permissions on the cluster](https://kubernetes.io/docs/reference/access-authn-authz/rbac/#kubectl-create-rolebinding). Example:
```console kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user
azure-arc Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/troubleshooting.md
To resolve this issue, try the following steps.
cluster-metadata-operator-664bc5f4d-chgkl 2/2 Running 0 4m14s clusterconnect-agent-7cb8b565c7-wklsh 2/3 CrashLoopBackOff 0 1m15s clusteridentityoperator-76d645d8bf-5qx5c 2/2 Running 0 4m15s
- config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s
+ config-agent-65d5df564f-lffqm 1/2 CrashLoopBackOff 0 1m14s
``` 3. If the certificate below isn't present, the system assigned managed identity hasn't been installed.
To resolve this issue, try the following steps.
name: azure-identity-certificate ```
- To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, [try connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server) to connect your cluster to Arc via a proxy.
+ To resolve this issue, try deleting the Arc deployment by running the `az connectedk8s delete` command and reinstalling it. If the issue continues to happen, it could be an issue with your proxy settings. In that case, try [connecting your cluster to Azure Arc via a proxy](./quickstart-connect-cluster.md#connect-using-an-outbound-proxy-server). Also verify that all the [network prerequisites](quickstart-connect-cluster.md#meet-network-requirements) have been met.
4. If the `clusterconnect-agent` and the `config-agent` pods are running, but the `kube-aad-proxy` pod is missing, check your pod security policies. This pod uses the `azure-arc-kube-aad-proxy-sa` service account, which doesn't have admin permissions but requires the permission to mount host path.
+5. If the `kube-aad-proxy` pod is stuck in `ContainerCreating` state, check whether the kube-aad-proxy certificate has been downloaded onto the cluster.
+
+ ```console
+ kubectl get secret -n azure-arc -o yaml | grep name:
+ ```
+
+ ```output
+ name: kube-aad-proxy-certificate
+ ```
+
+ If the certificate is missing, contact support.
+ ### Helm validation error Helm `v3.3.0-rc.1` version has an [issue](https://github.com/helm/helm/pull/8527) where helm install/upgrade (used by the `connectedk8s` CLI extension) results in running of all hooks leading to the following error:
azure-cache-for-redis Cache How To Premium Clustering https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-premium-clustering.md
Title: Configure Redis clustering - Premium Azure Cache for Redis description: Learn how to create and manage Redis clustering for your Premium tier Azure Cache for Redis instances + Previously updated : 02/28/2022 Last updated : 07/13/2022 + # Configure Redis clustering for a Premium Azure Cache for Redis instance Azure Cache for Redis offers Redis cluster as [implemented in Redis](https://redis.io/topics/cluster-tutorial). With Redis Cluster, you get the following benefits:
Clustering is enabled **New Azure Cache for Redis** on the left during cache cr
:::image type="content" source="media/cache-how-to-premium-clustering/redis-cache-clustering-selected.png" alt-text="Clustering toggle selected.":::
- Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics), metrics are captured separately for each shard and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
+ Once the cache is created, you connect to it and use it just like a non-clustered cache. Redis distributes the data throughout the Cache shards. If diagnostics is [enabled](cache-how-to-monitor.md#use-a-storage-account-to-export-cache-metrics), metrics are captured separately for each shard, and can be [viewed](cache-how-to-monitor.md) in Azure Cache for Redis on the left.
1. Select the **Next: Tags** tab or select the **Next: Tags** button at the bottom of the page.
For sample code on working with clustering with the StackExchange.Redis client,
## Change the cluster size on a running premium cache
-To change the cluster size on a running premium cache with clustering enabled, select **Cluster Size** from the **Resource menu**.
+To change the cluster size on a premium cache that you created earlier and that's already running with clustering enabled, select **Cluster size** from the Resource menu.
To change the cluster size, use the slider or type a number between 1 and 10 in the **Shard count** text box. Then, select **OK** to save.
The following list contains answers to commonly asked questions about Azure Cach
### How are keys distributed in a cluster?
-Per the Redis [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model) documentation: The key space is split into 16384 slots. Each key is hashed and assigned to one of these slots, which are distributed across the nodes of the cluster. You can configure which part of the key is hashed to ensure that multiple keys are located in the same shard using hash tags.
+Per the Redis [Keys distribution model](https://redis.io/topics/cluster-spec#keys-distribution-model) documentation: The key space is split into 16,384 slots. Each key is hashed and assigned to one of these slots, which are distributed across the nodes of the cluster. You can configure which part of the key is hashed to ensure that multiple keys are located in the same shard using hash tags.
* Keys with a hash tag - if any part of the key is enclosed in `{` and `}`, only that part of the key is hashed for the purposes of determining the hash slot of a key. For example, the following three keys would be located in the same shard: `{key}1`, `{key}2`, and `{key}3` since only the `key` part of the name is hashed. For a complete list of keys hash tag specifications, see [Keys hash tags](https://redis.io/topics/cluster-spec#keys-hash-tags). * Keys without a hash tag - the entire key name is used for hashing, resulting in a statistically even distribution across the shards of the cache.
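To make the hash-tag rule above concrete, here's a minimal Python sketch (illustrative, not part of the article) of the slot calculation Redis Cluster uses: `crc16` implements the CRC16-XMODEM variant, and `hash_slot` applies the hash-tag extraction, so `{key}1`, `{key}2`, and `{key}3` all land in the same slot as the bare key `key`:

```python
def crc16(data: bytes) -> int:
    """CRC16-XMODEM (polynomial 0x1021), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    """Return the cluster slot (0..16383) for a key, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        # Only a non-empty tag between the first '{' and the next '}' is hashed.
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384
```

Because only the `key` part inside the braces is hashed, `hash_slot("{key}1")`, `hash_slot("{key}2")`, and `hash_slot("key")` all return the same slot number.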
For sample code about working with clustering and locating keys in the same shar
### What is the largest cache size I can create?
-The largest cache size you can have is 1.2 TB. This will be a clustered P5 cache with 10 shards. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
+The largest cache size you can have is 1.2 TB, which corresponds to a clustered P5 cache with 10 shards. For more information, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/).
### Do all Redis clients support clustering?
-Many clients support Redis clustering but not all. Check the documentation for the library you're using to verify you're using a library and version that support clustering. StackExchange.Redis is one library that does support clustering, in its newer versions. For more information on other clients, see the [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) section of the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
+Many client libraries support Redis clustering, but not all. Check the documentation for the library you're using to verify that you're using a library and version that support clustering. StackExchange.Redis is one library that does support clustering, in its newer versions. For more information on other clients, see the [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) section of the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
-The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as 'MOVED' na 'CROSSSLOTS'. When you attempt to use a client, which doesn't support clustering, with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or just break your application, if you're doing cross-slot multi-key requests.
+The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as `MOVED` and `CROSSSLOT`. When you use a client library that doesn't support clustering with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or your application can simply break if you're doing cross-slot multi-key requests.
> [!NOTE] > If you're using StackExchange.Redis as your client, ensure you're using the latest version of [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) 1.0.481 or later for clustering to work correctly. For more information on any issues with move exceptions, see [move exceptions](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do).
azure-functions Create First Function Cli Csharp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-csharp.md
adobe-target-content: ./create-first-function-cli-csharp-ieux
In this article, you use command-line tools to create a C# function that responds to HTTP requests. After testing the code locally, you deploy it to the serverless environment of Azure Functions.
+This article supports creating both types of compiled C# functions:
+ [!INCLUDE [functions-dotnet-execution-model](../../includes/functions-dotnet-execution-model.md)] This article creates an HTTP triggered function that runs on .NET 6.0. There is also a [Visual Studio Code-based version](create-first-function-vs-code-csharp.md) of this article.
azure-functions Create First Function Cli Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/create-first-function-cli-powershell.md
Each binding requires a direction, a type, and a unique name. The HTTP trigger h
# [Azure PowerShell](#tab/azure-powershell) ```azurepowershell
- New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime PowerShell -FunctionsVersion 3 -Location '<REGION>'
+ New-AzFunctionApp -Name <APP_NAME> -ResourceGroupName AzureFunctionsQuickstart-rg -StorageAccount <STORAGE_NAME> -Runtime PowerShell -FunctionsVersion 4 -Location '<REGION>'
``` The [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) cmdlet creates the function app in Azure.
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
This section describes the current state of the functional and behavioral differ
| ReadyToRun | [Supported](functions-dotnet-class-library.md#readytorun) | _TBD_ | | Application Insights dependencies | [Supported](functions-monitoring.md#dependencies) | Not Supported | +
+## Remote Debugging using Visual Studio
+
+Because your isolated process app runs outside the Functions runtime, you need to attach the remote debugger to a separate process. To learn more about debugging using Visual Studio, see [Remote Debugging](functions-develop-vs.md?tabs=isolated-process#remote-debugging).
## Next steps + [Learn more about triggers and bindings](functions-triggers-bindings.md)
azure-functions Durable Functions Unit Testing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-unit-testing.md
durableClientMock
// Notice that even though the HttpStart function does not call IDurableClient.CreateCheckStatusResponse() // with the optional parameter returnInternalServerErrorOnFailure, moq requires the method to be set up // with each of the optional parameters provided. Simply use It.IsAny<> for each optional parameter
- .Setup(x => x.CreateCheckStatusResponse(It.IsAny<HttpRequestMessage>(), instanceId, returnInternalServerErrorOnFailure: It.IsAny<bool>())
+ .Setup(x => x.CreateCheckStatusResponse(It.IsAny<HttpRequestMessage>(), instanceId, returnInternalServerErrorOnFailure: It.IsAny<bool>()))
.Returns(new HttpResponseMessage { StatusCode = HttpStatusCode.OK,
azure-functions Functions Create Your First Function Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-create-your-first-function-visual-studio.md
In this article, you learn how to:
> * Create a function that responds to HTTP requests. > * Run your code locally to verify function behavior. > * Deploy your code project to Azure Functions.
-
+ Completing this quickstart incurs a small cost of a few USD cents or less in your Azure account. ## Prerequisites
azure-functions Functions Develop Vs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-develop-vs.md
Unless otherwise noted, procedures and examples shown are for Visual Studio 2022
## Prerequisites -- Azure Functions Tools. To add Azure Function Tools, include the **Azure development** workload in your Visual Studio installation. If you are using Visual Studio 2017, you may need to [follow some additional installation steps](#azure-functions-tools-with-visual-studio-2017).
+- Azure Functions Tools. To add Azure Function Tools, include the **Azure development** workload in your Visual Studio installation. If you're using Visual Studio 2017, you may need to [follow some extra installation steps](#azure-functions-tools-with-visual-studio-2017).
- Other resources that you need, such as an Azure Storage account, are created in your subscription during the publishing process.
In C# class library functions, the bindings used by the function are defined by
![Create a Queue storage trigger function](./media/functions-develop-vs/functions-vstools-create-queuetrigger.png)
- You will then be prompted to choose between two Azure storage emulators or referencing a provisioned Azure storage account.
+ You'll then be prompted to choose between using one of two Azure storage emulators or referencing a provisioned Azure storage account.
This trigger example uses a connection string with a key named `QueueStorage`. This key, stored in the [local.settings.json file](functions-develop-local.md#local-settings-file), either references the Azure storage emulators or an Azure storage account.
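As a sketch (the values here are placeholders, not from the article), a local.settings.json file that defines the `QueueStorage` key might look like the following, where `UseDevelopmentStorage=true` points the connection at the local storage emulator:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "dotnet",
    "QueueStorage": "UseDevelopmentStorage=true"
  }
}
```

To target a provisioned Azure storage account instead, replace the `QueueStorage` value with that account's connection string.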
As with triggers, input and output bindings are added to your function as bindin
- In this example replace `<BINDING_TYPE>` with the name specific to the binding extension and `<TARGET_VERSION>` with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
+ In this example, replace `<BINDING_TYPE>` with the name specific to the binding extension and `<TARGET_VERSION>` with a specific version of the package, such as `3.0.0-beta5`. Valid versions are listed on the individual package pages at [NuGet.org](https://nuget.org). The major versions that correspond to Functions runtime 1.x or 2.x are specified in the reference article for the binding.
3. If there are app settings that the binding needs, add them to the `Values` collection in the [local setting file](functions-develop-local.md#local-settings-file).
For a full list of the bindings supported by Functions, see [Supported bindings]
## Run functions locally
-Azure Functions Core Tools lets you run Azure Functions project on your local development computer. When you press F5 to debug a Functions project, the local Functions host (func.exe) starts to listen on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these for testing your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
+Azure Functions Core Tools lets you run an Azure Functions project on your local development computer. When you press F5 to debug a Functions project, the local Functions host (func.exe) starts to listen on a local port (usually 7071). Any callable function endpoints are written to the output, and you can use these endpoints for testing your functions. For more information, see [Work with Azure Functions Core Tools](functions-run-local.md). You're prompted to install these tools the first time you start a function from Visual Studio.
To start your function in Visual Studio in debug mode:
You can also manage application settings in one of these other ways:
* [Use the `--publish-local-settings` publish option in the Azure Functions Core Tools](functions-run-local.md#publish). * [Use the Azure CLI](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set).
+## Remote Debugging
+
+To debug your function app remotely, you must publish a debug configuration of your project. You also need to enable remote debugging in your function app in Azure.
+
+This section assumes you've already published to your function app using a release configuration.
+
+### Remote debugging considerations
+
+* Remote debugging isn't recommended on a production service.
+* If you have [Just My Code debugging](/visualstudio/debugger/just-my-code#BKMK_Enable_or_disable_Just_My_Code) enabled, disable it.
+* Avoid long stops at breakpoints when remote debugging. Azure treats a process that is stopped for longer than a few minutes as an unresponsive process, and shuts it down.
+* While you're debugging, the server is sending data to Visual Studio, which could affect bandwidth charges. For information about bandwidth rates, see [Azure Pricing](https://azure.microsoft.com/pricing/calculator/).
+* Remote debugging is automatically disabled in your function app after 48 hours. After 48 hours, you'll need to reenable remote debugging.
+
+### Attach the debugger
+
+The way you attach the debugger depends on your execution mode. When debugging an isolated process app, you currently need to attach the remote debugger to a separate .NET process, and several other configuration steps are required.
+
+When you're done, you should [disable remote debugging](#disable-remote-debugging).
+
+# [In-process](#tab/in-process)
+
+To attach a remote debugger to a function app running in-process with the Functions host:
+++ From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Attach debugger**. +
+ :::image type="content" source="media/functions-develop-vs/attach-to-process-in-process.png" alt-text="Screenshot of attaching the debugger from Visual Studio.":::
+
+Visual Studio connects to your function app and enables remote debugging, if it's not already enabled. It also locates and attaches the debugger to the host process for the app. At this point, you can debug your function app as normal.
+
+# [Isolated process](#tab/isolated-process)
+
+To attach a remote debugger to a function app running in a process separate from the Functions host:
+
+1. From the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Download publish profile**. This action downloads a copy of the publish profile and opens the download location. You need this file, which contains the credentials used to attach to your isolated process running in Azure.
+
+ > [!CAUTION]
+ > The .publishsettings file contains your credentials (unencoded) that are used to administer your function app. The security best practice for this file is to store it temporarily outside your source directories (for example in the Libraries\Documents folder), and then delete it after it's no longer needed. A malicious user who gains access to the .publishsettings file can edit, create, and delete your function app.
+
+1. Again from the **Publish** tab, select the ellipses (**...**) in the **Hosting** section, and then choose **Attach debugger**.
+
+ :::image type="content" source="media/functions-develop-vs/attach-to-process-in-process.png" alt-text="Screenshot of attaching the debugger from Visual Studio.":::
+
+ Visual Studio connects to your function app and enables remote debugging, if it's not already enabled.
+
+ > [!NOTE]
    > Because the remote debugger can't connect to the host process, you might see an error. In any case, the default debugging session won't break into your code.
+
+1. Back in Visual Studio, copy the URL for the **Site** under **Hosting** in the **Publish** page.
+
+1. From the **Debug** menu, select **Attach to Process**, and in the **Attach to process** window, paste the URL in the **Connection Target**, remove `https://` and append the port `:4024`.
+
+ Verify that your target looks like `<FUNCTION_APP>.azurewebsites.net:4024` and press **Enter**.
+
+ ![Visual Studio attach to process dialog](./media/functions-develop-vs/attach-to-process-dialog.png)
+
+1. If prompted, allow Visual Studio access through your local firewall.
+
+1. When prompted for credentials, instead of local user credentials, choose a different account (**More choices** on Windows). Provide the values of **userName** and **userPWD** from the publish profile for **Email address** and **Password** in the authentication dialog on Windows. After a secure connection is established with the deployment server, the available processes are shown.
+
+ ![Visual Studio enter credential](./media/functions-develop-vs/creds-dialog.png)
+
+1. Check **Show processes from all users**, choose **dotnet.exe**, and then select **Attach**. When the operation completes, you're attached to your C# class library code running in an isolated process. At this point, you can debug your function app as normal.
+++
+### Disable remote debugging
+
+After you're done remote debugging your code, you should disable remote debugging in the [Azure portal](https://portal.azure.com). Remote debugging is automatically disabled after 48 hours, in case you forget.
+
+1. In the **Publish** tab in your project, select the ellipses (**...**) in the **Hosting** section, and choose **Open in Azure portal**. This action opens the function app in the Azure portal to which your project is deployed.
+
+1. In the function app, select **Configuration** under **Settings**, choose **General Settings**, set **Remote Debugging** to **Off**, and select **Save**, then **Continue**.
+
+After the function app restarts, you can no longer connect remotely to its processes. You can use this same tab in the Azure portal to enable remote debugging outside of Visual Studio.
+ ## Monitoring functions The recommended way to monitor the execution of your functions is by integrating your function app with Azure Application Insights. When you create a function app in the Azure portal, this integration is done for you by default. However, when you create your function app during Visual Studio publishing, the integration in your function app in Azure isn't done. To learn how to connect Application Insights to your function app, see [Enable Application Insights integration](configure-monitoring.md#enable-application-insights-integration).
azure-functions Functions Identity Based Connections Tutorial 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-identity-based-connections-tutorial-2.md
You've granted your function app access to the service bus namespace using manag
1. After you create the two settings, select **Save** > **Confirm**.
+> [!NOTE]
+> When using [Azure App Configuration](../../articles/azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for Managed Identity connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly.
+>
+> For example, `ServiceBusConnection:fullyQualifiedNamespace`.
+ Now that you've prepared the function app to connect to the service bus namespace using a managed identity, you can add a new function that uses a Service Bus trigger to your local project. ++ ## Add a Service Bus triggered function 1. Run the `func init` command, as follows, to create a functions project in a folder named LocalFunctionProj with the specified runtime:
Now that you've prepared the function app to connect to the service bus namespac
> [!NOTE] > If you try to run your functions now using `func start` you'll receive an error. This is because you don't have an identity-based connection defined locally. If you want to run your function locally, set the app setting `ServiceBusConnection__fullyQualifiedNamespace` in `local.settings.json` as you did in [the previous section](#connect-to-service-bus-in-your-function-app). In addition, you'll need to assign the role to your developer identity. For more details, please refer to the [local development with identity-based connections documentation](./functions-reference.md#local-development-with-identity-based-connections).
+> [!NOTE]
+> When using [Azure App Configuration](../../articles/azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for Managed Identity connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly.
+>
+> For example, `ServiceBusConnection:fullyQualifiedNamespace`.
+ ## Publish the updated project 1. Run the following command to locally generate the files needed for the deployment package:
azure-functions Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference.md
However, a connection name can also refer to a collection of multiple configurat
For example, the `connection` property for an Azure Blob trigger definition might be "Storage1". As long as there is no single string value configured by an environment variable named "Storage1", an environment variable named `Storage1__blobServiceUri` could be used to inform the `blobServiceUri` property of the connection. The connection properties are different for each service. Refer to the documentation for the component that uses the connection.
+> [!NOTE]
+> When using [Azure App Configuration](../azure-app-configuration/quickstart-azure-functions-csharp.md) or [Key Vault](../key-vault/general/overview.md) to provide settings for Managed Identity connections, setting names should use a valid key separator such as `:` or `/` in place of the `__` to ensure names are resolved correctly.
+>
+> For example, `Storage1:blobServiceUri`.
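To illustrate the separator mapping described in the note above, here's a small hypothetical Python helper (not part of the Functions runtime) showing how a setting name using the `:` or `/` separators maps back to the `__` hierarchical form used in environment variables:

```python
def to_env_setting_name(name: str) -> str:
    """Map a setting name using ':' or '/' separators (as stored in Azure App
    Configuration or Key Vault) to the '__' form used in environment variables."""
    return name.replace(":", "__").replace("/", "__")
```

For example, `Storage1:blobServiceUri` in App Configuration resolves to the same hierarchical setting as the `Storage1__blobServiceUri` environment variable.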
+ ### Configure an identity-based connection Some connections in Azure Functions can be configured to use an identity instead of a secret. Support depends on the extension using the connection. In some cases, a connection string may still be required in Functions even though the service to which you are connecting supports identity-based connections. For a tutorial on configuring your function apps with managed identities, see the [creating a function app with identity-based connections tutorial](./functions-identity-based-connections-tutorial.md).
azure-monitor Data Sources Performance Counters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-performance-counters.md
description: Performance counters are collected by Azure Monitor to analyze perf
Previously updated : 02/26/2021 Last updated : 06/28/2022
azure-monitor Diagnostics Extension Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-logs.md
description: Azure Monitor can read the logs for Azure services that write diagn
Previously updated : 02/14/2020 Last updated : 07/12/2022
-# Collect data from Azure diagnostics extension to Azure Monitor Logs
+# Send data from Azure diagnostics extension to Azure Monitor Logs
Azure diagnostics extension is an [agent in Azure Monitor](../agents/agents-overview.md) that collects monitoring data from the guest operating system of Azure compute resources including virtual machines. This article describes how to collect data collected by the diagnostics extension from Azure Storage to Azure Monitor Logs. > [!NOTE]
azure-monitor Diagnostics Extension Schema Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-schema-windows.md
description: Configuration schema reference for Windows diagnostics extension (W
Previously updated : 01/20/2020 Last updated : 07/12/2022
azure-monitor Diagnostics Extension Stream Event Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-stream-event-hubs.md
description: Configure diagnostics extension in Azure Monitor to send data to Az
Previously updated : 02/18/2020 Last updated : 07/12/2022
azure-monitor Diagnostics Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-versions.md
description: Relevant to collecting perf counters in Azure Virtual Machines, VM
Previously updated : 01/29/2020 Last updated : 07/12/2022
azure-monitor Diagnostics Extension Windows Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/diagnostics-extension-windows-install.md
description: Learn about installing and configuring the Windows diagnostics exte
Previously updated : 02/17/2020 Last updated : 07/12/2022 ms.devlang: azurecli
azure-monitor Alerts Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-types.md
You may also decide not to split when you want a condition applied to multiple r
You can monitor at scale by applying the same metric alert rule to multiple resources of the same type for resources that exist in the same Azure region. Individual notifications are sent for each monitored resource.
-These platform metrics for these services in the following Azure clouds are supported:
+The platform metrics for these services in the following Azure clouds are supported:
| Service | Global Azure | Government | China | |:--|:-|:--|:--|
These platform metrics for these services in the following Azure clouds are supp
| Recovery Services vaults | Yes | No | No | > [!NOTE]
- > Platform metrics are not supported for virtual machine network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate).
+ > Multi-resource metric alerts are not supported for the following scenarios:
+ > - Alerting on virtual machines' guest metrics
+ > - Alerting on virtual machines' network metrics (Network In Total, Network Out Total, Inbound Flows, Outbound Flows, Inbound Flows Maximum Creation Rate, Outbound Flows Maximum Creation Rate).
You can specify the scope of monitoring with a single metric alert rule in one of three ways. For example, with virtual machines you can specify the scope as:
azure-monitor Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/console.md
namespace ConsoleApp
// before exit, flush the remaining data telemetryClient.Flush();
- // flush is not blocking when not using InMemoryChannel so wait a bit. There is an active issue regarding the need for `Sleep`/`Delay`
- // which is tracked here: https://github.com/microsoft/ApplicationInsights-dotnet/issues/407
+ // Console apps should use the WorkerService package.
+ // This uses ServerTelemetryChannel which does not have synchronous flushing.
+ // For this reason we add a short 5s delay in this sample.
+
Task.Delay(5000).Wait();
+ // If you're using InMemoryChannel, Flush() is synchronous and the short delay is not required.
+ } static DependencyTrackingTelemetryModule InitializeDependencyTracking(TelemetryConfiguration configuration)
azure-monitor Create New Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/create-new-resource.md
az monitor app-insights component create --app
#### Example ```azurecli
-az monitor app-insights component create --app demoApp --location westus2 --kind web -g demoRg --application-type web
+az monitor app-insights component create --app demoApp --location westus2 --kind web --resource-group demoRg --application-type web
``` #### Results ```azurecli
-az monitor app-insights component create --app demoApp --location eastus --kind web -g demoApp --application-type web
+az monitor app-insights component create --app demoApp --location eastus --kind web --resource-group demoApp --application-type web
{ "appId": "87ba512c-e8c9-48d7-b6eb-118d4aee2697", "applicationId": "demoApp",
azure-monitor Search Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/search-jobs.md
Last updated 01/27/2022
Search jobs are asynchronous queries that fetch records into a new search table within your workspace for further analytics. The search job uses parallel processing and can run for hours across extremely large datasets. This article describes how to create a search job and how to query its resulting data.
+> [!NOTE]
+> The search job feature is currently in public preview and is not supported in workspaces with [customer-managed keys](customer-managed-keys.md).
+ ## When to use search jobs Use a search job when the log query timeout of 10 minutes isn't enough time to search through large volumes of data or when you're running a slow query.
azure-monitor Monitor Virtual Machine Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-alerts.md
Previously updated : 06/21/2021 Last updated : 06/28/2022
azure-monitor Monitor Virtual Machine Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-security.md
Previously updated : 06/21/2021 Last updated : 06/28/2022
azure-monitor Monitor Virtual Machine Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-workloads.md
Previously updated : 06/21/2021 Last updated : 06/28/2022
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
+## June, 2022
+
+### General
+
+| Article | Description |
+|:|:|
+| [Tutorial - Editing Data Collection Rules](essentials/data-collection-rule-edit.md) | New article|
++
+### Application Insights
+
+| Article | Description |
+|:|:|
+| [Application Insights logging with .NET](app/ilogger.md) | Connection string sample code has been added.|
+| [Application Insights SDK support guidance](app/sdk-support-guidance.md) | Updated SDK supportability guidance. |
+| [Azure AD authentication for Application Insights](app/azure-ad-authentication.md) | Azure AD authenticated telemetry ingestion has reached general availability.|
+| [Azure Application Insights for JavaScript web apps](app/javascript.md) | Our Java on-premises page has been retired and redirected to [Azure Monitor OpenTelemetry-based auto-instrumentation for Java applications](app/java-in-process-agent.md).|
+| [Azure Application Insights Telemetry Data Model - Telemetry Context](app/data-model-context.md) | Clarified that Anonymous User ID is simply User.Id for easy selection in Intellisense.|
+| [Continuous export of telemetry from Application Insights](app/export-telemetry.md) | On February 29, 2024, continuous export will be deprecated as part of the classic Application Insights deprecation.|
+| [Dependency Tracking in Azure Application Insights](app/asp-net-dependencies.md) | The EventHub Client SDK and ServiceBus Client SDK information has been updated.|
+| [Monitor Azure app services performance .NET Core](app/azure-web-apps-net-core.md) | Updated Linux troubleshooting guidance. |
+| [Performance counters in Application Insights](app/performance-counters.md) | A prerequisite section has been added to ensure performance counter data is accessible.|
+
+### Agents
+
+| Article | Description |
+|:|:|
+| [Collect text and IIS logs with Azure Monitor agent (preview)](agents/data-collection-text-log.md) | Added troubleshooting section.|
+| [Tools for migrating to Azure Monitor Agent from legacy agents](agents/azure-monitor-agent-migration-tools.md) | New article that explains how to install and use tools for migrating from legacy agents to the new Azure Monitor agent (AMA).|
+
+### Visualizations
+Azure Monitor Workbooks documentation previously resided in an external GitHub repository. We have migrated all Azure Workbooks content to the same repo as all other Azure Monitor content.
+++ ## May, 2022 ### General
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | service / templates | service | 1-80 | Alphanumerics and hyphens.<br><br>Start with letter and end with alphanumeric. | > | service / users | service | 1-80 | Alphanumerics and hyphens.<br><br>Start with letter and end with alphanumeric. |
+## Microsoft.App
+
+> [!div class="mx-tableFixed"]
+> | Entity | Scope | Length | Valid Characters |
+> | | | | |
+> | containerApps | resource group | 2-32 | Lowercase letters, numbers, and hyphens.<br><br>Start with letter and end with alphanumeric. |
+ ## Microsoft.AppConfiguration > [!div class="mx-tableFixed"]
azure-video-indexer Invite Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/invite-users.md
Title: Invite users to Azure Video Indexer (former Azure Video Indexer) - Azure
-description: This article shows how to invite users to Azure Video Indexer (former Azure Video Indexer).
+ Title: Invite users to Azure Video Indexer
+description: This article shows how to invite users to Azure Video Indexer.
Last updated 09/14/2021
azure-vmware Azure Vmware Solution Platform Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/azure-vmware-solution-platform-updates.md
Last updated 07/07/2022
Azure VMware Solution will apply important updates starting in March 2021. You'll receive a notification through Azure Service Health that includes the timeline of the maintenance. For more information, see [Host maintenance and lifecycle management](concepts-private-clouds-clusters.md#host-maintenance-and-lifecycle-management).
+## July 8, 2022
+
+HCX Cloud Manager in Azure VMware Solution is now accessible over a public IP address. You can pair HCX sites and create a service mesh from on-premises to an Azure VMware Solution private cloud by using a public IP address.
+
+HCX with public IP is especially useful in cases where on-premises sites aren't connected to Azure via ExpressRoute or VPN. HCX service mesh appliances can be configured with public IPs to avoid lower tunnel MTUs due to double encapsulation when a VPN is used for on-premises to cloud connections.
+
+For more information, see [Enable HCX over the internet](/azure/azure-vmware/enable-hcx-access-over-internet).
++ ## July 7, 2022 All new Azure VMware Solution private clouds are now deployed with VMware vCenter Server version 7.0 Update 3c and ESXi version 7.0 Update 3c.
backup Backup Azure Monitoring Built In Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-monitoring-built-in-monitor.md
Title: Monitor Azure Backup protected workloads description: In this article, learn about the monitoring and notification capabilities for Azure Backup workloads using the Azure portal. Previously updated : 07/06/2022 Last updated : 07/13/2022 ms.assetid: 86ebeb03-f5fa-4794-8a5f-aa5cbbf68a81
Jobs from the following Azure Backup solutions are shown here:
Jobs from System Center Data Protection Manager (SC-DPM), Microsoft Azure Backup Server (MABS) aren't displayed. > [!NOTE]
-> Azure workloads such as SQL and SAP HANA backups within Azure VMs have huge number of backup jobs. For example, log backups can run for every 15 minutes. So for such DB workloads, only user triggered operations are displayed. Scheduled backup operations aren't displayed.
>- Azure workloads such as SQL and SAP HANA backups within Azure VMs have a huge number of backup jobs. For example, log backups can run every 15 minutes. So for such database workloads, only user-triggered operations are displayed. Scheduled backup operations aren't displayed.
+>- In Backup center, you can view jobs for up to the last 14 days. To view jobs over a longer duration, go to the individual Recovery Services vaults and select the **Backup Jobs** tab. For jobs older than 6 months, we recommend that you use Log Analytics and/or [Backup Reports](configure-reports.md) to reliably and efficiently query older jobs.
## Backup Alerts in Recovery Services vault
backup Metrics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/metrics-overview.md
Title: Monitor the health of your backups using Azure Backup Metrics (preview)
description: In this article, learn about the metrics available for Azure Backup to monitor your backup health Previously updated : 03/21/2022 Last updated : 07/13/2022
You can use the different programmatic clients, such as PowerShell, CLI, or REST
### Sample alert scenarios
-#### Fire a single alert if all backups for a vault were successful in last 24 hours
+#### Fire a single alert if all triggered backups for a vault were successful in last 24 hours
**Alert Rule: Fire an alert if Backup Health Events < 1 in last 24 hours for**:
-Dimensions["HealthStatus"]="Persistent Unhealthy / Transient Unhealthy / Persistent Degraded / Transient Degraded"
+Dimensions["HealthStatus"] != "Healthy"
#### Fire an alert after every failed backup job **Alert Rule: Fire an alert if Backup Health Events > 0 in last 5 minutes for**: -- Dimensions["HealthStatus"]= "Persistent Unhealthy / Transient Unhealthy / Persistent Degraded / Transient Degraded"
+- Dimensions["HealthStatus"]!= "Healthy"
- Dimensions["DatasourceId"]= "All current and future values" #### Fire an alert if there were consecutive backup failures for the same item in last 24 hours **Alert Rule: Fire an alert if Backup Health Events > 1 in last 24 hours for**: -- Dimensions["HealthStatus"]= "Persistent Unhealthy / Transient Unhealthy / Persistent Degraded / Transient Degraded"
+- Dimensions["HealthStatus"]!= "Healthy"
- Dimensions["DatasourceId"]= "All current and future values" #### Fire an alert if no backup job was executed for an item in last 24 hours
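The alert scenarios above all reduce to counting non-`Healthy` health events over a time window, optionally split by datasource. A minimal sketch of the consecutive-failure rule (more than one unhealthy event for the same item in 24 hours), using hypothetical in-memory event records rather than the real Azure Monitor metric feed:

```python
from collections import Counter
from datetime import datetime, timedelta

# Hypothetical event records mirroring the metric's dimensions.
events = [
    {"HealthStatus": "Persistent Unhealthy", "DatasourceId": "vm-01", "time": datetime(2022, 7, 13, 9, 0)},
    {"HealthStatus": "Transient Unhealthy",  "DatasourceId": "vm-01", "time": datetime(2022, 7, 13, 21, 0)},
    {"HealthStatus": "Healthy",              "DatasourceId": "vm-02", "time": datetime(2022, 7, 13, 9, 0)},
]

def consecutive_failure_alerts(events, now):
    """Return datasource IDs with more than one non-Healthy event in the last 24 hours."""
    window_start = now - timedelta(hours=24)
    unhealthy = Counter(
        e["DatasourceId"]
        for e in events
        if e["time"] >= window_start and e["HealthStatus"] != "Healthy"
    )
    return sorted(d for d, count in unhealthy.items() if count > 1)

print(consecutive_failure_alerts(events, datetime(2022, 7, 14, 0, 0)))  # ['vm-01']
```

In the actual alert rule, this evaluation is done by Azure Monitor itself; the sketch only illustrates the `HealthStatus != "Healthy"` dimension filter and the per-`DatasourceId` threshold.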
cdn Cdn Caching Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cdn/cdn-caching-rules.md
For global and custom caching rules, you can specify the following **Caching beh
- **Override**: Ignore origin-provided cache duration; use the provided cache duration instead. This will not override cache-control: no-cache.
+> [!NOTE]
+> For **Azure CDN from Microsoft** profiles, cache expiration override is only applicable to status codes 200 and 206.
+ - **Set if missing**: Honor origin-provided cache-directive headers, if they exist; otherwise, use the provided cache duration. ![Global caching rules](./media/cdn-caching-rules/cdn-global-caching-rules.png) ![Custom caching rules](./media/cdn-caching-rules/cdn-custom-caching-rules.png) ++ ## Cache expiration duration For global and custom caching rules, you can specify the cache expiration duration in days, hours, minutes, and seconds:
When these rules are set, a request for _&lt;endpoint hostname&gt;_.azureedge.ne
## See also - [How caching works](cdn-how-caching-works.md)-- [Tutorial: Set Azure CDN caching rules](cdn-caching-rules-tutorial.md)
+- [Tutorial: Set Azure CDN caching rules](cdn-caching-rules-tutorial.md)
cognitive-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/language-support.md
There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (pre
| Czech (Czech) | `cs-CZ` | No |No| | Danish (Denmark) | `da-DK` | No |No| | Dutch (Netherlands) | `nl-NL` | No |No|
-| English (Australia) | `en-AU` | Yes |No|
+| English (Australia) | `en-AU` | Yes |Yes|
| English (Canada) | `en-CA` | No |Yes| | English (India) | `en-IN` | No |No| | English (Ireland) | `en-IE` | No |No|
There are two Custom Neural Voice (CNV) project types: CNV Pro and CNV Lite (pre
| Hungarian (Hungary) | `hu-HU` | No |No| | Indonesian (Indonesia) | `id-ID` | No |No| | Italian (Italy) | `it-IT` | Yes |Yes|
-| Japanese (Japan) | `ja-JP` | Yes |No|
+| Japanese (Japan) | `ja-JP` | Yes |Yes|
| Korean (Korea) | `ko-KR` | Yes |Yes| | Malay (Malaysia) | `ms-MY` | No |No| | Norwegian (Bokmål, Norway) | `nb-NO` | No |No|
cognitive-services Rest Speech To Text V3 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/rest-speech-to-text-v3-1.md
+
+ Title: Speech-to-text REST API v3.1 Public Preview - Speech service
+
+description: Get reference documentation for Speech-to-text REST API v3.1 (Public Preview).
++++++ Last updated : 07/11/2022+
+ms.devlang: csharp
+++
+# Speech-to-text REST API v3.1 (preview)
+
+The Speech-to-text REST API v3.1 is used for [Batch transcription](batch-transcription.md) and [Custom Speech](custom-speech-overview.md). It is currently in Public Preview.
+
+> [!TIP]
+> See the [Speech to Text API v3.1 preview1](https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1-preview1/) reference documentation for details. This is an updated version of the [Speech to Text API v3.0](./rest-speech-to-text.md).
+
+Use the REST API v3.1 to:
+- Copy models to other subscriptions if you want colleagues to have access to a model that you built, or if you want to deploy a model to more than one region.
+- Transcribe data from a container (bulk transcription) and provide multiple URLs for audio files.
+- Upload data from Azure storage accounts by using a shared access signature (SAS) URI.
+- Get logs for each endpoint if logs have been requested for that endpoint.
+- Request the manifest of the models that you create, to set up on-premises containers.
+
+## Changes to the v3.0 API
+
+### Batch transcription changes
+- In **Create Transcription** the following three new fields were added to properties:
+ - **displayFormWordLevelTimestampsEnabled** can be used to enable the reporting of word-level timestamps on the display form of the transcription results.
+ - **diarization** can be used to specify hints for the minimum and maximum number of speaker labels to generate when performing optional diarization (speaker separation). With this feature, the service is now able to generate speaker labels for more than two speakers.
 - **languageIdentification** can be used to specify settings for optional language identification on the input prior to transcription. Up to 10 candidate locales are supported for language identification. For the preview API, transcription can only be performed with base models for the respective locales. The ability to use custom models for transcription will be added for the GA version.
+- **Get Transcriptions**, **Get Transcription Files**, **Get Transcriptions For Project** now include a new optional parameter to simplify finding the right resource:
+ - **filter** can be used to provide a filtering expression for selecting a subset of the available resources. You can filter by displayName, description, createdDateTime, lastActionDateTime, status and locale. Example: filter=createdDateTime gt 2022-02-01T11:00:00Z
+
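The `filter` parameter above takes an OData-style expression passed in the query string. A minimal sketch of URL-encoding such an expression, assuming a hypothetical region and base path (substitute your own endpoint):

```python
from urllib.parse import urlencode

# Hypothetical base endpoint; substitute your region and resource path.
base = "https://eastus.api.cognitive.microsoft.com/speechtotext/v3.1-preview.1/transcriptions"

# Filter expression taken from the example in the text above.
params = {"filter": "createdDateTime gt 2022-02-01T11:00:00Z"}
url = f"{base}?{urlencode(params)}"
print(url)
```

The spaces and colons in the expression must be percent-encoded (`urlencode` handles this), which is easy to get wrong when building the URL by hand.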
+### Custom Speech changes
+- **Create Dataset** now supports a new data type of **LanguageMarkdown** to support upload of the new structured text data.
+ It also now supports uploading data in multiple blocks for which the following new operations were added:
 - **Upload Data Block** - Upload a block of data for the dataset. The maximum size of the block is 8 MiB.
+ - **Get Uploaded Blocks** - Get the list of uploaded blocks for this dataset.
+ - **Commit Block List** - Commit block list to complete the upload of the dataset.
+- **Get Base Models** and **Get Base Model** now provide information on the type of adaptation supported by a base model:
+  ```json
+  "features": {
+      …
+      "supportsAdaptationsWith": [
+          "Acoustic",
+          "Language",
+          "LanguageMarkdown",
+          "Pronunciation"
+      ]
+  }
+  ```
+
+|Adaptation Type |DescriptionText |
+|||
+|Acoustic |Supports adapting the model with the audio provided to adapt to the audio condition or specific speaker characteristics. |
+|Language |Supports adapting with Plain Text. |
+|LanguageMarkdown |Supports adapting with Structured Text. |
+|Pronunciation |Supports adapting with a Pronunciation File. |
+- **Create Model** has a new optional parameter under **properties** called **customModelWeightPercent** that lets you specify the weight used when the Custom Language Model (trained from plain or structured text data) is combined with the Base Language Model. Valid values are integers between 1 and 100. The default value is currently 30.
+- **Get Base Models**, **Get Datasets**, **Get Datasets For Project**, **Get Data Set Files**, **Get Endpoints**, **Get Endpoints For Project**, **Get Evaluations**, **Get Evaluations For Project**, **Get Evaluation Files**, **Get Models**, **Get Models For Project**, **Get Projects** now include a new optional parameter to simplify finding the right resource:
+ - **filter** can be used to provide a filtering expression for selecting a subset of the available resources. You can filter by displayName, description, createdDateTime, lastActionDateTime, status, locale and kind. Example: filter=locale eq 'en-US'
+
+- Added a new **Get Model Files** operation to get the files of the model identified by the given ID as well as a new **Get Model File** operation to get one specific file (identified with fileId) from a model (identified with id). This lets you retrieve a **ModelReport** file that provides information on the data processed during training.
+
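The block-upload workflow described above (**Upload Data Block**, **Get Uploaded Blocks**, **Commit Block List**) implies splitting the dataset locally before uploading. A minimal sketch of that chunking step, assuming only the 8 MiB maximum block size stated above; the upload and commit calls themselves are omitted:

```python
BLOCK_SIZE = 8 * 1024 * 1024  # 8 MiB maximum per Upload Data Block

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Yield (index, chunk) pairs, each chunk no larger than block_size."""
    for i in range(0, len(data), block_size):
        yield i // block_size, data[i : i + block_size]

# One byte over the limit produces a full block plus a 1-byte block.
blocks = list(split_into_blocks(b"x" * (BLOCK_SIZE + 1)))
print(len(blocks))        # 2
print(len(blocks[1][1]))  # 1
```

Each `(index, chunk)` pair would then be sent with an **Upload Data Block** call, and the ordered index list submitted via **Commit Block List**.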
+## Next steps
+
+- [Customize acoustic models](./how-to-custom-speech-train-model.md)
+- [Customize language models](./how-to-custom-speech-train-model.md)
+- [Get familiar with batch transcription](batch-transcription.md)
+
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
In the following tables, the parameters without the **Adjustable** row aren't ad
| Quota | Free (F0)<sup>3</sup> | Standard (S0) | |--|--|--| | **Max number of transactions per certain time period per Speech service resource** | | |
-| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) |
-| Adjustable | No<sup>4</sup> | Yes<sup>5</sup> |
+| Real-time API. Prebuilt neural voices and custom neural voices. | 20 transactions per 60 seconds | 200 transactions per second (TPS) (default value) |
+| Adjustable | No<sup>4</sup> | Yes<sup>5</sup>, up to 1000 TPS |
| **HTTP-specific quotas** | | | | Max audio length produced per request | 10 min | 10 min | | Max total number of distinct `<voice>` and `<audio>` tags in SSML | 50 | 50 |
In the following tables, the parameters without the **Adjustable** row aren't ad
| Default value | N/A | 10 | | Adjustable | N/A | Yes<sup>5</sup> |
-#### Audio Content Creation tool
+#### Audio Content Creation tool
| Quota | Free (F0)| Standard (S0) | |--|--|--|
To minimize issues related to throttling, it's a good idea to use the following
- Implement retry logic in your application. - Avoid sharp changes in the workload. Increase the workload gradually. For example, let's say your application is using text-to-speech, and your current workload is 5 TPS. The next second, you increase the load to 20 TPS (that is, four times more). Speech service immediately starts scaling up to fulfill the new load, but is unable to scale as needed within one second. Some of the requests will get response code 429 (too many requests). - Test different load increase patterns. For more information, see the [workload pattern example](#example-of-a-workload-pattern-best-practice).-- Create additional Speech service resources in the same or different regions, and distribute the workload among them. This is especially important for the text-to-speech TPS) parameter, which is set to 200 per resource, and can't be adjusted.
- Create additional Speech service resources in *different* regions, and distribute the workload among them. (Creating multiple Speech service resources in the same region doesn't affect performance, because all resources are served by the same backend cluster.)
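The retry-logic recommendation above can be sketched as a simple exponential backoff on response code 429. This is an illustrative pattern, not Speech SDK code; `send` stands in for whatever function issues the actual request:

```python
import time

def send_with_backoff(send, max_retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry a request when the service responds with 429, doubling the wait each time."""
    for attempt in range(max_retries):
        status = send()
        if status != 429:
            return status
        sleep(base_delay * (2 ** attempt))  # back off before the next attempt
    return 429

# Simulated service that throttles the first two calls, then succeeds.
responses = iter([429, 429, 200])
print(send_with_backoff(lambda: next(responses), sleep=lambda s: None))  # 200
```

Combined with a gradual load increase, this keeps transient 429 responses from surfacing as failures while the service scales up.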
The next sections describe specific cases of adjusting quotas.
Initiate the increase of the limit for concurrent requests for your resource, or
- Choose either the base or custom model. - The Azure resource information you [collected previously](#have-the-required-information-ready). - Any other required information.
-1. On the **Review + create** tab, select **Create**.
+1. On the **Review + create** tab, select **Create**.
1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request. ### Example of a workload pattern best practice
Initiate the increase of the limit for concurrent requests for your resource, or
- Choose either the base or custom model. - The Azure resource information you [collected previously](#have-the-required-information-ready). - Any other required information.
-1. On the **Review + create** tab, select **Create**.
+1. On the **Review + create** tab, select **Create**.
1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
communication-services Custom Teams Endpoint Use Cases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/interop/custom-teams-endpoint-use-cases.md
Title: Use cases for custom Teams endpoint
+ Title: Use cases for Azure Communication Services support for Teams identities
-description: This article describes use cases for a custom Teams endpoint.
+description: This article describes use cases for Azure Communication Services support for Teams identities.
-# Custom Teams Endpoint ΓÇö Use cases
+# Use cases for Azure Communication Services support for Teams identities
Microsoft Teams provides identities managed by Azure Active Directory and calling experiences controlled by the Teams admin center and policies. Users might have licenses assigned to enable PSTN connectivity and the advanced calling capabilities of Teams Phone System. Azure Communication Services supports Teams identities for managing Teams VoIP calls, Teams PSTN calls, and joining Teams meetings. Developers can extend Azure Communication Services with Graph API to provide contextual data from the Microsoft 365 ecosystem. This page provides inspiration on how to use existing Microsoft technologies to provide an end-to-end experience for calling scenarios with Teams users and Azure Communication Services calling SDKs.
communication-services Teams Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/teams-endpoint.md
Title: Build a custom Teams endpoint
+ Title: Integrate communication as a Teams user with Azure Communication Services
-description: This article discusses how to build a custom Teams endpoint.
+description: This article discusses how to integrate communication as a Teams user with Azure Communication Services.
-# Build a custom Teams endpoint
+# Integrate communication as a Teams user with Azure Communication Services and Graph API
[!INCLUDE [Public Preview](../includes/public-preview-include-document.md)]
-You can use Azure Communication Services and Graph API to build custom Teams endpoints to communicate with the Microsoft Teams client or other custom Teams endpoints. With a custom Teams endpoint, you can customize a voice, video, chat, and screen-sharing experience for Teams users.
+You can use Azure Communication Services and Graph API to integrate communication as a Teams user into your products, to communicate with other people inside and outside your organization. With Azure Communication Services support for Teams identities and Graph API, you can customize a voice, video, chat, and screen-sharing experience for Teams users.
You can use the Azure Communication Services Identity SDK to exchange Azure Active Directory (Azure AD) access tokens of Teams users for Communication Identity access tokens. The diagrams in the next sections demonstrate multitenant use cases, where fictional company Fabrikam is the customer of fictional company Contoso. ## Calling
-Voice, video, and screen-sharing capabilities are provided via Azure Communication Services Calling SDKs. The following diagram shows an overview of the process you'll follow as you integrate your calling experiences with custom Teams endpoints.
+Voice, video, and screen-sharing capabilities are provided via Azure Communication Services Calling SDKs. The following diagram shows an overview of the process you'll follow as you integrate your calling experiences with Azure Communication Services support for Teams identities.
-![Diagram of the process of enabling the calling feature for a custom Teams endpoint experience.](./media/teams-identities/teams-identity-calling-overview.svg)
+![Diagram of the process to integrate the calling capabilities into your product with Azure Communication Services.](./media/teams-identities/teams-identity-calling-overview.svg)
## Chat
-Optionally, you can also use custom Teams endpoints to integrate chat capabilities by using Graph APIs. For more information about the Graph API, see the [chat resource type](/graph/api/channel-post-messages) documentation.
+Optionally, you can also use Graph API to integrate chat capabilities into your product. For more information about the Graph API, see the [chat resource type](/graph/api/channel-post-messages) documentation.
-![Diagram of the process of enabling the chat feature for a custom Teams endpoint experience.](./media/teams-identities/teams-identity-chat-overview.png)
+![Diagram of the process to integrate the chat capabilities into your product with Graph API.](./media/teams-identities/teams-identity-chat-overview.png)
## Azure Communication Services permissions
communication-services Pre Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/pre-call-diagnostics.md
In the case that devices are not available, the user shouldn't continue into joi
### InCall diagnostics Performs a quick call to check in-call metrics for audio and video and provides results back. Includes connectivity (`connected`, boolean), bandwidth quality (`bandWidth`, `'Bad' | 'Average' | 'Good'`) and call diagnostics for audio and video (`diagnostics`). Diagnostic are provided `jitter`, `packetLoss` and `rtt` and results are generated using a simple quality grade (`'Bad' | 'Average' | 'Good'`).
+InCall diagnostics leverages [media quality stats](./media-quality-sdk.md) to calculate quality scores and diagnose issues. During the pre-call diagnostic, the full set of media quality stats is available for consumption. These include raw values across video and audio metrics that can be used programmatically. The InCall diagnostic provides a convenience layer on top of media quality stats to consume the results without the need to process all the raw data. See the section on media stats for instructions on how to access them.
+ ```javascript const inCallDiagnostics = await preCallDiagnosticsResult.inCallDiagnostics;
At this step, there are multiple failure points to watch out for:
- If bandwidth is `Bad`, the user should be prompted to try out a different network or verify the bandwidth availability on their current one. Ensure no other high bandwidth activities might be taking place. ### Media stats
-For granular stats on quality metrics like jitter, packet loss, rtt, etc. `callMediaStatistics` are provided as part of the `preCallDiagnosticsResult` feature. You can subscribe to the call media stats to get full collection of them.
+For granular stats on quality metrics like jitter, packet loss, rtt, etc., `callMediaStatistics` are provided as part of the `preCallDiagnosticsResult` feature. See the [full list and description of the available metrics](./media-quality-sdk.md) in the linked article. You can subscribe to the call media stats to get the full collection of them. These are the raw metrics used to calculate InCall diagnostic results, and they can be consumed granularly for further analysis.
## Pricing
communication-services Eligible Teams Licenses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/eligible-teams-licenses.md
The following articles might be of interest to you:
- Try [quickstart for authentication of Teams users](./manage-teams-identity.md). - Try [quickstart for calling to a Teams user](./voice-video-calling/get-started-with-voice-video-calling-custom-teams-client.md).-- Learn more about [Custom Teams endpoint](../concepts/teams-endpoint.md)
+- Learn more about [Azure Communication Services support for Teams identities](../concepts/teams-endpoint.md)
- Learn more about [Teams interoperability](../concepts/teams-interop.md)
communication-services Manage Teams Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/manage-teams-identity.md
[!INCLUDE [Public Preview](../../communication-services/includes/public-preview-include-document.md)]
-In this quickstart, you'll build a .NET console application to authenticate a Microsoft 365 user by using the Microsoft Authentication Library (MSAL) and retrieving a Microsoft Azure Active Directory (Azure AD) user token. You'll then exchange that token for an access token of Teams user with the Azure Communication Services Identity SDK. The access token for Teams user can then be used by the Communication Services Calling SDK to build a custom Teams endpoint.
+In this quickstart, you'll build a .NET console application to authenticate a Microsoft 365 user by using the Microsoft Authentication Library (MSAL) and retrieve a Microsoft Azure Active Directory (Azure AD) user token. You'll then exchange that token for an access token of a Teams user with the Azure Communication Services Identity SDK. The access token of the Teams user can then be used by the Communication Services Calling SDK to integrate calling capabilities as a Teams user.
> [!NOTE]
> When you're in a production environment, we recommend that you implement this exchange mechanism in back-end services, because requests for an exchange are signed with a secret.
The following sections will guide you through the steps for administrators, developers, and users.
The Administrator role has extended permissions in Azure AD. Members of this role can set up resources and can read information from the Azure portal. In the following diagram, you can see all actions that have to be executed by Administrators.
-![Administrator actions to enable custom Teams endpoint experience](./media/teams-identities/teams-identity-admin-overview.svg)
+![Administrator actions to enable Azure Communication Services support for Teams identities.](./media/teams-identities/teams-identity-admin-overview.svg)
1. The Contoso Administrator creates or selects an existing *application* in Azure Active Directory. The property *Supported account types* defines whether users from various tenants can authenticate to the application. The property *Redirect URI* redirects a successful authentication request to the Contoso *server*.
1. The Contoso Administrator adds API permissions to `Teams.ManageCalls` and `Teams.ManageChats` from Communication Services.
The Contoso developer needs to set up the *client application* to authenticate users.
The developer's required actions are shown in following diagram:
-![Diagram of developer actions to enable the custom Teams endpoint experience.](./media/teams-identities/teams-identity-developer-overview.svg)
+![Diagram of developer actions to enable Azure Communication Services support for Teams identities.](./media/teams-identities/teams-identity-developer-overview.svg)
1. The Contoso developer configures the Microsoft Authentication Library (MSAL) to authenticate the user for the application that was created earlier by the Administrator for Communication Services Teams.ManageCalls and Teams.ManageChats permissions.
1. The Contoso developer initializes the Communication Services Identity SDK and exchanges the incoming Azure AD user token for the access token of Teams user via the identity SDK. The access token of Teams user is then returned to the *client application*.
For more information about setting up environments in public documentation, see
The user represents the Fabrikam users of the Contoso application. The user experience is shown in the following diagram:
-![Diagram of user actions to enable the custom Teams endpoint experience.](./media/teams-identities/teams-identity-user-overview.svg)
+![Diagram of user actions to enable Azure Communication Services support for Teams identities.](./media/teams-identities/teams-identity-user-overview.svg)
1. The Fabrikam user uses the Contoso *client application* and is prompted to authenticate.
1. The Contoso *client application* uses the MSAL to authenticate the user against the Fabrikam Azure AD tenant for the Contoso application with Communication Services Teams.ManageCalls and Teams.ManageChats permissions.
1. Authentication is redirected to the *server*, as defined in the property *Redirect URI* in the MSAL and the Contoso application.
1. The Contoso *server* exchanges the Azure AD user token for the access token of Teams user by using the Communication Services Identity SDK and returns the access token of Teams user to the *client application*.
-With a valid access token for Teams user in the *client application*, developers can integrate the Communication Services Calling SDK and build a custom Teams endpoint.
+With a valid access token for Teams user in the *client application*, developers can integrate the Communication Services Calling SDK and manage calls as Teams user.
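The exchange sequence described above can be sketched with stubbed dependencies. The `msalAuthenticate` and `exchangeForTeamsToken` parameters below stand in for the MSAL sign-in and the back-end Identity SDK exchange; they are placeholders, not real SDK calls:

```javascript
// Sketch of the token-exchange flow. The two function parameters are
// placeholders for the MSAL sign-in and the server-side exchange.
async function getTeamsAccessToken(user, msalAuthenticate, exchangeForTeamsToken) {
  // 1. The client application authenticates the user via MSAL and
  //    receives an Azure AD user token.
  const aadToken = await msalAuthenticate(user);
  // 2. The server exchanges the Azure AD token for an access token of a
  //    Teams user. Keep this step on the back end, because the exchange
  //    request is signed with a secret.
  const teamsToken = await exchangeForTeamsToken(aadToken);
  // 3. The Teams user access token is returned to the client for use
  //    with the Calling SDK.
  return teamsToken;
}
```

The key design point is the split: the client only ever sees the MSAL sign-in and the final Teams user token, while the secret-bearing exchange stays on the server.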
## Next steps
In this quickstart, you learned how to:
Learn about the following concepts: -- [Custom Teams endpoint](../concepts/teams-endpoint.md)
+- [Azure Communication Services support Teams identities](../concepts/teams-endpoint.md)
- [Teams interoperability](../concepts/teams-interop.md)
communication-services Trusted Auth Sample https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/samples/trusted-auth-sample.md
This repository provides a sample of a server implementation of an authentication service.
This sample can help you in the following scenarios:
- As a developer, you need to enable an authentication flow to generate Azure Communication Services user identities mapped to an Azure Active Directory identity. Using this identity, you'll then provision access tokens to be used in calling and chat experiences.
-- As a developer, you need to enable an authentication flow for Custom Teams Endpoint, which is done by using an Microsoft 365 Azure Active Directory identity of a Teams' user to fetch an Azure Communication Services token to be able to join Teams calling/chat.
+- As a developer, you need to enable an authentication flow for Azure Communication Services support for Teams identities, which is done by using a Microsoft 365 Azure Active Directory identity of a Teams user to fetch an Azure Communication Services token to be able to join Teams calling/chat.
> [!NOTE]
> If you are looking to get started with Azure Communication Services, but are still in the learning / prototyping phases, check out our [quickstarts for getting started with Azure communication services users and access tokens](../quickstarts/access-tokens.md?pivots=programming-language-csharp).
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
# Securing a custom VNET in Azure Container Apps
-Firewall settings Network Security Groups (NSGs) needed to configure virtual networks closely resemble the settings required by Kubernetes.
+Network Security Groups (NSGs) needed to configure virtual networks closely resemble the settings required by Kubernetes.
-Some outbound dependencies of Azure Kubernetes Service (AKS) clusters rely exclusively on fully qualified domain names (FQDN), therefore securing an AKS cluster purely with NSGs isn't possible. Refer to [Control egress traffic for cluster nodes in Azure Kubernetes Service](../aks/limit-egress-traffic.md) for details.
+You can lock down a network via NSGs with more restrictive rules than the default NSG rules to control all inbound and outbound traffic for the Container App Environment.
-* You can lock down a network via NSGs with more restrictive rules than the default NSG rules.
-* To fully secure a cluster, use a combination of NSGs and a firewall.
+Using custom user-defined routes (UDRs) or ExpressRoutes, other than with UDRs of selected destinations that you own, is not yet supported for Container App Environments with VNETs. Therefore, securing a Container App Environment with a firewall is not yet supported.
## NSG allow rules
The following tables describe how to configure a collection of NSG allow rules.
| Protocol | Port | ServiceTag | Description |
|--|--|--|--|
-| Any | \* | Control plane subnet address space | Allow communication between IPs in the control plane subnet. This address is passed to as a parameter when you create an environment. For example, `10.0.0.0/21`. |
-| Any | \* | App subnet address space | Allow communication between nodes in the app subnet. This address is passed as a parameter when you create an environment. For example, `10.0.8.0/21`. |
+| Any | \* | Infrastructure subnet address space | Allow communication between IPs in the infrastructure subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/23`. |
+| Any | \* | AzureLoadBalancer | Allow the Azure infrastructure load balancer to communicate with your environment. |
### Outbound with ServiceTags
The following tables describe how to configure a collection of NSG allow rules.
### Outbound with wild card IP rules
-As the following rules require allowing all IPs, use a Firewall solution to lock down to specific FQDNs.
-
| Protocol | Port | IP | Description |
|--|--|--|--|
-| TCP | `443` | \* | Allow all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. |
-| UDP | `123` | \* | NTP server. If using firewall, allowlist `ntp.ubuntu.com:123`. |
-| Any | \* | Control plane subnet address space | Allow communication between IPs in the control plane subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/21`. |
-| Any | \* | App subnet address space | Allow communication between nodes in the App subnet. This address is passed as a parameter when you create an environment. For example, `10.0.8.0/21`. |
-
-## Firewall configuration
-
-### Outbound FQDN dependencies
-
-| FQDN | Protocol | Port | Description |
-|--|--|--|--|
-| `*.hcp.<REGION>.azmk8s.io` | HTTPS | `443` | Required for internal AKS secure connection between nodes and control plane. |
-| `mcr.microsoft.com` | HTTPS | `443` | Required to access images in Microsoft Container Registry (MCR). This registry contains first-party images and charts (for example, coreDNS). These images are required for the correct creation and functioning of the cluster, including scale and upgrade operations. |
-| `*.data.mcr.microsoft.com` | HTTPS | `443` | Required for MCR storage backed by the Azure content delivery network (CDN). |
-| `management.azure.com` | HTTPS | `443` | Required for Kubernetes operations against the Azure API. |
-| `login.microsoftonline.com` | HTTPS | `443` | Required for Azure Active Directory authentication. |
-| `packages.microsoft.com` | HTTPS | `443` | This address is the Microsoft packages repository used for cached apt-get operations. Example packages include Moby, PowerShell, and Azure CLI. |
-| `acs-mirror.azureedge.net` | HTTPS | `443` | This address is for the repository required to download and install required binaries like `kubenet` and Azure Container Networking Interface. |
-| `dc.services.visualstudio.com` | HTTPS | `443` | This endpoint is used for metrics and monitoring using Azure Monitor. |
-| `*.ods.opinsights.azure.com` | HTTPS | `443` | This endpoint is used by Azure Monitor for ingesting log analytics data. |
-| `*.oms.opinsights.azure.com` | HTTPS | `443` | This endpoint is used by `omsagent`, which is used to authenticate the log analytics service. |
-| `*.monitoring.azure.com` | HTTPS | `443` | This endpoint is used to send metrics data to Azure Monitor. |
+| TCP | `443` | \* | Allowing all outbound on port `443` provides a way to allow all FQDN based outbound dependencies that don't have a static IP. |
+| UDP | `123` | \* | NTP server. |
+| Any | \* | Infrastructure subnet address space | Allow communication between IPs in the infrastructure subnet. This address is passed as a parameter when you create an environment. For example, `10.0.0.0/23`. |
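To make the allow rules above concrete, here's a minimal sketch that models them as data and checks a flow against them. The rule objects and the example `10.0.` infrastructure-subnet prefix are simplified illustrations, not the Azure NSG resource schema:

```javascript
// Sketch: the outbound allow rules above as data, plus a matcher.
// Rule shapes are simplified illustrations of the table, not the NSG schema.
const outboundAllowRules = [
  { protocol: "TCP", port: 443, destPrefix: "*" },    // FQDN-based dependencies
  { protocol: "UDP", port: 123, destPrefix: "*" },    // NTP
  { protocol: "Any", port: "*", destPrefix: "10.0." } // infrastructure subnet (example range)
];

// Returns true if any rule permits the given flow.
function isAllowed(flow, rules) {
  return rules.some(
    (r) =>
      (r.protocol === "Any" || r.protocol === flow.protocol) &&
      (r.port === "*" || r.port === flow.port) &&
      (r.destPrefix === "*" || flow.dest.startsWith(r.destPrefix))
  );
}
```

For example, outbound TCP 443 to any address matches the first rule, while outbound TCP 80 to an external address matches none of them and would be blocked by a deny-by-default NSG.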
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
+
+ Title: Built-in policy definitions for Azure Container Apps
+description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources.
++ Last updated : 07/08/2022++++
+# Azure Policy built-in definitions for Azure Container Apps
+
+This page is an index of [Azure Policy](../governance/policy/overview.md) built-in policy
+definitions for Azure Container Apps. For additional Azure Policy built-ins for other services, see
+[Azure Policy built-in definitions](../governance/policy/samples/built-in-policies.md).
+
+The name of each built-in policy definition links to the policy definition in the Azure portal. Use
+the link in the **Version** column to view the source on the
+[Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+
+## Policy definitions
++++
+## Next steps
+
+- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
+- Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
data-factory Concepts Data Flow Debug Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-debug-mode.md
Previously updated : 10/01/2021 Last updated : 07/12/2022

# Mapping data flow Debug Mode
The default IR used for debug mode in data flows is a small 4-core single worker
## Data preview
-With debug on, the Data Preview tab will light-up on the bottom panel. Without debug mode on, Data Flow will show you only the current metadata in and out of each of your transformations in the Inspect tab. The data preview will only query the number of rows that you have set as your limit in your debug settings. Click **Refresh** to fetch the data preview.
+With debug on, the Data Preview tab lights up on the bottom panel. Without debug mode on, Data Flow shows you only the current metadata in and out of each of your transformations in the Inspect tab. The data preview only queries the number of rows that you have set as your limit in your debug settings. Select **Refresh** to update the data preview based on your current transformations. If your source data has changed, select **Refresh** > **Refetch from source**.
:::image type="content" source="media/data-flow/datapreview.png" alt-text="Data preview":::
data-factory Deploy Linked Arm Templates With Vsts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/deploy-linked-arm-templates-with-vsts.md
The scenario we walk through here is to deploy VNet with a Network Security Group.
- Linked ARM template:
  - For Template, point to ArmTemplate_master.json instead of ArmTemplateForFactory.json
- - For Template Parameters, point to 'ArmTemplateParamter_master.json' instead of 'ArmTemplateParametersForFactory.json'
+ - For Template Parameters, point to 'ArmTemplateParameters_master.json' instead of 'ArmTemplateParametersForFactory.json'
- Under override Template parameters, update two additional parameters:
  - **containerUri** - Paste the URL of the container created above.
  - **containerSasToken** - If the secret's name is 'StorageSASToken', enter '$(StorageSASToken)' for this value.
The scenario we walk through here is to deploy VNet with a Network Security Group.
1. Save the release pipeline and trigger a release.

## Next steps
-- [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
+- [Automate continuous integration using Azure Pipelines releases](continuous-integration-delivery-automate-azure-pipelines.md)
defender-for-cloud Adaptive Application Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-application-controls.md
No enforcement options are currently available. Adaptive application controls ar
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers)|
|Supported machines:|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure and non-Azure machines running Windows and Linux<br>:::image type="icon" source="./media/icons/yes-icon.png"::: [Azure Arc](../azure-arc/index.yml) machines|
|Required roles and permissions:|**Security Reader** and **Reader** roles can both view groups and the lists of known-safe applications<br>**Contributor** and **Security Admin** roles can both edit groups and the lists of known-safe applications|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
defender-for-cloud Adaptive Network Hardening https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/adaptive-network-hardening.md
This page explains how to configure and manage adaptive network hardening in Def
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers)|
|Required roles and permissions:|Write permissions on the machine's NSGs|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Connected AWS accounts|
defender-for-cloud Defender For Containers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-containers-introduction.md
Title: Container security with Microsoft Defender for Cloud description: Learn about Microsoft Defender for Containers Previously updated : 06/28/2022 Last updated : 07/12/2022

# Overview of Microsoft Defender for Containers
-Microsoft Defender for Containers is the cloud-native solution for securing your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications.
+Microsoft Defender for Containers is the cloud-native solution that is used to secure your containers so you can improve, monitor, and maintain the security of your clusters, containers, and their applications.
-[How does Defender for Containers work in each Kubernetes platform?](defender-for-containers-architecture.md)
+Defender for Containers assists you with the three core aspects of container security:
-You can learn more by watching this video from the Defender for Cloud in the Field video series:
-- [Microsoft Defender for Containers](episode-three.md)
+- [**Environment hardening**](#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.
+
+- [**Vulnerability assessment**](#vulnerability-assessment) - Vulnerability assessment and management tools for images stored in ACR registries and running in Azure Kubernetes Service.
+
+- [**Run-time threat protection for nodes and clusters**](#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.
+
+You can learn more by watching this video from the Defender for Cloud in the Field video series: [Microsoft Defender for Containers](episode-three.md).
## Microsoft Defender for Containers plan availability
You can learn more by watching this video from the Defender for Cloud in the Field video series:
| Required roles and permissions: | • To auto provision the required components, see the [permissions for each of the components](enable-data-collection.md?tabs=autoprovision-containers)<br> • **Security admin** can dismiss alerts<br> • **Security reader** can view vulnerability assessment findings<br> See also [Azure Container Registry roles and permissions](../container-registry/container-registry-roles.md) |
| Clouds: | **Azure**:<br>:::image type="icon" source="./medi#defender-for-containers-feature-availability). |
-## What are the benefits of Microsoft Defender for Containers?
-
-Defender for Containers helps with the core aspects of container security:
-- [**Environment hardening**](#hardening) - Defender for Containers protects your Kubernetes clusters whether they're running on Azure Kubernetes Service, Kubernetes on-premises/IaaS, or Amazon EKS. Defender for Containers continuously assesses clusters to provide visibility into misconfigurations and guidelines to help mitigate identified threats.
-
-- [**Vulnerability assessment**](#vulnerability-assessment) - Vulnerability assessment and management tools for images **stored** in ACR registries and **running** in Azure Kubernetes Service.
-
-- [**Run-time threat protection for nodes and clusters**](#run-time-protection-for-kubernetes-nodes-and-clusters) - Threat protection for clusters and Linux nodes generates security alerts for suspicious activities.
-

## Hardening

### Continuous monitoring of your Kubernetes clusters - wherever they're hosted
-Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations. Use Defender for Cloud's **recommendations page** to view recommendations and remediate issues. For details of the relevant Defender for Cloud recommendations that might appear for this feature, see the [compute section](recommendations-reference.md#recs-container) of the recommendations reference table.
+Defender for Cloud continuously assesses the configurations of your clusters and compares them with the initiatives applied to your subscriptions. When it finds misconfigurations, Defender for Cloud generates security recommendations that are available on Defender for Cloud's Recommendations page. The recommendations allow you to investigate and remediate issues. For details on the recommendations that might appear for this feature, check out the [compute section](recommendations-reference.md#recs-container) of the recommendations reference table.
-For Kubernetes clusters on EKS, you'll need to [connect your AWS account to Microsoft Defender for Cloud](quickstart-onboard-aws.md). Then ensure you've enabled the CSPM plan.
+For Kubernetes clusters on EKS, you'll need to [connect your AWS account to Microsoft Defender for Cloud](quickstart-onboard-aws.md) and ensure you've enabled the CSPM plan.
-When reviewing the outstanding recommendations for your container-related resources, whether in asset inventory or the recommendations page, you can use the resource filter:
+You can use the resource filter to review the outstanding recommendations for your container-related resources, whether in asset inventory or the recommendations page:
### Kubernetes data plane hardening
-To protect the workloads of your Kubernetes containers with tailored recommendations, install the **Azure Policy for Kubernetes**. You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma).
+To protect the workloads of your Kubernetes containers with tailored recommendations, you can install the [Azure Policy for Kubernetes](../governance/policy/concepts/policy-for-kubernetes.md). You can also auto deploy this component as explained in [enable auto provisioning of agents and extensions](enable-data-collection.md#auto-provision-mma).
-With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure to **enforce** the best practices and mandate them for future workloads.
+With the add-on on your AKS cluster, every request to the Kubernetes API server will be monitored against the predefined set of best practices before being persisted to the cluster. You can then configure it to enforce the best practices and mandate them for future workloads.
For example, you can mandate that privileged containers shouldn't be created, and any future requests to do so will be blocked.
-Learn more in [Kubernetes data plane hardening](kubernetes-workload-protections.md).
+You can learn more about [Kubernetes data plane hardening](kubernetes-workload-protections.md).
## Vulnerability assessment
Learn more in [Kubernetes data plane hardening](kubernetes-workload-protections.md).
Defender for Containers includes an integrated vulnerability scanner for scanning images in Azure Container Registry registries. The vulnerability scanner runs on an image:
-- When you push the image to your registry
-- Weekly on any image that was pulled within the last 30
-- When you import the image to your Azure Container Registry
-- Continuously in specific situations
+ - When you push the image to your registry
 + - Weekly on any image that was pulled within the last 30 days
+ - When you import the image to your Azure Container Registry
+ - Continuously in specific situations
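The scan triggers above can be sketched as a predicate. The event field names and the weekly-rescan reading of the 30-day rule are illustrative assumptions, not the service's internal logic:

```javascript
// Sketch: decide whether an image is due for a scan, based on the
// triggers listed above. Timestamps are milliseconds since the epoch.
const DAY_MS = 24 * 60 * 60 * 1000;

function isScanDue(image, now) {
  // Pushed or imported images are scanned right away.
  if (image.justPushed || image.justImported) return true;
  // Images pulled within the last 30 days are rescanned weekly.
  const pulledRecently = now - image.lastPulledAt <= 30 * DAY_MS;
  const lastScanIsOld = now - image.lastScannedAt >= 7 * DAY_MS;
  return pulledRecently && lastScanIsOld;
}
```

The point of the sketch is the two distinct paths: event-driven scans on push/import, and time-driven rescans gated on recent pull activity.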
Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
Learn more in [Vulnerability assessment](defender-for-containers-usage.md).
### View vulnerabilities for running images
-The recommendation **Running container images should have vulnerability findings resolved** shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non-ACR registry, will appear under the **Not applicable** tab.
+The recommendation `Running container images should have vulnerability findings resolved` shows vulnerabilities for running images by using the scan results from ACR registries and information on running images from the Defender security profile/extension. Images that are deployed from a non-ACR registry, will appear under the Not applicable tab.
:::image type="content" source="media/defender-for-containers/running-image-vulnerabilities-recommendation.png" alt-text="Screenshot showing where the recommendation is viewable." lightbox="media/defender-for-containers/running-image-vulnerabilities-recommendation-expanded.png":::
The recommendation **Running container images should have vulnerability findings resolved**
Defender for Containers provides real-time threat protection for your containerized environments and generates alerts for suspicious activities. You can use this information to quickly remediate security issues and improve the security of your containers. Threat protection at the cluster level is provided by the Defender profile and analysis of the Kubernetes audit logs. Examples of events at this level include exposed Kubernetes dashboards, creation of high-privileged roles, and the creation of sensitive mounts.
-In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes **host-level threat detection** with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload. Our global team of security researchers constantly monitor the threat landscape. They add container-specific alerts and vulnerabilities as they're discovered.
+In addition, our threat detection goes beyond the Kubernetes management layer. Defender for Containers includes host-level threat detection with over 60 Kubernetes-aware analytics, AI, and anomaly detections based on your runtime workload.
This solution monitors the growing attack surface of multicloud Kubernetes deployments and tracks the [MITRE ATT&CK® matrix for Containers](https://www.microsoft.com/security/blog/2021/04/29/center-for-threat-informed-defense-teams-up-with-microsoft-partners-to-build-the-attck-for-containers-matrix/), a framework that was developed by the [Center for Threat-Informed Defense](https://mitre-engenuity.org/ctid/) in close partnership with Microsoft and others.
-The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster).
-
## FAQ - Defender for Containers

- [What are the options to enable the new plan at scale?](#what-are-the-options-to-enable-the-new-plan-at-scale)
The full list of available alerts can be found in the [Reference table of alerts](alerts-reference.md#alerts-k8scluster).
### What are the options to enable the new plan at scale?
-We've rolled out a new policy in Azure Policy, **Configure Microsoft Defender for Containers to be enabled**, to make it easier to enable the new plan at scale.
+You can use the Azure Policy `Configure Microsoft Defender for Containers to be enabled` to enable Defender for Containers at scale. You can also see all of the options that are available to [enable Microsoft Defender for Containers](defender-for-containers-enable.md).
### Does Microsoft Defender for Containers support AKS clusters with virtual machines scale sets?
No, AKS is a managed service, and manipulation of the IaaS resources isn't supported.
## Learn More
-Learn more about Defender for Containers:
+Learn more about Defender for Containers in the following blogs:
- [Introducing Microsoft Defender for Containers](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/introducing-microsoft-defender-for-containers/ba-p/2952317)
- [Demonstrating Microsoft Defender for Cloud](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/how-to-demonstrate-the-new-containers-features-in-microsoft/ba-p/3281172)
-- The release state of Defender for Containers is broken down by two dimensions: environment and feature. So, for example:
+
+The release state of Defender for Containers is broken down by two dimensions: environment and feature. So, for example:
- **Kubernetes data plane recommendations** for AKS clusters are GA - **Kubernetes data plane recommendations** for EKS clusters are preview
defender-for-cloud Defender For Servers Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/defender-for-servers-introduction.md
Title: Microsoft Defender for Servers - the benefits and features description: Learn about the benefits and features of Microsoft Defender for Servers. Previously updated : 06/29/2022 Last updated : 07/13/2022

# Overview of Microsoft Defender for Servers
-Microsoft Defender for Servers is one of the enhanced security features of Microsoft Defender for Cloud. Use it to add threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, and on-premises environment.
+Defender for Servers is one of the enhanced security features available in Microsoft Defender for Cloud. You can use it to add threat detection and advanced defenses to your Windows and Linux machines that exist in hybrid and multicloud environments.
-To protect machines in hybrid and multicloud environments, Defender for Cloud uses [Azure Arc](../azure-arc/index.yml). Connect your hybrid and multicloud machines as explained in the relevant quickstart:
-- [Connect your non-Azure machines to Microsoft Defender for Cloud](quickstart-onboard-machines.md)-- [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md)
+To protect your machines, Defender for Cloud uses [Azure Arc](../azure-arc/index.yml). You can [Connect your non-Azure machines to Microsoft Defender for Cloud](quickstart-onboard-machines.md), [Connect your AWS accounts to Microsoft Defender for Cloud](quickstart-onboard-aws.md) or [Connect your GCP projects to Microsoft Defender for Cloud](quickstart-onboard-gcp.md).
> [!TIP]
-> For details of which Defender for Servers features are relevant for machines running on other cloud environments, see [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers).
+> You can check out the [Supported features for virtual machines and servers](supported-machines-endpoint-solutions-clouds-servers.md?tabs=features-windows#supported-features-for-virtual-machines-and-servers) for details on which Defender for Servers features are relevant for machines running on other cloud environments.
You can learn more by watching these videos from the Defender for Cloud in the Field video series:

- [Microsoft Defender for Servers](episode-five.md)
- [Enhanced workload protection features in Defender for Servers](episode-twelve.md)
- [Deploy Defender for Servers in AWS and GCP](episode-fourteen.md)
-## What are the Microsoft Defender for server plans?
+## Available Defender for Server plans
-Microsoft Defender for Servers provides threat detection and advanced defenses to your Windows and Linux machines whether they're running in Azure, AWS, GCP, or on-premises. Microsoft Defender for Servers is available in two plans:
+Defender for Servers offers you a choice between two paid plans:
-- **Microsoft Defender for Servers Plan 1** - deploys Microsoft Defender for Endpoint to your servers and provides these capabilities:
- - Microsoft Defender for Endpoint licenses are charged per hour instead of per seat, lowering costs for protecting virtual machines only when they are in use.
- - Microsoft Defender for Endpoint deploys automatically to all cloud workloads so that you know they're protected when they spin up.
- - Alerts and vulnerability data from Microsoft Defender for Endpoint is shown in Microsoft Defender for Cloud
+| Feature | [Defender for Servers Plan 1](#plan-1) | [Defender for Servers Plan 2](#plan-2-formerly-defender-for-servers) |
+|:|::|::|
+| Automatic onboarding for resources in Azure, AWS, GCP | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Microsoft threat and vulnerability management | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| [Integration of Microsoft Defender for Cloud and Microsoft Defender for Endpoint](#integrated-license-for-microsoft-defender-for-endpoint) (alerts, software inventory, Vulnerability Assessment) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Security Policy and Regulatory Compliance | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Log-analytics (500 MB free) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| [Vulnerability Assessment using Qualys](#vulnerability-scanner-powered-by-qualys) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| Threat detections: OS level, network layer, control plane | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| [Adaptive application controls](#adaptive-application-controls-aac) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| [File integrity monitoring](#file-integrity-monitoring-fim) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| [Just-in time VM access](#just-in-time-jit-virtual-machine-vm-access) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+| [Adaptive network hardening](#adaptive-network-hardening-anh) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
+
+### Plan 1
+
+Plan 1 includes the following benefits:
+
+- Automatic onboarding for resources in Azure, AWS, GCP
+- Microsoft threat and vulnerability management
+- Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal
+- A Microsoft Defender for Endpoint subscription that includes access to alerts, software inventory, Vulnerability Assessment, and automatic integration with Microsoft Defender for Cloud.
-- **Microsoft Defender for Servers Plan 2** (formerly Defender for Servers) - includes the benefits of Plan 1 and support for all of the other Microsoft Defender for Servers features.
+The subscription to Microsoft Defender for Endpoint allows you to deploy Defender for Endpoint to your servers. Defender for Endpoint includes the following capabilities:
+
+- Licenses are charged per hour instead of per seat, lowering your costs by protecting virtual machines only while they're in use.
+- Microsoft Defender for Endpoint deploys automatically to all cloud workloads so that you know they're protected when they spin up.
+- Alerts and vulnerability data is shown in Microsoft Defender for Cloud.
+
+### Plan 2 (formerly Defender for Servers)
+
+Plan 2 includes all of the benefits of Plan 1, plus all of the other Microsoft Defender for Servers features listed in the [table above](#available-defender-for-server-plans).
For pricing details in your currency of choice and according to your region, see the [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/).
-To enable the Microsoft Defender for Servers plans:
+## Select a plan
-1. Go to **Environment settings** and select your subscription.
-2. If Microsoft Defender for Servers isn't enabled, set it to **On**.
- Plan 2 is selected by default.
+You can select your plan when you [enable enhanced security features on your subscriptions and workspaces](enable-enhanced-security.md#enable-enhanced-security-features-on-your-subscriptions-and-workspaces). By default, Plan 2 is selected when you set the Defender for Servers plan to **On**.
- If you want to change the Defender for Servers plan:
- 1. In the **Plan/Pricing** column, select **Change plan**.
- 2. Select the plan that you want and select **Confirm**.
+If, at any point, you want to change the Defender for Servers plan, you can do so on the Defender plans page by selecting **Change plan**.
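For automation scenarios, the plan can also be set programmatically through the Microsoft.Security `pricings` resource. The sketch below only constructs the request URL and body; it doesn't call the API. The subscription ID is a placeholder, and the `api-version` and `subPlan` values are assumptions to verify against the current REST reference before use.

```python
import json

# Placeholder subscription ID for illustration only.
subscription_id = "00000000-0000-0000-0000-000000000000"

# The Microsoft.Security/pricings resource controls the Defender plan for a
# resource type; "VirtualMachines" corresponds to Defender for Servers.
url = (
    "https://management.azure.com"
    f"/subscriptions/{subscription_id}"
    "/providers/Microsoft.Security/pricings/VirtualMachines"
    "?api-version=2022-03-01"  # assumed version; check the REST reference
)

# pricingTier "Standard" enables the plan; subPlan selects P1 or P2
# (assumed property values; confirm against the API documentation).
body = {"properties": {"pricingTier": "Standard", "subPlan": "P2"}}

print(url)
print(json.dumps(body))
```

A PUT request with this URL and body (using your preferred HTTP client and an Azure access token) would then apply the plan to the subscription.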
-The following table describes what's included in each plan at a high level.
-| Feature | Defender for Servers Plan 1 | Defender for Servers Plan 2 |
-|:|::|::|
-| Automatic onboarding for resources in Azure, AWS, GCP | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Microsoft threat and vulnerability management | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Flexibility to use Microsoft Defender for Cloud or Microsoft 365 Defender portal | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Integration of Microsoft Defender for Cloud and Microsoft Defender for Endpoint (alerts, software inventory, Vulnerability Assessment) | :::image type="icon" source="./media/icons/yes-icon.png"::: | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Security Policy and Regulatory Compliance | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Log-analytics (500 MB free) | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Vulnerability Assessment using Qualys | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Threat detections: OS level, network layer, control plane | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Adaptive application controls | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| File integrity monitoring | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Just-in time VM access | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Adaptive network hardening | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-<!-- | Future – TVM P2 | | :::image type="icon" source="./media/icons/yes-icon.png"::: |
-| Future – disk scanning insights | | :::image type="icon" source="./media/icons/yes-icon.png"::: | -->
+## Benefits of the Defender for Servers plans
-## What are the benefits of Defender for Servers?
+Defender for Servers provides the following threat detection and protection capabilities:
-The threat detection and protection capabilities provided with Microsoft Defender for Servers include:
+### Plan 1 & Plan 2
-- **Integrated license for Microsoft Defender for Endpoint** - Microsoft Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. When you enable Microsoft Defender for Servers, Defender for Cloud gets access to the Microsoft Defender for Endpoint data that is related to vulnerabilities, installed software, and alerts for your endpoints.
+#### Microsoft threat and vulnerability management
- When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown in Defender for Cloud. From Defender for Cloud, you can also pivot to the Defender for Endpoint console, and perform a detailed investigation to uncover the scope of the attack. For more information, see [Protect your endpoints](integration-defender-for-endpoint.md).
+Defender for Servers includes a selection of vulnerability discovery and management tools for your machines. You can select which tools to deploy, and the discovered vulnerabilities are shown in a security recommendation.
-- **Vulnerability assessment tools for machines** - Microsoft Defender for Servers includes a choice of vulnerability discovery and management tools for your machines. From Defender for Cloud's settings pages, you can select the tools to deploy to your machines. The discovered vulnerabilities are shown in a security recommendation.
+[Threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt) discovers vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, without the need for other agents or periodic scans. It prioritizes vulnerabilities according to the threat landscape, detections in your organization, sensitive information on vulnerable devices, and the business context. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md).
- - **Microsoft threat and vulnerability management** - Discover vulnerabilities and misconfigurations in real time with Microsoft Defender for Endpoint, and without the need of other agents or periodic scans. [Threat and vulnerability management](/microsoft-365/security/defender-endpoint/next-gen-threat-and-vuln-mgt) prioritizes vulnerabilities according to the threat landscape, detections in your organization, sensitive information on vulnerable devices, and the business context. Learn more in [Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management](deploy-vulnerability-assessment-tvm.md)
+#### Integrated license for Microsoft Defender for Endpoint
- - **Vulnerability scanner powered by Qualys** - The Qualys scanner is one of the leading tools for real-time identification of vulnerabilities in your Azure and hybrid virtual machines. You don't need a Qualys license or even a Qualys account - everything's handled seamlessly inside Defender for Cloud. Learn more in [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md).
+Defender for Servers includes [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender). Together, they provide comprehensive endpoint detection and response (EDR) capabilities. When you enable Defender for Servers, Defender for Cloud gains access to the Defender for Endpoint data that is related to vulnerabilities, installed software, and alerts for your endpoints.
-- **Just-in-time (JIT) virtual machine (VM) access** - Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.
+When Defender for Endpoint detects a threat, it triggers an alert. The alert is shown on Defender for Cloud's Recommendations page. From Defender for Cloud, you can also pivot to the Defender for Endpoint console and perform a detailed investigation to uncover the scope of the attack. Learn how to [Protect your endpoints](integration-defender-for-endpoint.md).
- When you enable Microsoft Defender for Servers, you can use just-in-time VM access to lock down the inbound traffic to your VMs. This reduces exposure to attacks and provides easy access to connect to VMs when needed. For more information, see [Understanding JIT VM access](just-in-time-access-overview.md).
+### Plan 2 only
-- **File integrity monitoring (FIM)** - File integrity monitoring (FIM), also known as change monitoring, examines files and registries of operating system, application software, and others for changes that might indicate an attack. A comparison method is used to determine if the current state of the file is different from the last scan of the file. You can use this comparison to determine if valid or suspicious modifications have been made to your files.
+#### Vulnerability scanner powered by Qualys
- When you enable Microsoft Defender for Servers, you can use FIM to validate the integrity of Windows files, your Windows registries, and Linux files. For more information, see [File integrity monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md).
-- **Adaptive application controls (AAC)** - Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines.
+The Qualys scanner is one of the leading tools for real-time identification of vulnerabilities in your Azure and hybrid virtual machines. You don't need a Qualys license or a Qualys account - everything's handled seamlessly inside Defender for Cloud. You can learn more about [Defender for Cloud's integrated Qualys scanner for Azure and hybrid machines](deploy-vulnerability-assessment-vm.md).
- After you enable and configure adaptive application controls, you get security alerts if any application runs other than the ones you defined as safe. For more information, see [Use adaptive application controls to reduce your machines' attack surfaces](adaptive-application-controls.md).
+#### Adaptive application controls (AAC)
-- **Adaptive network hardening (ANH)** - Applying network security groups (NSG) to filter traffic to and from resources, improves your network security posture. However, there can still be some cases in which the actual traffic flowing through the NSG is a subset of the NSG rules defined. In these cases, further improving the security posture can be achieved by hardening the NSG rules, based on the actual traffic patterns.
+Adaptive application controls are an intelligent and automated solution for defining allowlists of known-safe applications for your machines.
- Adaptive network hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise. ANH then provides recommendations to allow traffic only from specific IP and port tuples. For more information, see [Improve your network security posture with adaptive network hardening](adaptive-network-hardening.md).
+After you enable and configure adaptive application controls, you get security alerts if any application runs other than the ones you defined as safe. Learn how to [use adaptive application controls to reduce your machines' attack surfaces](adaptive-application-controls.md).
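The allowlist idea can be pictured with a short sketch. This is illustrative only: the paths are invented, and the real adaptive application controls feature learns the allowlist automatically and matches on richer attributes than file paths.

```python
# Illustrative sketch only: real adaptive application controls build the
# allowlist from observed behavior, not from a hand-written set of paths.
known_safe = {
    r"C:\Windows\System32\svchost.exe",
    r"C:\Program Files\App\service.exe",
}

def check_process(exe_path):
    """Return an alert string if the executable isn't allowlisted."""
    if exe_path not in known_safe:
        return f"Alert: unexpected application ran: {exe_path}"
    return None

# An allowlisted process passes silently; anything else raises an alert.
print(check_process(r"C:\Temp\miner.exe"))
```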
+#### File integrity monitoring (FIM)
-- **Docker host hardening** - Microsoft Defender for Cloud identifies unmanaged containers hosted on IaaS Linux VMs, or other Linux machines running Docker containers. Defender for Cloud continuously assesses the configurations of these containers. It then compares them with the Center for Internet Security (CIS) Docker Benchmark. Defender for Cloud includes the entire ruleset of the CIS Docker Benchmark and alerts you if your containers don't satisfy any of the controls. For more information, see [Harden your Docker hosts](harden-docker-hosts.md).
+File integrity monitoring (FIM), also known as change monitoring, examines files and registries of operating system, application software, and others for changes that might indicate an attack. A comparison method is used to determine if the current state of the file is different from the last scan of the file. You can use this comparison to determine if valid or suspicious modifications have been made to your files.
-- **Fileless attack detection** - Fileless attacks inject malicious payloads into memory to avoid detection by disk-based scanning techniques. The attackerΓÇÖs payload then persists within the memory of compromised processes and performs a wide range of malicious activities.
+When you enable Defender for Servers, you can use FIM to validate the integrity of Windows files, your Windows registries, and Linux files. Learn more about [File integrity monitoring in Microsoft Defender for Cloud](file-integrity-monitoring-overview.md).
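The comparison method described above amounts to keeping a baseline of file hashes and re-hashing on each scan. A minimal sketch of that idea (not the FIM implementation; the file name is invented):

```python
import hashlib
from pathlib import Path

def file_hash(path):
    """Hash a file's contents so changes are detectable between scans."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_files(baseline, files):
    """Compare the current state of each file against the last scan."""
    return [str(p) for p in files if baseline.get(str(p)) != file_hash(p)]

# Example: take a baseline, modify a file, then detect the change.
f = Path("demo.cfg")
f.write_text("setting=1")
baseline = {str(f): file_hash(f)}
f.write_text("setting=2")            # modification since the last scan
print(changed_files(baseline, [f]))  # the modified file is reported
f.unlink()
```

Whether a reported change is a valid update or a suspicious modification is then a triage decision, which is what the FIM results surface.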
- With fileless attack detection, automated memory forensic techniques identify fileless attack toolkits, techniques, and behaviors. This solution periodically scans your machine at runtime, and extracts insights directly from the memory of processes. Specific insights include the identification of:
+#### Just-in-time (JIT) virtual machine (VM) access
- - Well-known toolkits and crypto mining software
+Threat actors actively hunt accessible machines with open management ports, like RDP or SSH. All of your virtual machines are potential targets for an attack. When a VM is successfully compromised, it's used as the entry point to attack further resources within your environment.
- - Shellcode - a small piece of code typically used as the payload in the exploitation of a software vulnerability.
+When you enable Microsoft Defender for Servers, you can use just-in-time VM access to lock down the inbound traffic to your VMs. This reduces exposure to attacks and provides easy access to connect to VMs when needed. Learn more about [JIT VM access](just-in-time-access-overview.md).
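Conceptually, a JIT request opens a management port only for a limited, approved window; outside that window, inbound traffic is denied again. A rough sketch of that time-boxed check (illustrative only; the class and field names are invented):

```python
from datetime import datetime, timedelta, timezone

class JitRequest:
    """Illustrative model of a time-boxed port-access approval."""

    def __init__(self, port, hours):
        self.port = port
        self.expires = datetime.now(timezone.utc) + timedelta(hours=hours)

    def allows(self, port, at):
        """Inbound traffic is allowed only on the requested port,
        and only until the approved window expires."""
        return port == self.port and at < self.expires

# Approve SSH (port 22) for three hours.
req = JitRequest(port=22, hours=3)
now = datetime.now(timezone.utc)
print(req.allows(22, now))                       # within the window
print(req.allows(22, now + timedelta(hours=4)))  # window has expired
```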
- - Injected malicious executable in process memory
+#### Adaptive network hardening (ANH)
- Fileless attack detection generates detailed security alerts that include descriptions with process metadata such as network activity. These details accelerate alert triage, correlation, and downstream response time. This approach complements event-based EDR solutions, and provides increased detection coverage.
+Applying network security groups (NSGs) to filter traffic to and from resources improves your network security posture. However, there can still be cases in which the actual traffic flowing through an NSG is only a subset of what the defined NSG rules allow. In these cases, you can further improve your security posture by hardening the NSG rules based on the actual traffic patterns.
- For details of the fileless attack detection alerts, see the [Reference table of alerts](alerts-reference.md#alerts-windows).
+Adaptive network hardening provides recommendations to further harden the NSG rules. It uses a machine learning algorithm that factors in actual traffic, known trusted configuration, threat intelligence, and other indicators of compromise. ANH then provides recommendations to allow traffic only from specific IP and port tuples. Learn how to [improve your network security posture with adaptive network hardening](adaptive-network-hardening.md).
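The tuple-based recommendation idea can be sketched as follows. This is illustrative only: the addresses and the `min_hits` threshold are invented, and the real feature uses a machine learning algorithm with threat intelligence, not a simple frequency count.

```python
from collections import Counter

# Observed (source IP, destination port) tuples that actually flowed
# through a broad NSG rule such as "allow any source on 443".
observed = [
    ("203.0.113.5", 443), ("203.0.113.5", 443),
    ("198.51.100.7", 443), ("203.0.113.5", 22),
]

def recommend_rules(flows, min_hits=2):
    """Keep only tuples seen often enough to look like legitimate traffic;
    the result is a tighter allowlist than the original broad rule."""
    counts = Counter(flows)
    return sorted(t for t, n in counts.items() if n >= min_hits)

print(recommend_rules(observed))
```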
-- **Linux auditd alerts and Log Analytics agent integration (Linux only)** - The auditd system consists of a kernel-level subsystem, which is responsible for monitoring system calls. It filters them by a specified rule set, and writes messages for them to a socket. Defender for Cloud integrates functionalities from the auditd package within the Log Analytics agent. This integration enables collection of auditd events in all supported Linux distributions, without any prerequisites.
+#### Docker host hardening
- Log Analytics agent for Linux collects auditd records and enriches and aggregates them into events. Defender for Cloud continuously adds new analytics that use Linux signals to detect malicious behaviors on cloud and on-premises Linux machines. Similar to Windows capabilities, these analytics include tests that check for suspicious processes, dubious sign-in attempts, kernel module loading, and other activities. These activities can indicate a machine is either under attack or has been breached.
+Defender for Cloud identifies unmanaged containers hosted on IaaS Linux VMs, or on other Linux machines running Docker containers. Defender for Cloud continuously assesses the configurations of these containers. It then compares them with the Center for Internet Security (CIS) Docker Benchmark. Defender for Cloud includes the entire ruleset of the CIS Docker Benchmark and alerts you if your containers don't satisfy any of the controls. For more information, see [Harden your Docker hosts](harden-docker-hosts.md).
- For a list of the Linux alerts, see the [Reference table of alerts](alerts-reference.md#alerts-linux).
+#### Fileless attack detection
+
+Fileless attacks inject malicious payloads into memory to avoid detection by disk-based scanning techniques. The attacker's payload then persists within the memory of compromised processes and performs a wide range of malicious activities.
+
+With fileless attack detection, automated memory forensic techniques identify fileless attack toolkits, techniques, and behaviors. This solution periodically scans your machine at runtime, and extracts insights directly from the memory of processes. Specific insights include the identification of:
+
+- Well-known toolkits and crypto mining software
+
+- Shellcode - a small piece of code typically used as the payload in the exploitation of a software vulnerability.
+
+- Injected malicious executable in process memory
+
+Fileless attack detection generates detailed security alerts that include descriptions with process metadata such as network activity. These details accelerate alert triage, correlation, and downstream response time. This approach complements event-based EDR solutions, and provides increased detection coverage.
+
+For details of the fileless attack detection alerts, see the [Reference table of alerts](alerts-reference.md#alerts-windows).
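As a highly simplified picture of one of the techniques above, the sketch below scans a process-memory buffer for a known byte signature. This is illustrative only: the signature and buffer are invented, and real fileless attack detection relies on behavioral memory forensics, not just byte patterns.

```python
# Invented signature for illustration: a NOP sled followed by
# "xor eax, eax" (31 c0), a pattern typical of x86 shellcode stubs.
SIGNATURES = {
    "demo-shellcode": bytes.fromhex("90909031c0"),
}

def scan_memory(buffer):
    """Return the names of any known signatures found in the buffer."""
    return [name for name, sig in SIGNATURES.items() if sig in buffer]

# Simulated memory region with the signature embedded mid-buffer.
region = b"\x00" * 16 + bytes.fromhex("90909031c0") + b"\x00" * 16
print(scan_memory(region))
```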
+
+#### Linux auditd alerts and Log Analytics agent integration (Linux only)
+
+The auditd system consists of a kernel-level subsystem, which is responsible for monitoring system calls. It filters them by a specified rule set, and writes messages for them to a socket. Defender for Cloud integrates functionalities from the auditd package within the Log Analytics agent. This integration enables collection of auditd events in all supported Linux distributions, without any prerequisites.
+
+Log Analytics agent for Linux collects auditd records and enriches and aggregates them into events. Defender for Cloud continuously adds new analytics that use Linux signals to detect malicious behaviors on cloud and on-premises Linux machines. Similar to Windows capabilities, these analytics include tests that check for suspicious processes, dubious sign-in attempts, kernel module loading, and other activities. These activities can indicate a machine is either under attack or has been breached.
+
+For a list of the Linux alerts, see the [Reference table of alerts](alerts-reference.md#alerts-linux).
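The record-to-event aggregation described above can be sketched as follows. This is illustrative only; the record shapes are invented approximations of auditd output, which emits several records per event tied together by an event ID.

```python
from collections import defaultdict

# Invented, simplified auditd-style records for illustration.
records = [
    {"event_id": 101, "type": "SYSCALL", "syscall": "execve"},
    {"event_id": 101, "type": "EXECVE", "argv": "wget http://x/payload"},
    {"event_id": 102, "type": "SYSCALL", "syscall": "init_module"},
]

def aggregate(recs):
    """Group raw records into one event per event ID, as the agent
    does before the analytics run over enriched events."""
    events = defaultdict(list)
    for r in recs:
        events[r["event_id"]].append(r)
    return dict(events)

# A toy analytic: kernel module loading is one of the activities
# the article lists as worth flagging.
suspicious = {"init_module"}
for eid, recs in aggregate(records).items():
    if any(r.get("syscall") in suspicious for r in recs):
        print(f"event {eid}: suspicious kernel module load")
```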
## How does Defender for Servers collect data?
You can simulate alerts by downloading one of the following playbooks:
## Learn more
-You can check out the following blogs:
+To learn more about Defender for Servers, you can check out the following blogs:
- [Security posture management and server protection for AWS and GCP are now generally available](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/security-posture-management-and-server-protection-for-aws-and/ba-p/3271388)
- [Microsoft Defender for Cloud Server Monitoring Dashboard](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/microsoft-defender-for-cloud-server-monitoring-dashboard/ba-p/2869658)
+
+For related material, see the following page:
+
+- Whether Defender for Cloud generates an alert or receives an alert from a different security product, you can export alerts from Defender for Cloud. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
## Next steps

In this article, you learned about Microsoft Defender for Servers.

> [!div class="nextstepaction"]
> [Enable enhanced protections](enable-enhanced-security.md)
-
-For related material, see the following page:
--- Whether Defender for Cloud generates an alert or receives an alert from a different security product, you can export alerts from Defender for Cloud. To export your alerts to Microsoft Sentinel, any third-party SIEM, or any other external tool, follow the instructions in [Exporting alerts to a SIEM](continuous-export.md).
defender-for-cloud Deploy Vulnerability Assessment Tvm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-tvm.md
Title: Use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud
description: Enable, deploy, and use Microsoft Defender for Endpoint's threat and vulnerability management capabilities with Microsoft Defender for Cloud to discover weaknesses in your Azure and hybrid machines
Previously updated : 06/29/2022
Last updated : 07/13/2022

# Investigate weaknesses with Microsoft Defender for Endpoint's threat and vulnerability management
You can learn more by watching this video from the Defender for Cloud in the Field video series:
|-|:-|
|Release state:|General availability (GA)|
|Machine types:|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines <br> [Supported machines](/microsoft-365/security/defender-endpoint/tvm-supported-os)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#available-defender-for-server-plans)|
|Prerequisites:|Enable the [integration with Microsoft Defender for Endpoint](integration-defender-for-endpoint.md)|
|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Deploy Vulnerability Assessment Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/deploy-vulnerability-assessment-vm.md
Deploy the vulnerability assessment solution that best meets your needs and budget.
|-|:-|
|Release state:|General availability (GA)|
|Machine types (hybrid scenarios):|:::image type="icon" source="./media/icons/yes-icon.png"::: Azure virtual machines<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Arc-enabled machines|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#available-defender-for-server-plans)|
|Required roles and permissions:|[Owner](../role-based-access-control/built-in-roles.md#owner) (resource group level) can deploy the scanner<br>[Security Reader](../role-based-access-control/built-in-roles.md#security-reader) can view findings|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
defender-for-cloud File Integrity Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/file-integrity-monitoring-overview.md
Learn how to configure file integrity monitoring (FIM) in Microsoft Defender for Cloud.
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).<br>Using the Log Analytics agent, FIM uploads data to the Log Analytics workspace. Data charges apply, based on the amount of data you upload. See [Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/) to learn more.|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#available-defender-for-server-plans).<br>Using the Log Analytics agent, FIM uploads data to the Log Analytics workspace. Data charges apply, based on the amount of data you upload. See [Log Analytics pricing](https://azure.microsoft.com/pricing/details/log-analytics/) to learn more.|
|Required roles and permissions:|**Workspace owner** can enable/disable FIM (for more information, see [Azure Roles for Log Analytics](/services-hub/health/azure-roles#azure-roles)).<br>**Reader** can view results.|
|Clouds:|:::image type="icon" source="./medi).<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
defender-for-cloud Harden Docker Hosts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/harden-docker-hosts.md
When vulnerabilities are found, they're grouped inside a single recommendation.
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#available-defender-for-server-plans)|
|Required roles and permissions:|**Reader** on the workspace to which the host connects|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: National (Azure Government, Azure China 21Vianet)<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts|
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
With Microsoft Defender for Servers, you can deploy [Microsoft Defender for Endp
| Aspect | Details |
|-|:--|
| Release state: | General availability (GA) |
-| Pricing: | Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) |
+| Pricing: | Requires [Microsoft Defender for Servers Plan 1 or Plan 2](defender-for-servers-introduction.md#available-defender-for-server-plans) |
| Supported environments: | :::image type="icon" source="./medi) (formerly Windows Virtual Desktop), [Windows 10 Enterprise multi-session](../virtual-desktop/windows-10-multisession-faq.yml) (formerly Enterprise for Virtual Desktops)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure VMs running Windows 11 or Windows 10 (except if running Azure Virtual Desktop or Windows 10 Enterprise multi-session) |
| Required roles and permissions: | * To enable/disable the integration: **Security admin** or **Owner**<br>* To view Defender for Endpoint alerts in Defender for Cloud: **Security reader**, **Reader**, **Resource Group Contributor**, **Resource Group Owner**, **Security admin**, **Subscription owner**, or **Subscription Contributor** |
-| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government (Windows only)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts |
+| Clouds: | :::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/yes-icon.png"::: Azure Government (Windows only)<br>:::image type="icon" source="./media/icons/no-icon.png"::: Azure China 21Vianet <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected AWS accounts <br>:::image type="icon" source="./media/icons/yes-icon.png"::: Connected GCP projects |
## Benefits of integrating Microsoft Defender for Endpoint with Defender for Cloud
-[Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) protects your Windows and Linux machines whether they're hosted in Azure, hybrid clouds (on-premises), or AWS. Protections include:
+[Microsoft Defender for Endpoint Plan 2](/microsoft-365/security/defender-endpoint/microsoft-defender-endpoint) protects your Windows and Linux machines whether they're hosted in Azure, hybrid clouds (on-premises), or multicloud. Protections include:
- **Advanced post-breach detection sensors**. Defender for Endpoint's sensors collect a vast array of behavioral signals from your machines.
If you enabled the integration, but still don't see the extension running on you
Defender for Endpoint is included at no extra cost with **Microsoft Defender for Servers**. Alternatively, it can be purchased separately for 50 machines or more.

### If I already have a license for Microsoft Defender for Endpoint, can I get a discount for Microsoft Defender for Servers?
-If you already have a license for **Microsoft Defender for Endpoint for Servers** , you won't pay for that part of your [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) license. Learn more about [the Microsoft 365 license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
+If you already have a license for **Microsoft Defender for Endpoint for Servers** , you won't pay for that part of your [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers) license. Learn more about [the Microsoft 365 license](/microsoft-365/security/defender-endpoint/minimum-requirements#licensing-requirements).
To request your discount, [contact Defender for Cloud's support team](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/overview). You'll need to provide the relevant workspace ID, region, and number of Microsoft Defender for Endpoint for servers licenses applied for machines in the given workspace.
defender-for-cloud Just In Time Access Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/just-in-time-access-overview.md
If other rules already exist for the selected ports, then those existing rules t
In AWS, enabling JIT access revokes the relevant rules in the attached EC2 security groups for the selected ports, which blocks inbound traffic on those specific ports.
-When a user requests access to a VM, Defender for Cloud checks that the user has [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) permissions for that VM. If the request is approved, Defender for Cloud configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the relevant IP address (or range), for the amount of time that was specified. In AWS, Defender for Cloud creates a new EC2 security group that allow inbound traffic to the specified ports. After the time has expired, Defender for Cloud restores the NSGs to their previous states. Connections that are already established are not interrupted.
+When a user requests access to a VM, Defender for Cloud checks that the user has [Azure role-based access control (Azure RBAC)](../role-based-access-control/role-assignments-portal.md) permissions for that VM. If the request is approved, Defender for Cloud configures the NSGs and Azure Firewall to allow inbound traffic to the selected ports from the relevant IP address (or range), for the amount of time that was specified. In AWS, Defender for Cloud creates a new EC2 security group that allows inbound traffic to the specified ports. After the time has expired, Defender for Cloud restores the NSGs to their previous states. Connections that are already established are not interrupted.
> [!NOTE]
> JIT does not support VMs protected by Azure Firewalls controlled by [Azure Firewall Manager](../firewall-manager/overview.md). The Azure Firewall must be configured with Rules (Classic) and cannot use Firewall policies.
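The time-bound port configuration behind this flow can be sketched as a JIT network access policy. The shape below follows the `Microsoft.Security/jitNetworkAccessPolicies` REST resource; the subscription, resource group, and VM names are placeholders, and the port values are illustrative:

```json
{
  "kind": "Basic",
  "properties": {
    "virtualMachines": [
      {
        "id": "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>",
        "ports": [
          {
            "number": 22,
            "protocol": "TCP",
            "allowedSourceAddressPrefix": "*",
            "maxRequestAccessDuration": "PT3H"
          }
        ]
      }
    ]
  }
}
```

Here `maxRequestAccessDuration` (an ISO 8601 duration) caps how long an approved request keeps the port open before Defender for Cloud restores the previous NSG state.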
When Defender for Cloud finds a machine that can benefit from JIT, it adds that
### What permissions are needed to configure and use JIT?
-JIT Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans) to be enabled on the subscription.
+JIT requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers) to be enabled on the subscription.
**Reader** and **SecurityReader** roles can both view the JIT status and parameters.
defender-for-cloud Protect Network Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/protect-network-resources.md
This article addresses recommendations that apply to your Azure resources from a
The **Networking** features of Defender for Cloud include:
-- Network map (requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans))
-- [Adaptive network hardening](adaptive-network-hardening.md) (requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans))
+- Network map (requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers))
+- [Adaptive network hardening](adaptive-network-hardening.md) (requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers))
- Networking security recommendations

## View your networking resources and their recommendations
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Defender for Cloud will immediately start scanning your AWS resources and you'll
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers)|
|Required roles and permissions:|**Owner** on the relevant Azure subscription<br>**Contributor** can also connect an AWS account if an owner provides the service principal details|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Microsoft Defender for Containers brings threat detection, and advanced defenses
|Aspect|Details|
|-|:-|
|Release state:|General availability (GA)|
-|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans)|
+|Pricing:|Requires [Microsoft Defender for Servers Plan 2](defender-for-servers-introduction.md#plan-2-formerly-defender-for-servers)|
|Required roles and permissions:|**Owner** or **Contributor** on the relevant Azure Subscription|
|Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet)|
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
description: A description of what's new and changed in Microsoft Defender for C
Previously updated : 04/11/2022 Last updated : 07/13/2022

# Archive for what's new in Defender for Cloud?
This page provides you with information about:
- Bug fixes
- Deprecated functionality
+## January 2022
+
+Updates in January include:
+
+- [Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix](#microsoft-defender-for-resource-manager-updated-with-new-alerts-and-greater-emphasis-on-high-risk-operations-mapped-to-mitre-attck-matrix)
+- [Recommendations to enable Microsoft Defender plans on workspaces (in preview)](#recommendations-to-enable-microsoft-defender-plans-on-workspaces-in-preview)
+- [Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)](#auto-provision-log-analytics-agent-to-azure-arc-enabled-machines-preview)
+- [Deprecated the recommendation to classify sensitive data in SQL databases](#deprecated-the-recommendation-to-classify-sensitive-data-in-sql-databases)
+- [Communication with suspicious domain alert expanded to include known Log4Shell-related domains](#communication-with-suspicious-domain-alert-expanded-to-included-known-log4shell-related-domains)
+- ['Copy alert JSON' button added to security alert details pane](#copy-alert-json-button-added-to-security-alert-details-pane)
+- [Renamed two recommendations](#renamed-two-recommendations)
+- [Deprecate Kubernetes cluster containers should only listen on allowed ports policy](#deprecate-kubernetes-cluster-containers-should-only-listen-on-allowed-ports-policy)
+- [Added 'Active Alerts' workbook](#added-active-alert-workbook)
+- ['System update' recommendation added to government cloud](#system-update-recommendation-added-to-government-cloud)
+
+### Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix
+
+The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it's also a potential target for attackers. We recommend security operations teams closely monitor the resource management layer.
+
+Microsoft Defender for Resource Manager automatically monitors the resource management operations in your organization, whether they're performed through the Azure portal, Azure REST APIs, Azure CLI, or other Azure programmatic clients. Defender for Cloud runs advanced security analytics to detect threats and alerts you about suspicious activity.
+
+The plan's protections greatly enhance an organization's resiliency against attacks from threat actors and significantly increase the number of Azure resources protected by Defender for Cloud.
+
+In December 2020, we introduced the preview of Defender for Resource Manager, and in May 2021 the plan was released for general availability.
+
+With this update, we've comprehensively revised the focus of the Microsoft Defender for Resource Manager plan. The updated plan includes many **new alerts focused on identifying suspicious invocation of high-risk operations**. These new alerts provide extensive monitoring for attacks across the *complete* [MITRE ATT&CK® matrix for cloud-based techniques](https://attack.mitre.org/matrices/enterprise/cloud/).
+
+This matrix covers the following range of potential intentions of threat actors who may be targeting your organization's resources: *Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Exfiltration, and Impact*.
+
+The new alerts for this Defender plan cover these intentions as shown in the following table.
+
+> [!TIP]
+> These alerts also appear in the [alerts reference page](alerts-reference.md).
+
+| Alert (alert type) | Description | MITRE tactics (intentions)| Severity |
+|-|--|:-:|-|
+| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
+| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription, which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
+| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium |
+| **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Privilege Escalation | Medium |
+| **Suspicious invocation of a high-risk 'Defense Evasion' operation detected (Preview)**<br>(ARM_AnomalousOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Defense Evasion | Medium |
+| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium |
+| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise additional resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
+| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
+| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
+
+In addition, these two alerts from this plan have come out of preview:
+
+| Alert (alert type) | Description | MITRE tactics (intentions)| Severity |
+|-|--|:-:|-|
+| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
+| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
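As a small illustration of how the alert taxonomy above might be consumed downstream (for example, when routing alerts in a SOC pipeline), here's a lookup built directly from the two tables; the helper function is ours for illustration, not part of any Defender API:

```python
# Mapping of the Defender for Resource Manager alert types listed above
# to their MITRE ATT&CK tactics (data taken from the tables above).
ARM_ALERT_TACTICS = {
    "ARM_AnomalousOperation.InitialAccess": "Initial Access",
    "ARM_AnomalousOperation.Execution": "Execution",
    "ARM_AnomalousOperation.Persistence": "Persistence",
    "ARM_AnomalousOperation.PrivilegeEscalation": "Privilege Escalation",
    "ARM_AnomalousOperation.DefenseEvasion": "Defense Evasion",
    "ARM_AnomalousOperation.CredentialAccess": "Credential Access",
    "ARM_AnomalousOperation.LateralMovement": "Lateral Movement",
    "ARM_AnomalousOperation.Collection": "Collection",
    "ARM_AnomalousOperation.Impact": "Impact",
    "ARM_OperationFromSuspiciousIP": "Execution",
    "ARM_OperationFromSuspiciousProxyIP": "Defense Evasion",
}

def tactic_for(alert_type: str) -> str:
    """Return the MITRE tactic for a known alert type, or 'Unknown'."""
    return ARM_ALERT_TACTICS.get(alert_type, "Unknown")

print(tactic_for("ARM_AnomalousOperation.LateralMovement"))  # Lateral Movement
```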
+
+### Recommendations to enable Microsoft Defender plans on workspaces (in preview)
+
+To benefit from all of the security features available from [Microsoft Defender for Servers](defender-for-servers-introduction.md) and [Microsoft Defender for SQL on machines](defender-for-sql-introduction.md), the plans must be enabled on **both** the subscription and workspace levels.
+
+When a machine is in a subscription with one of these plans enabled, you'll be billed for the full protections. However, if that machine is reporting to a workspace *without* the plan enabled, you won't actually receive those benefits.
+
+We've added two recommendations that highlight workspaces that don't have these plans enabled but have machines reporting to them from subscriptions that *do* have the plan enabled.
+
+The two recommendations, which both offer automated remediation (the 'Fix' action), are:
+
+|Recommendation |Description |Severity |
+||||
+|[Microsoft Defender for Servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Overview of Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
+|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for SQL on machines brings threat detection and advanced defenses for your SQL Servers on machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for SQL on machines but missing out on some of the benefits.<br>When you enable Microsoft Defender for SQL on machines on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for SQL on machines - even if they're in subscriptions without Defender plans enabled. Unless you also enable the plan on the subscription, those machines won't be able to take full advantage of the plan's protections.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-sql-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Overview of Microsoft Defender for SQL</a>.<br />(No related policy) |Medium |
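The subscription/workspace interplay described above can be sketched as a tiny decision helper; this is a simplification for illustration, not Defender for Cloud's actual billing logic:

```python
# Sketch of the coverage rule described above: the full feature set
# requires the plan on BOTH the subscription and the workspace the
# machine reports to. Simplified for illustration.
def plan_coverage(enabled_on_subscription: bool, enabled_on_workspace: bool) -> str:
    if enabled_on_subscription and enabled_on_workspace:
        return "full protection"
    if enabled_on_subscription:
        return "billed at subscription level, missing workspace-based benefits"
    if enabled_on_workspace:
        return "machines reporting to the workspace are billed, missing subscription-level features"
    return "plan not enabled"

print(plan_coverage(True, False))
```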
+
+### Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)
+
+Defender for Cloud uses the Log Analytics agent to gather security-related data from machines. The agent reads various security-related configurations and event logs and copies the data to your workspace for analysis.
+
+Defender for Cloud's auto provisioning settings include a toggle for each type of supported extension, including the Log Analytics agent.
+
+In a further expansion of our hybrid cloud features, we've added an option to auto provision the Log Analytics agent to machines connected to Azure Arc.
+
+As with the other auto provisioning options, this is configured at the subscription level.
+
+When you enable this option, you'll be prompted for the workspace.
+
+> [!NOTE]
+> For this preview, you can't select the default workspaces that were created by Defender for Cloud. To ensure you receive the full set of security features available for the Azure Arc-enabled servers, verify that you have the relevant security solution installed on the selected workspace.
+
+### Deprecated the recommendation to classify sensitive data in SQL databases
+
+We've removed the recommendation **Sensitive data in your SQL databases should be classified** as part of an overhaul of how Defender for Cloud identifies and protects sensitive data in your cloud resources.
+
+Advance notice of this change appeared for the last six months in the [Important upcoming changes to Microsoft Defender for Cloud](upcoming-changes.md) page.
+
+### Communication with suspicious domain alert expanded to included known Log4Shell-related domains
+
+The following alert was previously only available to organizations who had enabled the [Microsoft Defender for DNS](defender-for-dns-introduction.md) plan.
+
+With this update, the alert will also show for subscriptions with the [Microsoft Defender for Servers](defender-for-servers-introduction.md) or [Defender for App Service](defender-for-app-service-introduction.md) plan enabled.
+
+In addition, [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) has expanded the list of known malicious domains to include domains associated with exploiting the widely publicized vulnerabilities associated with Log4j.
+
+| Alert (alert type) | Description | MITRE tactics | Severity |
+|-|-|:--:|-|
+| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access / Persistence / Execution / Command And Control / Exploitation | Medium |
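Conceptually, the detection is a comparison of queried domains against a feed of known-bad domains. A minimal sketch (the feed entries are invented; the real analysis works on DNS transaction telemetry and Microsoft Threat Intelligence feeds):

```python
# Minimal sketch of the check the alert describes: DNS queries compared
# against a threat-intelligence feed of known malicious domains.
# The feed entries below are made up for illustration.
MALICIOUS_DOMAINS = {"evil.example", "log4shell-callback.example"}

def is_suspicious(queried_domain: str) -> bool:
    # Normalize case and a trailing root dot before the lookup.
    return queried_domain.lower().rstrip(".") in MALICIOUS_DOMAINS

print(is_suspicious("Log4Shell-Callback.example."))  # True
```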
+
+### 'Copy alert JSON' button added to security alert details pane
+
+To help our users quickly share an alert's details with others (for example, SOC analysts, resource owners, and developers) we've added the capability to easily extract all the details of a specific alert with one button from the security alert's details pane.
+
+The new **Copy alert JSON** button puts the alert's details, in JSON format, into the user's clipboard.
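For example, the copied payload is simply the alert serialized as JSON; the field names in this sketch are illustrative rather than the exact alert schema:

```python
import json

# Hypothetical security alert record; the real pane serializes the full
# alert object, and these field names are illustrative.
alert = {
    "alertDisplayName": "Azure Resource Manager operation from suspicious IP address",
    "alertType": "ARM_OperationFromSuspiciousIP",
    "severity": "Medium",
    "mitreTactics": ["Execution"],
}

# What the 'Copy alert JSON' button effectively places on the clipboard:
alert_json = json.dumps(alert, indent=2)
print(alert_json)
```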
+
+### Renamed two recommendations
+
+For consistency with other recommendation names, we've renamed the following two recommendations:
+
+- Recommendation to resolve vulnerabilities discovered in running container images
+ - Previous name: Vulnerabilities in running container images should be remediated (powered by Qualys)
+ - New name: Running container images should have vulnerability findings resolved
+
+- Recommendation to enable diagnostic logs for Azure App Service
+ - Previous name: Diagnostic logs should be enabled in App Service
+ - New name: Diagnostic logs in App Service should be enabled
+
+### Deprecate Kubernetes cluster containers should only listen on allowed ports policy
+
+We've deprecated the **Kubernetes cluster containers should only listen on allowed ports** recommendation.
+
+| Policy name | Description | Effect(s) | Version |
+|--|--|--|--|
+| [Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) | Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../governance/policy/concepts/policy-for-kubernetes.md). | audit, deny, disabled | [6.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) |
+
+The **[Services should listen on allowed ports only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/add45209-73f6-4fa5-a5a5-74a451b07fbe)** recommendation should be used to limit ports that an application exposes to the internet.
+
+### Added 'Active Alert' workbook
+
+To help users understand the active threats to their environments and prioritize among active alerts during remediation, we've added the Active Alerts workbook.
+
+The active alerts workbook allows users to view a unified dashboard of their aggregated alerts by severity, type, tag, MITRE ATT&CK tactics, and location. Learn more in [Use the 'Active Alerts' workbook](custom-dashboards-azure-workbooks.md#use-the-active-alerts-workbook).
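The aggregation the workbook performs can be illustrated in a few lines; the sample alerts are invented, and the real workbook also groups by type, tag, MITRE ATT&CK tactics, and location:

```python
from collections import Counter

# Hypothetical active alerts; the workbook aggregates the real alert
# stream the same way, shown here for severity only.
alerts = [
    {"name": "Suspicious process executed", "severity": "High"},
    {"name": "Communication with suspicious domain identified by threat intelligence", "severity": "Medium"},
    {"name": "Azure Resource Manager operation from suspicious IP address", "severity": "Medium"},
]

by_severity = Counter(alert["severity"] for alert in alerts)
print(dict(by_severity))
```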
+
+### 'System update' recommendation added to government cloud
+
+The 'System updates should be installed on your machines' recommendation is now available on all government clouds.
+
+It's likely that this change will impact your government cloud subscription's secure score. We expect the change to lead to a decreased score, but it's possible the recommendation's inclusion might result in an increased score in some cases.
+
## December 2021

Updates in December include:
Learn more about [enhancing your custom recommendations with detailed informatio
### Crash dump analysis capabilities migrating to fileless attack detection
-We are integrating the Windows crash dump analysis (CDA) detection capabilities into [fileless attack detection](defender-for-servers-introduction.md#what-are-the-benefits-of-defender-for-servers). Fileless attack detection analytics brings improved versions of the following security alerts for Windows machines: Code injection discovered, Masquerading Windows Module Detected, Shell code discovered, and Suspicious code segment detected.
+We are integrating the Windows crash dump analysis (CDA) detection capabilities into [fileless attack detection](defender-for-servers-introduction.md#fileless-attack-detection). Fileless attack detection analytics brings improved versions of the following security alerts for Windows machines: Code injection discovered, Masquerading Windows Module Detected, Shell code discovered, and Suspicious code segment detected.
Some of the benefits of this transition:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
There are now connector-level settings for Defender for Servers in multicloud.
The new connector-level settings provide granularity for pricing and auto-provisioning configuration per connector, independently of the subscription.
-All auto-provisioning components available in the connector level (Azure Arc, MDE, and vulnerability assessments) are enabled by default, and the new configuration supports both [Plan 1 and Plan 2 pricing tiers](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
+All auto-provisioning components available in the connector level (Azure Arc, MDE, and vulnerability assessments) are enabled by default, and the new configuration supports both [Plan 1 and Plan 2 pricing tiers](defender-for-servers-introduction.md#available-defender-for-server-plans).
Updates in the UI include a reflection of the selected pricing tier and the required components configured.
Microsoft Defender for Servers is now offered in two incremental plans:
- Defender for Servers Plan 2, formerly Defender for Servers
- Defender for Servers Plan 1, provides support for Microsoft Defender for Endpoint only
-While Defender for Servers Plan 2 continues to provide protections from threats and vulnerabilities to your cloud and on-premises workloads, Defender for Servers Plan 1 provides endpoint protection only, powered by the natively integrated Defender for Endpoint. Read more about the [Defender for Servers plans](defender-for-servers-introduction.md#what-are-the-microsoft-defender-for-server-plans).
+While Defender for Servers Plan 2 continues to provide protections from threats and vulnerabilities to your cloud and on-premises workloads, Defender for Servers Plan 1 provides endpoint protection only, powered by the natively integrated Defender for Endpoint. Read more about the [Defender for Servers plans](defender-for-servers-introduction.md#available-defender-for-server-plans).
If you have been using Defender for Servers until now, no action is required.
Learn how to [enable your database security at the subscription level](quickstar
### Threat protection for Google Kubernetes Engine (GKE) clusters

Following our recent announcement [Native CSPM for GCP and threat protection for GCP compute instances](#native-cspm-for-gcp-and-threat-protection-for-gcp-compute-instances), Microsoft Defender for Containers has extended its Kubernetes threat protection, behavioral analytics, and built-in admission control policies to Google's Kubernetes Engine (GKE) Standard clusters. You can easily onboard any existing or new GKE Standard clusters to your environment through our automatic onboarding capabilities. Check out [Container security with Microsoft Defender for Cloud](defender-for-containers-introduction.md#vulnerability-assessment) for a full list of available features.
-
-## January 2022
-
-Updates in January include:
-
-- [Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix](#microsoft-defender-for-resource-manager-updated-with-new-alerts-and-greater-emphasis-on-high-risk-operations-mapped-to-mitre-attck-matrix)
-- [Recommendations to enable Microsoft Defender plans on workspaces (in preview)](#recommendations-to-enable-microsoft-defender-plans-on-workspaces-in-preview)
-- [Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)](#auto-provision-log-analytics-agent-to-azure-arc-enabled-machines-preview)
-- [Deprecated the recommendation to classify sensitive data in SQL databases](#deprecated-the-recommendation-to-classify-sensitive-data-in-sql-databases)
-- [Communication with suspicious domain alert expanded to included known Log4Shell-related domains](#communication-with-suspicious-domain-alert-expanded-to-included-known-log4shell-related-domains)
-- ['Copy alert JSON' button added to security alert details pane](#copy-alert-json-button-added-to-security-alert-details-pane)
-- [Renamed two recommendations](#renamed-two-recommendations)
-- [Deprecate Kubernetes cluster containers should only listen on allowed ports policy](#deprecate-kubernetes-cluster-containers-should-only-listen-on-allowed-ports-policy)
-- [Added 'Active Alerts' workbook](#added-active-alert-workbook)
-- ['System update' recommendation added to government cloud](#system-update-recommendation-added-to-government-cloud)
-
-### Microsoft Defender for Resource Manager updated with new alerts and greater emphasis on high-risk operations mapped to MITRE ATT&CK® Matrix
-
-The cloud management layer is a crucial service connected to all your cloud resources. Because of this, it's also a potential target for attackers. We recommend security operations teams closely monitor the resource management layer.
-
-Microsoft Defender for Resource Manager automatically monitors the resource management operations in your organization, whether they're performed through the Azure portal, Azure REST APIs, Azure CLI, or other Azure programmatic clients. Defender for Cloud runs advanced security analytics to detect threats and alerts you about suspicious activity.
-
-The plan's protections greatly enhance an organization's resiliency against attacks from threat actors and significantly increase the number of Azure resources protected by Defender for Cloud.
-
-In December 2020, we introduced the preview of Defender for Resource Manager, and in May 2021 the plan was released for general availability.
-
-With this update, we've comprehensively revised the focus of the Microsoft Defender for Resource Manager plan. The updated plan includes many **new alerts focused on identifying suspicious invocation of high-risk operations**. These new alerts provide extensive monitoring for attacks across the *complete* [MITRE ATT&CK® matrix for cloud-based techniques](https://attack.mitre.org/matrices/enterprise/cloud/).
-
-This matrix covers the following range of potential intentions of threat actors who may be targeting your organization's resources: *Initial Access, Execution, Persistence, Privilege Escalation, Defense Evasion, Credential Access, Discovery, Lateral Movement, Collection, Exfiltration, and Impact*.
-
-The new alerts for this Defender plan cover these intentions as shown in the following table.
-
-> [!TIP]
-> These alerts also appear in the [alerts reference page](alerts-reference.md).
-
-| Alert (alert type) | Description | MITRE tactics (intentions)| Severity |
-|-|--|:-:|-|
-| **Suspicious invocation of a high-risk 'Initial Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.InitialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to access restricted resources. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to gain initial access to restricted resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Initial Access | Medium |
-| **Suspicious invocation of a high-risk 'Execution' operation detected (Preview)**<br>(ARM_AnomalousOperation.Execution) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation on a machine in your subscription, which might indicate an attempt to execute code. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Execution | Medium |
-| **Suspicious invocation of a high-risk 'Persistence' operation detected (Preview)**<br>(ARM_AnomalousOperation.Persistence) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to establish persistence. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to establish persistence in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Persistence | Medium |
-| **Suspicious invocation of a high-risk 'Privilege Escalation' operation detected (Preview)**<br>(ARM_AnomalousOperation.PrivilegeEscalation) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to escalate privileges. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to escalate privileges while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Privilege Escalation | Medium |
-| **Suspicious invocation of a high-risk 'Defense Evasion' operation detected (Preview)**<br>(ARM_AnomalousOperation.DefenseEvasion) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to evade defenses. The identified operations are designed to allow administrators to efficiently manage the security posture of their environments. While this activity may be legitimate, a threat actor might utilize such operations to avoid being detected while compromising resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Defense Evasion | Medium |
-| **Suspicious invocation of a high-risk 'Credential Access' operation detected (Preview)**<br>(ARM_AnomalousOperation.CredentialAccess) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to access credentials. The identified operations are designed to allow administrators to efficiently access their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Credential Access | Medium |
-| **Suspicious invocation of a high-risk 'Lateral Movement' operation detected (Preview)**<br>(ARM_AnomalousOperation.LateralMovement) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to perform lateral movement. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to compromise additional resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Lateral Movement | Medium |
-| **Suspicious invocation of a high-risk 'Data Collection' operation detected (Preview)**<br>(ARM_AnomalousOperation.Collection) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempt to collect data. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to collect sensitive data on resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Collection | Medium |
-| **Suspicious invocation of a high-risk 'Impact' operation detected (Preview)**<br>(ARM_AnomalousOperation.Impact) | Microsoft Defender for Resource Manager identified a suspicious invocation of a high-risk operation in your subscription, which might indicate an attempted configuration change. The identified operations are designed to allow administrators to efficiently manage their environments. While this activity may be legitimate, a threat actor might utilize such operations to access restricted credentials and compromise resources in your environment. This can indicate that the account is compromised and is being used with malicious intent. | Impact | Medium |
-
-In addition, these two alerts from this plan have come out of preview:
-
-| Alert (alert type) | Description | MITRE tactics (intentions)| Severity |
-|-|--|:-:|-|
-| **Azure Resource Manager operation from suspicious IP address**<br>(ARM_OperationFromSuspiciousIP) | Microsoft Defender for Resource Manager detected an operation from an IP address that has been marked as suspicious in threat intelligence feeds. | Execution | Medium |
-| **Azure Resource Manager operation from suspicious proxy IP address**<br>(ARM_OperationFromSuspiciousProxyIP) | Microsoft Defender for Resource Manager detected a resource management operation from an IP address that is associated with proxy services, such as TOR. While this behavior can be legitimate, it's often seen in malicious activities, when threat actors try to hide their source IP. | Defense Evasion | Medium |
-
-### Recommendations to enable Microsoft Defender plans on workspaces (in preview)
-
-To benefit from all of the security features available from [Microsoft Defender for Servers](defender-for-servers-introduction.md) and [Microsoft Defender for SQL on machines](defender-for-sql-introduction.md), the plans must be enabled on **both** the subscription and workspace levels.
-
-When a machine is in a subscription with one of these plans enabled, you'll be billed for the full protections. However, if that machine is reporting to a workspace *without* the plan enabled, you won't actually receive those benefits.
-
-We've added two recommendations that highlight workspaces that don't have these plans enabled but that have machines reporting to them from subscriptions that *do* have the plan enabled.
-
-The two recommendations, which both offer automated remediation (the 'Fix' action), are:
-
-|Recommendation |Description |Severity |
-||||
-|[Microsoft Defender for Servers should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/1ce68079-b783-4404-b341-d2851d6f0fa2) |Microsoft Defender for Servers brings threat detection and advanced defenses for your Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for Servers but missing out on some of the benefits.<br>When you enable Microsoft Defender for Servers on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for Servers - even if they're in subscriptions without Defender plans enabled. Unless you also enable Microsoft Defender for Servers on the subscription, those machines won't be able to take advantage of just-in-time VM access, adaptive application controls, and network detections for Azure resources.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-servers-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Overview of Microsoft Defender for Servers</a>.<br />(No related policy) |Medium |
-|[Microsoft Defender for SQL on machines should be enabled on workspaces](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/e9c320f1-03a0-4d2b-9a37-84b3bdc2e281) |Microsoft Defender for SQL on machines brings threat detection and advanced defenses to your SQL servers on Windows and Linux machines.<br>With this Defender plan enabled on your subscriptions but not on your workspaces, you're paying for the full capability of Microsoft Defender for SQL on machines but missing out on some of the benefits.<br>When you enable Microsoft Defender for SQL on machines on a workspace, all machines reporting to that workspace will be billed for Microsoft Defender for SQL on machines - even if they're in subscriptions without Defender plans enabled. Unless you also enable the plan on the subscription, those machines won't receive the full set of protections.<br>Learn more in <a target="_blank" href="/azure/defender-for-cloud/defender-for-sql-introduction?wt.mc_id=defenderforcloud_inproduct_portal_recoremediation">Overview of Microsoft Defender for SQL</a>.<br />(No related policy) |Medium |
-
-### Auto provision Log Analytics agent to Azure Arc-enabled machines (preview)
-
-Defender for Cloud uses the Log Analytics agent to gather security-related data from machines. The agent reads various security-related configurations and event logs and copies the data to your workspace for analysis.
-
-Defender for Cloud's auto provisioning settings have a toggle for each type of supported extension, including the Log Analytics agent.
-
-In a further expansion of our hybrid cloud features, we've added an option to auto provision the Log Analytics agent to machines connected to Azure Arc.
-
-As with the other auto provisioning options, this is configured at the subscription level.
-
-When you enable this option, you'll be prompted for the workspace.
-
-> [!NOTE]
-> For this preview, you can't select the default workspace that was created by Defender for Cloud. To ensure you receive the full set of security features available for the Azure Arc-enabled servers, verify that you have the relevant security solution installed on the selected workspace.
--
-### Deprecated the recommendation to classify sensitive data in SQL databases
-
-We've removed the recommendation **Sensitive data in your SQL databases should be classified** as part of an overhaul of how Defender for Cloud identifies and protects sensitive data in your cloud resources.
-
-Advance notice of this change appeared for the last six months in the [Important upcoming changes to Microsoft Defender for Cloud](upcoming-changes.md) page.
-
-### Communication with suspicious domain alert expanded to included known Log4Shell-related domains
-
-The following alert was previously only available to organizations that had enabled the [Microsoft Defender for DNS](defender-for-dns-introduction.md) plan.
-
-With this update, the alert will also show for subscriptions with the [Microsoft Defender for Servers](defender-for-servers-introduction.md) or [Defender for App Service](defender-for-app-service-introduction.md) plan enabled.
-
-In addition, [Microsoft Threat Intelligence](https://go.microsoft.com/fwlink/?linkid=2128684) has expanded the list of known malicious domains to include domains associated with exploiting the widely publicized vulnerabilities associated with Log4j.
-
-| Alert (alert type) | Description | MITRE tactics | Severity |
-|-|-|:--:|-|
-| **Communication with suspicious domain identified by threat intelligence**<br>(AzureDNS_ThreatIntelSuspectDomain) | Communication with suspicious domain was detected by analyzing DNS transactions from your resource and comparing against known malicious domains identified by threat intelligence feeds. Communication to malicious domains is frequently performed by attackers and could imply that your resource is compromised. | Initial Access / Persistence / Execution / Command And Control / Exploitation | Medium |
-
-### 'Copy alert JSON' button added to security alert details pane
-
-To help our users quickly share an alert's details with others (for example, SOC analysts, resource owners, and developers), we've added the capability to extract all the details of a specific alert with one button from the security alert's details pane.
-
-The new **Copy alert JSON** button puts the alert's details, in JSON format, into the user's clipboard.
--
-### Renamed two recommendations
-
-For consistency with other recommendation names, we've renamed the following two recommendations:
-
-- Recommendation to resolve vulnerabilities discovered in running container images
- - Previous name: Vulnerabilities in running container images should be remediated (powered by Qualys)
- - New name: Running container images should have vulnerability findings resolved
-
-- Recommendation to enable diagnostic logs for Azure App Service
- - Previous name: Diagnostic logs should be enabled in App Service
- - New name: Diagnostic logs in App Service should be enabled
-
-### Deprecate Kubernetes cluster containers should only listen on allowed ports policy
-
-We've deprecated the **Kubernetes cluster containers should only listen on allowed ports** recommendation.
-
-| Policy name | Description | Effect(s) | Version |
-|--|--|--|--|
-| [Kubernetes cluster containers should only listen on allowed ports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F440b515e-a580-421e-abeb-b159a61ddcbc) | Restrict containers to listen only on allowed ports to secure access to the Kubernetes cluster. This policy is generally available for Kubernetes Service (AKS), and preview for AKS Engine and Azure Arc enabled Kubernetes. For more information, see [https://aka.ms/kubepolicydoc](../governance/policy/concepts/policy-for-kubernetes.md). | audit, deny, disabled | [6.1.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ContainerAllowedPorts.json) |
-
-The **[Services should listen on allowed ports only](https://portal.azure.com/#blade/Microsoft_Azure_Security/RecommendationsBlade/assessmentKey/add45209-73f6-4fa5-a5a5-74a451b07fbe)** recommendation should be used to limit ports that an application exposes to the internet.
-
-### Added 'Active Alert' workbook
-
-To help our users understand the active threats to their environments and prioritize among active alerts during the remediation process, we've added the Active Alerts workbook.
--
-The active alerts workbook allows users to view a unified dashboard of their aggregated alerts by severity, type, tag, MITRE ATT&CK tactics, and location. Learn more in [Use the 'Active Alerts' workbook](custom-dashboards-azure-workbooks.md#use-the-active-alerts-workbook).
-
-### 'System update' recommendation added to government cloud
-
-The 'System updates should be installed on your machines' recommendation is now available on all government clouds.
-
-It's likely that this change will impact your government cloud subscription's secure score. We expect the change to lead to a decreased score, but it's possible the recommendation's inclusion might result in an increased score in some cases.
event-grid Event Schema Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-policy.md
Title: Azure Policy as an Event Grid source
-description: This article describes how to use Azure Policy as an Event Grid event source. It provides the schema and links to tutorial and how-to articles.
+description: This article describes how to use Azure Policy as an Event Grid event source. It provides the schema and links to tutorial and how-to articles.
-- Previously updated : 09/15/2021
++ Last updated : 07/12/2022

# Azure Policy as an Event Grid source
events. For an introduction to event schemas, see
[Azure Event Grid event schema](./event-schema.md). It also gives you a list of quick starts and tutorials to use Azure Policy as an event source.
-## Available event types
-
-Azure Policy emits the following event types:
-
-| Event type | Description |
-| - | -- |
-| Microsoft.PolicyInsights.PolicyStateCreated | Raised when a policy compliance state is created. |
-| Microsoft.PolicyInsights.PolicyStateChanged | Raised when a policy compliance state is changed. |
-| Microsoft.PolicyInsights.PolicyStateDeleted | Raised when a policy compliance state is deleted. |
-
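The event types above can be used to narrow an Event Grid subscription so a handler only receives the compliance states it cares about. The fragment below is an illustrative sketch of an event subscription body: the `filter.includedEventTypes` property is standard Event Grid, but the webhook URL is a placeholder, not from this article.

```json
{
  "destination": {
    "endpointType": "WebHook",
    "properties": { "endpointUrl": "https://example.com/api/policy-events" }
  },
  "filter": {
    "includedEventTypes": [
      "Microsoft.PolicyInsights.PolicyStateCreated",
      "Microsoft.PolicyInsights.PolicyStateChanged"
    ]
  }
}
```

With this filter in place, `PolicyStateDeleted` events are never delivered to the endpoint.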
-## Example event
-
-# [Event Grid event schema](#tab/event-grid-event-schema)
-The following example shows the schema of a policy state created event:
-
-```json
-[{
- "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
- "topic": "/subscriptions/<SubscriptionID>",
- "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
- "data": {
- "timestamp": "2021-03-27T18:37:42.4496956Z",
- "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
- "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
- "policyDefinitionReferenceId": "",
- "complianceState": "NonCompliant",
- "subscriptionId": "<subscription-id>",
- "complianceReasonCode": ""
- },
- "eventType": "Microsoft.PolicyInsights.PolicyStateCreated",
- "eventTime": "2021-03-27T18:37:42.5241536Z",
- "dataVersion": "1",
- "metadataVersion": "1"
-}]
-```
-
-The schema for a policy state changed event is similar:
-
-```json
-[{
- "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
- "topic": "/subscriptions/<SubscriptionID>",
- "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
- "data": {
- "timestamp": "2021-03-27T18:37:42.4496956Z",
- "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
- "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
- "policyDefinitionReferenceId": "",
- "complianceState": "NonCompliant",
- "subscriptionId": "<subscription-id>",
- "complianceReasonCode": ""
- },
- "eventType": "Microsoft.PolicyInsights.PolicyStateChanged",
- "eventTime": "2021-03-27T18:37:42.5241536Z",
- "dataVersion": "1",
- "metadataVersion": "1"
-}]
-```
-# [Cloud event schema](#tab/cloud-event-schema)
-
-The following example shows the schema of a policy state created event:
-
-```json
-[{
- "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
- "source": "/subscriptions/<SubscriptionID>",
- "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
- "data": {
- "timestamp": "2021-03-27T18:37:42.4496956Z",
- "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
- "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
- "policyDefinitionReferenceId": "",
- "complianceState": "NonCompliant",
- "subscriptionId": "<subscription-id>",
- "complianceReasonCode": ""
- },
- "type": "Microsoft.PolicyInsights.PolicyStateCreated",
- "time": "2021-03-27T18:37:42.5241536Z",
- "specversion": "1.0"
-}]
-```
-
-The schema for a policy state changed event is similar:
-
-```json
-[{
- "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
- "source": "/subscriptions/<SubscriptionID>",
- "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
- "data": {
- "timestamp": "2021-03-27T18:37:42.4496956Z",
- "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
- "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
- "policyDefinitionReferenceId": "",
- "complianceState": "NonCompliant",
- "subscriptionId": "<subscription-id>",
- "complianceReasonCode": ""
- },
- "type": "Microsoft.PolicyInsights.PolicyStateChanged",
- "time": "2021-03-27T18:37:42.5241536Z",
- "specversion": "1.0"
-}]
-```
---
-## Event properties
-
-# [Event Grid event schema](#tab/event-grid-event-schema)
-
-An event has the following top-level data:
-
-| Property | Type | Description |
-| -- | - | -- |
-| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
-| `subject` | string | The fully qualified ID of the resource that the compliance state change is for, including the resource name and resource type. Uses the format, `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>` |
-| `eventType` | string | One of the registered event types for this event source. |
-| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
-| `id` | string | Unique identifier for the event. |
-| `data` | object | Azure Policy event data. |
-| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
-| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
-
-# [Cloud event schema](#tab/cloud-event-schema)
-
-An event has the following top-level data:
-
-| Property | Type | Description |
-| -- | - | -- |
-| `source` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
-| `subject` | string | The fully qualified ID of the resource that the compliance state change is for, including the resource name and resource type. Uses the format, `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>` |
-| `type` | string | One of the registered event types for this event source. |
-| `time` | string | The time the event is generated based on the provider's UTC time. |
-| `id` | string | Unique identifier for the event. |
-| `data` | object | Azure Policy event data. |
-| `specversion` | string | CloudEvents schema specification version. |
---
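Because the two tabs above differ only in a few top-level property names (`topic`/`source`, `eventType`/`type`, `eventTime`/`time`), a handler can normalize either shape before processing. The helper below is a minimal, hypothetical sketch, not an official SDK pattern:

```python
def normalize(event: dict) -> dict:
    """Map an Event Grid schema or CloudEvents 1.0 event to common field names."""
    # CloudEvents 1.0 payloads always carry a "specversion" field.
    cloud = "specversion" in event
    return {
        "source": event["source"] if cloud else event["topic"],
        "subject": event["subject"],
        "type": event["type"] if cloud else event["eventType"],
        "time": event["time"] if cloud else event["eventTime"],
        "data": event["data"],
    }
```

After normalizing, the rest of a handler can read `type` and `data` without caring which schema the event subscription was created with.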
-The data object has the following properties:
-
-| Property | Type | Description |
-| -- | - | -- |
-| `timestamp` | string | The time (in UTC) that the resource was scanned by Azure Policy. For ordering events, use this property instead of the top-level `eventTime` or `time` properties. |
-| `policyAssignmentId` | string | The resource ID of the policy assignment. |
-| `policyDefinitionId` | string | The resource ID of the policy definition. |
-| `policyDefinitionReferenceId` | string | The reference ID for the policy definition inside the initiative definition, if the policy assignment is for an initiative. May be empty. |
-| `complianceState` | string | The compliance state of the resource with respect to the policy assignment. |
-| `subscriptionId` | string | The subscription ID of the resource. |
-| `complianceReasonCode` | string | The compliance reason code. May be empty. |
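The ordering guidance in the `timestamp` row can be made concrete with a short sketch. The events below are hypothetical and trimmed to the relevant fields; only the use of `data.timestamp` for ordering reflects the table above.

```python
# Policy state events can be delivered out of order; sort by the scan time in
# data.timestamp rather than the top-level delivery time (eventTime/time).
events = [
    {"eventTime": "2021-03-27T18:40:00Z",
     "data": {"timestamp": "2021-03-27T18:37:42Z", "complianceState": "NonCompliant"}},
    {"eventTime": "2021-03-27T18:39:00Z",
     "data": {"timestamp": "2021-03-27T18:38:10Z", "complianceState": "Compliant"}},
]

# ISO 8601 UTC timestamps of equal precision sort correctly as plain strings.
ordered = sorted(events, key=lambda e: e["data"]["timestamp"])

# The last element reflects the most recent scan of the resource.
print(ordered[-1]["data"]["complianceState"])  # Compliant
```

Sorting by `eventTime` instead would have reported the stale `NonCompliant` state as the latest.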
## Next steps
expressroute Expressroute Howto Routing Portal Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-howto-routing-portal-resource-manager.md
Title: 'Tutorial: Configure peering for ExpressRoute circuit - Azure portal'
description: This tutorial shows you how to create and provision ExpressRoute private and Microsoft peering using the Azure portal.
- Previously updated : 01/11/2021
+ Last updated : 07/13/2022

# Tutorial: Create and modify peering for an ExpressRoute circuit using the Azure portal
In this tutorial, you learn how to:
* [Routing requirements](expressroute-routing.md) * [Workflows](expressroute-workflows.md) * You must have an active ExpressRoute circuit. Follow the instructions to [Create an ExpressRoute circuit](expressroute-howto-circuit-portal-resource-manager.md) and have the circuit enabled by your connectivity provider before you continue. To configure peering(s), the ExpressRoute circuit must be in a provisioned and enabled state.
-* If you plan to use a shared key/MD5 hash, be sure to use the key on both sides of the tunnel. The limit is a maximum of 25 alphanumeric characters. Special characters are not supported.
+* If you plan to use a shared key/MD5 hash, be sure to use the key on both sides of the tunnel. The limit is a maximum of 25 alphanumeric characters. Special characters aren't supported.
These instructions only apply to circuits created with service providers offering Layer 2 connectivity services. If you're using a service provider that offers managed Layer 3 services (typically an IPVPN, like MPLS), your connectivity provider configures and manages the routing for you.
This section helps you create, get, update, and delete the Microsoft peering con
**Circuit - Provider status: Not provisioned**
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png" alt-text="Screenshot that shows the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status set to Not provisioned":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status set to Not provisioned.":::
**Circuit - Provider status: Provisioned**
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/provisioned.png" alt-text="Screenshot that shows the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status set to Provisioned":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status set to Provisioned.":::
2. Configure Microsoft peering for the circuit. Make sure that you have the following information before you continue.
- * A pair of subnets owned by you and registered in an RIR/IRR. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you will assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
+ * A pair of subnets owned by you and registered in an RIR/IRR. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you'll assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
* IPv4: Two /30 subnets. These must be valid public IPv4 prefixes. * IPv6: Two /126 subnets. These must be valid public IPv6 prefixes. * Both: Two /30 subnets and two /126 subnets.
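To illustrate the addressing rule above (using an example documentation prefix, not a value from the article): a /30 contains four addresses, of which the middle two are usable; you take the first usable address and Microsoft takes the second:

```shell
# Example /30: 203.0.113.0/30
# .0 = network address, .1 and .2 = usable hosts, .3 = broadcast address
prefix="203.0.113"
echo "${prefix}.1  -> your router (first usable address)"
echo "${prefix}.2  -> Microsoft router (second usable address)"
```

The same first-usable/second-usable convention applies to the /126 IPv6 subnets.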
* **Optional -** An MD5 hash if you choose to use one. 1. You can select the peering you wish to configure, as shown in the following example. Select the Microsoft peering row.
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/select-microsoft-peering.png" alt-text="Select the Microsoft peering row":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/select-microsoft-peering.png" alt-text="Screenshot showing how to select the Microsoft peering row.":::
4. Configure Microsoft peering. **Save** the configuration once you've specified all parameters. The following image shows an example configuration:
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/configuration-m-validation-needed.png" alt-text="Configure Microsoft peering validation needed":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/configuration-m-validation-needed.png" alt-text="Screenshot showing Microsoft peering configuration.":::
> [!IMPORTANT] > Microsoft verifies whether the specified 'Advertised public prefixes' and 'Peer ASN' (or 'Customer ASN') are assigned to you in the Internet Routing Registry. If you get the public prefixes from another entity and the assignment isn't recorded with the routing registry, the automatic validation won't complete and manual validation is required. If the automatic validation fails, you'll see the message 'Validation needed'.
If your circuit gets to a **Validation needed** state, you must open a support ticket to show proof of ownership of the prefixes to our support team. You can open a support ticket directly from the portal, as shown in the following example:
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/ticket-portal-m.png" alt-text="Validation Needed - support ticket":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/ticket-portal-m.png" alt-text="Screenshot showing new support ticket request to submit proof of ownership for public prefixes.":::
### <a name="getmsft"></a>To view Microsoft peering details You can view the properties of Microsoft peering by selecting the row for the peering. ### <a name="updatemsft"></a>To update Microsoft peering configuration You can select the row for the peering that you want to modify, then modify the peering properties and save your modifications. ## <a name="private"></a>Azure private peering
This section helps you create, get, update, and delete the Azure private peering
**Circuit - Provider status: Not provisioned**
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status which is set to Not provisioned":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/not-provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status that is set to Not provisioned.":::
**Circuit - Provider status: Provisioned**
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status which is set to Provisioned":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/provisioned.png" alt-text="Screenshot showing the Overview page for the ExpressRoute Demo Circuit with a red box highlighting the Provider status that is set to Provisioned.":::
2. Configure Azure private peering for the circuit. Make sure that you have the following items before you continue with the next steps:
- * A pair of subnets that are not part of any address space reserved for virtual networks. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you will assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
+ * A pair of subnets that aren't part of any address space reserved for virtual networks. One subnet will be used for the primary link, while the other will be used for the secondary link. From each of these subnets, you'll assign the first usable IP address to your router as Microsoft uses the second usable IP for its router. You have three options for this pair of subnets:
* IPv4: Two /30 subnets. * IPv6: Two /126 subnets. * Both: Two /30 subnets and two /126 subnets.
* **Optional -** An MD5 hash if you choose to use one. 3. Select the Azure private peering row, as shown in the following example:
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/select-private-peering.png" alt-text="Select the private peering row":::
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/select-private-peering.png" alt-text="Screenshot showing how to select the private peering row.":::
4. Configure private peering. **Save** the configuration once you've specified all parameters.
- :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/private-peering-configuration.png" alt-text="Configure private peering":::
-
+ :::image type="content" source="./media/expressroute-howto-routing-portal-resource-manager/private-peering-configuration.png" alt-text="Screenshot showing private peering configuration.":::
### <a name="getprivate"></a>To view Azure private peering details You can view the properties of Azure private peering by selecting the peering. ### <a name="updateprivate"></a>To update Azure private peering configuration You can select the row for peering and modify the peering properties. After updating, save your changes. ## Clean up resources
You can select the row for peering and modify the peering properties. After upda
You can remove your Microsoft peering configuration by right-clicking the peering and selecting **Delete** as shown in the following image: ### <a name="deleteprivate"></a>To delete Azure private peering
You can remove your private peering configuration by right-clicking the peering
> You must ensure that all virtual networks and ExpressRoute Global Reach connections are removed before running this operation. > ## Next steps
governance Event Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/event-overview.md
Title: Reacting to Azure Policy state change events
-description: Use Azure Event Grid to subscribe to App Policy events, which allow applications to react to state changes without the need for complicated code.
Previously updated : 08/17/2021
+description: Use Azure Event Grid to subscribe to Azure Policy events, which allow applications to react to state changes without the need for complicated code.
Last updated : 07/12/2022 ++ # Reacting to Azure Policy state change events
for a full tutorial.
:::image type="content" source="../../../event-grid/media/overview/functional-model.png" alt-text="Event Grid model of sources and handlers" lightbox="../../../event-grid/media/overview/functional-model-big.png":::
-## Available Azure Policy events
-
-Event Grid uses [event subscriptions](../../../event-grid/concepts.md#event-subscriptions) to route
-event messages to subscribers. Azure Policy event subscriptions can include three types of events:
-
-| Event type | Description |
-| - | -- |
-| Microsoft.PolicyInsights.PolicyStateCreated | Raised when a policy compliance state is created. |
-| Microsoft.PolicyInsights.PolicyStateChanged | Raised when a policy compliance state is changed. |
-| Microsoft.PolicyInsights.PolicyStateDeleted | Raised when a policy compliance state is deleted. |
-
-## Event schema
-
-Azure Policy events contain all the information you need to respond to changes in your data. You can
-identify an Azure Policy event when the `eventType` property starts with "Microsoft.PolicyInsights".
-Additional information about the usage of Event Grid event properties is documented in
-[Event Grid event schema](../../../event-grid/event-schema.md).
-
-| Property | Type | Description |
-| -- | - | -- |
-| `id` | string | Unique identifier for the event. |
-| `topic` | string | Full resource path to the event source. This field isn't writeable. Event Grid provides this value. |
-| `subject` | string | The fully qualified ID of the resource that the compliance state change is for, including the resource name and resource type. Uses the format, `/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>` |
-| `data` | object | Azure Policy event data. |
-| `data.timestamp` | string | The time (in UTC) that the resource was scanned by Azure Policy. For ordering events, use this property instead of the top level `eventTime` or `time` properties. |
-| `data.policyAssignmentId` | string | The resource ID of the policy assignment. |
-| `data.policyDefinitionId` | string | The resource ID of the policy definition. |
-| `data.policyDefinitionReferenceId` | string | The reference ID for the policy definition inside the initiative definition, if the policy assignment is for an initiative. May be empty. |
-| `data.complianceState` | string | The compliance state of the resource with respect to the policy assignment. |
-| `data.subscriptionId` | string | The subscription ID of the resource. |
-| `data.complianceReasonCode` | string | The compliance reason code. May be empty. |
-| `eventType` | string | One of the registered event types for this event source. |
-| `eventTime` | string | The time the event is generated based on the provider's UTC time. |
-| `dataVersion` | string | The schema version of the data object. The publisher defines the schema version. |
-| `metadataVersion` | string | The schema version of the event metadata. Event Grid defines the schema of the top-level properties. Event Grid provides this value. |
-
-Here's an example of a policy state change event:
-
-```json
-[{
- "id": "5829794FCB5075FCF585476619577B5A5A30E52C84842CBD4E2AD73996714C4C",
- "topic": "/subscriptions/<SubscriptionID>",
- "subject": "/subscriptions/<SubscriptionID>/resourceGroups/<ResourceGroup>/providers/<ProviderNamespace>/<ResourceType>/<ResourceName>",
- "data": {
- "timestamp": "2021-03-27T18:37:42.4496956Z",
- "policyAssignmentId": "<policy-assignment-scope>/providers/microsoft.authorization/policyassignments/<policy-assignment-name>",
- "policyDefinitionId": "<policy-definition-scope>/providers/microsoft.authorization/policydefinitions/<policy-definition-name>",
- "policyDefinitionReferenceId": "",
- "complianceState": "NonCompliant",
- "subscriptionId": "<subscription-id>",
- "complianceReasonCode": ""
- },
- "eventType": "Microsoft.PolicyInsights.PolicyStateChanged",
- "eventTime": "2021-03-27T18:37:42.5241536Z",
- "dataVersion": "1",
- "metadataVersion": "1"
-}]
-```
-
-For more information, see [Azure Policy events schema](../../../event-grid/event-schema-policy.md).
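The sample event above shows that Azure Policy events carry an `eventType` beginning with `Microsoft.PolicyInsights`. As a hypothetical sketch (the file name `events.json` and the use of `grep` are assumptions for brevity; a real handler would parse the JSON properly), such events can be picked out of a mixed event stream:

```shell
# Keep only Azure Policy events from an Event Grid event array stored
# in events.json (eventType starts with Microsoft.PolicyInsights).
grep -o '"eventType": *"Microsoft\.PolicyInsights[^"]*"' events.json
```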
## Practices for consuming events
governance Policy Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/policy-glossary.md
+
+ Title: Azure Policy glossary
+description: A glossary defining the terminology used throughout Azure Policy
+++ Last updated : 07/13/2022+
+# Azure Policy glossary
+
+The term _policy_ is used widely in virtually every industry and is associated with many use cases. Azure Policy has its own vocabulary and applications, which shouldn't be confused with uses of _policy_ in other contexts.
+
+This glossary provides definitions and descriptions of terms used by Azure Policy.
+
+## Alias
+A field used in policy definitions that maps to a resource property.
+## Applicability
+Describes the relevance of resources that are considered for assessment against a policy. A resource is considered applicable to a policy when it resides within the scope of the policy assignment, is not excluded or exempt from the policy assignment, and meets the conditions specified in the `if` block of the policy rule.
+## Assignment
+A JSON-defined object that determines the resources to which a policy definition is applied. Learn more about the policy assignment JSON structure here: [Azure Policy assignment structure](./concepts/assignment-structure.md).
+## Azure Policy
+A service that enables users to govern Azure resources by enforcing organizational standards and assessing compliance at scale.
+## Built-in
+Describes a type of policy definition that is available by default and generated by Azure Resource Providers. It is the alternative to a custom policy definition. View the list of available [built-in policy definitions](./samples/built-in-policies.md).
+## Category
+Metadata property in the policy definition that classifies the definition based on its area of focus. The category often indicates the resource provider of the target resource (for example, Compute, Storage, or Monitoring).
+## Compliance state
+Describes a resource's adherence to applicable policies. Can be compliant, non-compliant, exempt, conflict, not started, or protected. Learn more about [how compliance works](./how-to/get-compliance-data.md).
+## Compliant
+A compliance state indicating that a resource conformed to the policy rule in the policy definition.
+## Control
+Another term used for _group_, specifically in the context of regulatory compliance.
+## Custom
+Describes a type of policy definition that is authored by a policy user. It is the alternative to a built-in policy definition.
+## Definition
+A JSON-defined object that describes a policy, including resource compliance requirements and the effect to take if they are violated. Learn more about the policy definition JSON structure here: [Azure Policy definition structure](./concepts/definition-structure.md).
+## Definition location
+The scope to which an initiative definition or policy definition can be assigned. It can be either a management group or a subscription, and assignments can be made at or below that scope in the hierarchy.
+## Effect
+The action taken on a resource when the conditions of an applicable policy's rule are met. Learn more about [effects](./concepts/effects.md).
+## Enforcement
+Describes the preventative behavior that certain types of policy effects can have.
+## Enforcement mode
+A property of a policy assignment that allows users to enable or disable enforcement of certain policy effects like deny, while still evaluating for compliance and providing logs.
+## Evaluation
+Describes the process of scanning resources in the cloud environment to determine applicability and compliance of assigned policies.
+## Event
+An incident or outcome when something changes in Azure Policy, available for integration with Event Grid. Example events include instances in which a policy state is created, changed, or deleted. See [available event types for Azure Policy](../../event-grid/event-schema-policy.md).
+## Exclusion
+Also referred to as _NotScopes_: a property in the policy assignment that eliminates child resource containers or child resources from the assignment so they are not considered for compliance evaluation. Excluded scopes do not appear on the Azure portal Compliance blade. Learn more about [excluded scopes](./concepts/assignment-structure.md#excluded-scopes).
+## Exempt
+A compliance state indicating that a resource is covered by an exemption.
+## Exemption
+A JSON-defined object that eliminates a resource hierarchy or an individual resource from evaluation. Resources that are exempt count toward overall compliance, but are not evaluated. Learn more about the exemption JSON structure here: [Azure Policy exemption structure](./concepts/exemption-structure.md).
+## Group
+A sub-collection of policy definition IDs within an initiative definition.
+## Identity
+A system-assigned or user-assigned managed identity used for remediation in Azure Policy. Learn more about [managed identities](../../active-directory/managed-identities-azure-resources/overview.md).
+## Initiative
+Also known as a _policy set_. A type of policy definition consisting of a collection of policy definition IDs. Used to centralize multiple policy definitions with a common goal that can share parameters and identities and be managed in a single assignment.
+## JSON
+Abbreviation for JavaScript Object Notation (JSON). Used by Azure Policy to define policy objects.
+## Mode
+Property on the policy definition that determines which resource types are evaluated for a policy definition. It is configured depending on whether the policy is targeting an Azure Resource Manager (ARM) property defined in an ARM template or a Resource Provider (RP) property.
+## Non-compliant
+A compliance state indicating that a resource did not conform to the policy rule in the policy definition.
+## Policy rule
+The component of a policy definition that describes resource compliance requirements through logic-based conditional statements, as well as the effect taken if those conditions are not met. It is composed of an `if` block and `then` block.
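As an illustrative sketch (not a specific built-in; the condition values here are hypothetical), a policy rule pairs an `if` condition block with a `then` effect block:

```json
{
  "if": {
    "field": "location",
    "notIn": ["eastus", "westus"]
  },
  "then": {
    "effect": "deny"
  }
}
```

Here the rule denies any resource whose location is outside the listed regions.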
+## Policy state
+Describes the aggregated compliance state of a policy assignment.
+## Regulatory Compliance
+Describes a specific type of initiative that allows grouping of policies into controls and categorization of policies into compliance domains based on responsibility (_Customer_, _Microsoft_, _Shared_). There are many sample Regulatory Compliance built-ins, and customers have the ability to create their own. Learn more about [Regulatory Compliance](./concepts/regulatory-compliance.md).
+> [!NOTE]
+> Regulatory Compliance is a Preview feature.
+## Remediation
+A JSON-defined object that, when triggered, corrects resources violating policies with **deployIfNotExists** or **modify** effects. Remediation is only
+automatic for resources during creation or update. Existing resources must be remediated by
+triggering a remediation task. Learn how to [remediate non-compliant resources](./how-to/remediate-resources.md).
+## Scope
+The extent or area to which a policy is relevant, as described by Azure Resource Manager (ARM). It determines the set of resources that an assignment applies to, and may be a subscription, management group, resource group, or resource. Learn more about [scope in Azure Policy](./concepts/scope.md).
+## Template info
+The component of a policy definition used to define the constraint template. Specific to [Azure Policy for Kubernetes clusters](./concepts/policy-for-kubernetes.md).
+
+## Next steps
+
+To get started with Azure Policy, see [What is Azure Policy?](./overview.md).
iot-central How To Connect Iot Edge Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/how-to-connect-iot-edge-transparent-gateway.md
To complete the steps in this article, you need:
To follow the steps in this article, download the following files to your computer: - [Thermostat device model (thermostat-1.json)](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json) - this file is the device model for the downstream devices.-- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-1/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
+- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-1/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
# [IoT Edge 1.2](#tab/edge1-2) To complete the steps in this article, you need:
To follow the steps in this article, download the following files to your computer: - [Thermostat device model (thermostat-1.json)](https://raw.githubusercontent.com/Azure/iot-plugandplay-models/main/dtmi/com/example/thermostat-1.json) - this file is the device model for the downstream devices.-- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-2/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
+- [Transparent gateway manifest (EdgeTransparentGatewayManifest.json)](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-2/EdgeTransparentGatewayManifest.json) - this file is the IoT Edge deployment manifest for the gateway device.
IoT Central relies on the Device Provisioning Service (DPS) to provision devices
1. Run the following command to download the Python script that does the device provisioning: ```bash
- wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-1/provision_device.py
+ wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-1/provision_device.py
``` 1. To provision the `thermostat1` downstream device in your IoT Central application, run the following commands, replacing `{your application id scope}` and `{your device primary key}`. You made a note of these values when you added the devices to your IoT Central application:
To run the thermostat simulator on the `leafdevice` virtual machine:
```bash cd ~
- wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/transparent-gateway-1-1/simple_thermostat.py
+ wget https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/transparent-gateway-1-1/simple_thermostat.py
``` 1. Install the Azure IoT device Python module:
iot-central Howto Connect Eflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-connect-eflow.md
To complete the steps in this article, you need:
* Minimum free disk space: 10 GB * <sup>1</sup> Windows 10 and Windows Server 2019 minimum build 17763 with all current cumulative updates installed.
-To follow the steps in this article, download the [EnvironmentalSensorManifest.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/master/iotedge/EnvironmentalSensorManifest.json) file to your computer.
+To follow the steps in this article, download the [EnvironmentalSensorManifest.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorManifest.json) file to your computer.
## Add device template
iot-central Howto Create Custom Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-create-custom-analytics.md
Use the following steps to import a Databricks notebook that contains the Python
:::image type="content" source="media/howto-create-custom-analytics/databricks-import.png" alt-text="Screenshot of data bricks import.":::
-1. Choose to import from a URL and enter the following address: [https://github.com/Azure-Samples/iot-central-docs-samples/blob/master/databricks/IoT%20Central%20Analysis.dbc?raw=true](https://github.com/Azure-Samples/iot-central-docs-samples/blob/master/databricks/IoT%20Central%20Analysis.dbc?raw=true)
+1. Choose to import from a URL and enter the following address: [https://github.com/Azure-Samples/iot-central-docs-samples/blob/main/databricks/IoT%20Central%20Analysis.dbc?raw=true](https://github.com/Azure-Samples/iot-central-docs-samples/blob/main/databricks/IoT%20Central%20Analysis.dbc?raw=true)
1. To import the notebook, choose **Import**.
iot-central Howto Manage Devices In Bulk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-manage-devices-in-bulk.md
Enter a job name and description, and then select **Rerun job**. A new job is su
## Import devices
-To register a large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/master/bulk-upload-devices). The CSV file should include the following column headers:
+To register a large number of devices to your application, you can bulk import devices from a CSV file. You can find an example CSV file in the [Azure Samples repository](https://github.com/Azure-Samples/iot-central-docs-samples/tree/main/bulk-upload-devices). The CSV file should include the following column headers:
| Column | Description | | - | - |
iot-central Howto Transform Data Internally https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/howto-transform-data-internally.md
JSON output:
The output data is exported to your Azure Data Explorer cluster. To visualize the exported data in Power BI, complete the following steps: 1. Install the Power BI application. You can download the desktop Power BI application from [Go from data to insight to action with Power BI Desktop](https://powerbi.microsoft.com/desktop/).
-1. Download the Power BI desktop [IoT Central ADX Connector.pbit](https://github.com/Azure-Samples/iot-central-docs-samples/raw/master/azure-data-explorer-power-bi/IoT%20Central%20ADX%20Connector.pbit) file from GitHub.
+1. Download the Power BI desktop [IoT Central ADX Connector.pbit](https://github.com/Azure-Samples/iot-central-docs-samples/raw/main/azure-data-explorer-power-bi/IoT%20Central%20ADX%20Connector.pbit) file from GitHub.
1. Use the Power BI Desktop app to open the *IoT Central ADX Connector.pbit* file you downloaded in the previous step. When prompted, enter the Azure Data Explorer cluster, database, and table information you made a note of previously. Now you can visualize the data in Power BI:
iot-central Tutorial In Store Analytics Create App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/retail/tutorial-in-store-analytics-create-app.md
To select a predefined application theme:
3. Select **Save**.
-Rather than use a predefined theme, you can create a custom theme. If you want to use a set of sample images to customize the application and complete the tutorial, download the [Contoso sample images](https://github.com/Azure-Samples/iot-central-docs-samples/tree/master/retail).
+Rather than use a predefined theme, you can create a custom theme. If you want to use a set of sample images to customize the application and complete the tutorial, download the [Contoso sample images](https://github.com/Azure-Samples/iot-central-docs-samples/tree/main/retail).
To create a custom theme:
iot-dps Tutorial Custom Hsm Enrollment Group X509 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/tutorial-custom-hsm-enrollment-group-x509.md
Title: Tutorial - Provision X.509 devices to Azure IoT Hub using a custom Hardwa
description: This tutorial uses enrollment groups. In this tutorial, you learn how to provision X.509 devices using a custom Hardware Security Module (HSM) and the C device SDK for Azure IoT Hub Device Provisioning Service (DPS). Previously updated : 06/20/2022 Last updated : 07/12/2022
In this tutorial, you'll learn how to provision groups of IoT devices that use X.509 certificates for authentication. Sample device code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c) will be executed on your development machine to simulate provisioning of X.509 devices. On real devices, device code would be deployed and run from the IoT device.
-The Azure IoT Device Provisioning Service supports two types of enrollments for provisioning devices:
+The Azure IoT Hub Device Provisioning Service supports two types of enrollments for provisioning devices:
-* [Enrollment groups](concepts-service.md#enrollment-group): Used to enroll multiple related devices.
+* [Enrollment groups](concepts-service.md#enrollment-group): Used to enroll multiple related devices. This tutorial demonstrates provisioning with enrollment groups.
* [Individual Enrollments](concepts-service.md#individual-enrollment): Used to enroll a single device.
-You'll use an enrollment group to provision a set of devices that authenticate using X.509 certificates. To learn how to provision a set of devices using [symmetric keys](./concepts-symmetric-key-attestation.md), see [How to provision devices using symmetric key enrollment groups](how-to-legacy-device-symm-key.md). If you're unfamiliar with the process of autoprovisioning, review the [provisioning](about-iot-dps.md#provisioning-process) overview.
+The Azure IoT Hub Device Provisioning Service supports three forms of authentication for provisioning devices:
-This tutorial uses the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example), which provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but strongly recommended to help protect sensitive information like your device certificate's private key.
+* [X.509 certificates](concepts-x509-attestation.md). This tutorial demonstrates X.509 certificate attestation.
+* [Trusted platform module (TPM)](concepts-tpm-attestation.md)
+* [Symmetric keys](./concepts-symmetric-key-attestation.md)
-In this tutorial you'll complete the following objectives:
+This tutorial uses the [custom HSM sample](https://github.com/Azure/azure-iot-sdk-c/tree/master/provisioning_client/samples/custom_hsm_example), which provides a stub implementation for interfacing with hardware-based secure storage. A [Hardware Security Module (HSM)](./concepts-service.md#hardware-security-module) is used for secure, hardware-based storage of device secrets. An HSM can be used with symmetric key, X.509 certificate, or TPM attestation to provide secure storage for secrets. Hardware-based storage of device secrets isn't required, but it is strongly recommended to help protect sensitive information like your device certificate's private key.
+
+In this tutorial, you'll complete the following objectives:
> [!div class="checklist"]
>
> * Create a certificate chain of trust to organize a set of devices using X.509 certificates.
> * Complete proof of possession with a signing certificate used with the certificate chain.
-> * Create a new group enrollment that uses the certificate chain
-> * Set up the development environment for provisioning a device using code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c)
+> * Create a new group enrollment that uses the certificate chain.
+> * Set up the development environment for provisioning a device using code from the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c).
+> * Provision a device using the certificate chain with the custom Hardware Security Module (HSM) sample in the SDK.

## Prerequisites
The following prerequisites are for a Windows development environment used to si
## Prepare the Azure IoT C SDK development environment
-In this section, you'll prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used by X.509 devices provisioning with DPS.
+In this section, you'll prepare a development environment used to build the [Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c). The SDK includes sample code and tools used by devices provisioning with DPS.
1. Open a web browser, and go to the [Release page of the Azure IoT C SDK](https://github.com/Azure/azure-iot-sdk-c/releases/latest).
In this section, you'll prepare a development environment used to build the [Azu
## Create an X.509 certificate chain
-In this section you, will generate an X.509 certificate chain of three certificates for testing each device with this tutorial. The certificates will have the following hierarchy.
+In this section, you'll generate an X.509 certificate chain of three certificates for testing each device with this tutorial. The certificates have the following hierarchy.
:::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/example-device-cert-chain.png" alt-text="Diagram that shows relationship of root C A, intermediate C A, and device certificates." border="false":::
-[Root certificate](concepts-x509-attestation.md#root-certificate): You'll complete [proof of possession](how-to-verify-certificates.md) to verify the root certificate. This verification will enable DPS to trust that certificate and verify certificates signed by it.
+[Root certificate](concepts-x509-attestation.md#root-certificate): You'll complete [proof of possession](how-to-verify-certificates.md) to verify the root certificate. This verification enables DPS to trust that certificate and verify certificates signed by it.
-[Intermediate Certificate](concepts-x509-attestation.md#intermediate-certificate): It's common for intermediate certificates to be used to group devices logically by product lines, company divisions, or other criteria. This tutorial will use a certificate chain composed of one intermediate certificate. The intermediate certificate will be signed by the root certificate. This certificate will also be used on the enrollment group created in DPS to logically group a set of devices. This configuration allows managing a whole group of devices that have device certificates signed by the same intermediate certificate. You can create enrollment groups for enabling or disabling a group of devices. For more information on disabling a group of devices, see [Disallow an X.509 intermediate or root CA certificate by using an enrollment group](how-to-revoke-device-access-portal.md#disallow-an-x509-intermediate-or-root-ca-certificate-by-using-an-enrollment-group)
+[Intermediate certificate](concepts-x509-attestation.md#intermediate-certificate): It's common to use intermediate certificates to group devices logically by product lines, company divisions, or other criteria. This tutorial uses a certificate chain with one intermediate certificate, but in a production scenario you may have several. The intermediate certificate in this chain is signed by the root certificate. This certificate is provided to the enrollment group created in DPS to logically group a set of devices. This configuration allows managing a whole group of devices that have device certificates signed by the same intermediate certificate.
-[Device certificates](concepts-x509-attestation.md#end-entity-leaf-certificate): The device (leaf) certificates will be signed by the intermediate certificate and stored on the device along with its private key. Ideally these sensitive items would be stored securely with an HSM. Each device will present its certificate and private key, along with the certificate chain when attempting provisioning.
+[Device certificates](concepts-x509-attestation.md#end-entity-leaf-certificate): The device certificates (sometimes called leaf certificates) will be signed by the intermediate certificate and stored on the device along with its private key. Ideally these sensitive items would be stored securely with an HSM. Each device presents its certificate and private key, along with the certificate chain, when attempting provisioning.
### Set up the X.509 OpenSSL environment

In this section, you'll create the OpenSSL configuration files, directory structure, and other files used by the OpenSSL commands.
-1. In your Git Bash command prompt, navigate to a folder where you want to generate the X.509 certificates and keys you'll use in this tutorial.
+1. In your Git Bash command prompt, navigate to a folder where you want to generate the X.509 certificates and keys for this tutorial.
1. Create an OpenSSL configuration file for your root CA certificate. OpenSSL configuration files contain policies and definitions that are consumed by OpenSSL commands. Copy and paste the following text into a file named *openssl_root_ca.cnf*:
Run the following commands to create the intermediate CA private key and the int
### Create the device certificates
-In this section you create the device certificates and the full chain device certificates. The full chain certificate contains the device certificate, the intermediate CA certificate, and the root CA certificate. The device must present its full chain certificate when it registers with DPS.
+In this section, you create two device certificates and their full chain certificates. The full chain certificate contains the device certificate, the intermediate CA certificate, and the root CA certificate. The device must present its full chain certificate when it registers with DPS.
-1. Create the device private key.
+1. Create the first device private key.
```bash
openssl genrsa -out ./private/device-01.key.pem 4096
```
In this section you create the device certificates and the full chain device cer
1. Create the device certificate CSR.
- The subject common name (CN) of the device certificate must be set to the [Registration ID](./concepts-service.md#registration-id) that your device will use to register with DPS. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. DPS supports registration IDs up to 128 characters long; however, the maximum length of the subject common name in an X.509 certificate is 64 characters. The registration ID, therefore, is limited to 64 characters when using X.509 certificates. For group enrollments, the registration ID is also used as the device ID in IoT Hub.
+ The subject common name (CN) of the device certificate must be set to the [registration ID](./concepts-service.md#registration-id) that your device will use to register with DPS. The registration ID is a case-insensitive string of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). The common name must adhere to this format. DPS supports registration IDs up to 128 characters long; however, the maximum length of the subject common name in an X.509 certificate is 64 characters. The registration ID, therefore, is limited to 64 characters when using X.509 certificates. For group enrollments, the registration ID is also used as the device ID in IoT Hub.
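As an aside, the registration ID rules described above can be captured in a small validation sketch. This is a hypothetical helper, not part of the SDK sample; shown in Go for brevity:

```go
package main

import (
	"fmt"
	"regexp"
)

// idPattern encodes the documented rules: alphanumeric characters plus
// '-', '.', '_', ':'; the last character must be alphanumeric or dash.
var idPattern = regexp.MustCompile(`^[A-Za-z0-9._:-]*[A-Za-z0-9-]$`)

// validRegistrationID also enforces the 64-character limit that applies
// when the registration ID is used as an X.509 subject common name.
func validRegistrationID(id string) bool {
	return len(id) >= 1 && len(id) <= 64 && idPattern.MatchString(id)
}

func main() {
	fmt.Println(validRegistrationID("device-01"))  // valid
	fmt.Println(validRegistrationID("device_01:")) // invalid: ends with ':'
}
```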
- The subject common name is set in the `-subj` parameter in the following command.
+ The subject common name is set using the `-subj` parameter. In the following command, the common name is set to **device-01**.
# [Windows](#tab/windows)
In this section you create the device certificates and the full chain device cer
RSA Public-Key: (4096 bit)
```
-1. The device must present the full certificate chain when it authenticates with DPS.
+1. The device must present the full certificate chain when it authenticates with DPS. Use the following command to create the certificate chain:
```bash
cat ./certs/device-01.cert.pem ./certs/azure-iot-test-only.intermediate.cert.pem ./certs/azure-iot-test-only.root.ca.cert.pem > ./certs/device-01-full-chain.cert.pem
```
- Use a text editor and open the certificate chain file, *./certs/device-01-full-chain.cert.pem*. The certificate chain text contains the full chain of all three certificates. You'll use this text as the certificate chain with in the custom HSM device code later in this tutorial for `device-01`.
+1. Open the certificate chain file, *./certs/device-01-full-chain.cert.pem*, in a text editor to examine it. The certificate chain text contains the full chain of all three certificates. You'll use this text as the certificate chain in the custom HSM device code later in this tutorial for `device-01`.
The full chain text has the following format:
In this section you create the device certificates and the full chain device cer
-----END CERTIFICATE-----
```
-1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your GitBash command prompt. To create additional devices, you can modify the `registration_id` variable declared at the beginning of the script.
+1. To create the private key, X.509 certificate, and full chain certificate for the second device, copy and paste this script into your GitBash command prompt. To create certificates for more devices, you can modify the `registration_id` variable declared at the beginning of the script.
```bash
registration_id=device-02
You'll use the following files in the rest of this tutorial:
## Verify ownership of the root certificate
-For DPS to be able to validate the device's certificate chain during authentication, you must upload and verify ownership of the root CA certificate. Because you created the root CA certificate in the last section, you'll auto-verify that it's valid when you upload it. Alternatively, you can do manual verification of the certificate if you're using a CA certificate from a 3rd-party. To learn more about verifying CA certificates, see [How to do proof-of-possession for X.509 CA certificates with your Device Provisioning Service](how-to-verify-certificates.md).
+For DPS to be able to validate the device's certificate chain during authentication, you must upload and verify ownership of the root CA certificate. Because you created the root CA certificate in the last section, you'll auto-verify that it's valid when you upload it. Alternatively, you can manually verify the certificate if you're using a CA certificate from a third party. To learn more about verifying CA certificates, see [How to do proof-of-possession for X.509 CA certificates](how-to-verify-certificates.md).
-To add the root CA certificate, follow these steps:
+To add the root CA certificate to your DPS instance, follow these steps:
-1. Sign in to the Azure portal, select the **All resources** button on the left-hand menu and open your Device Provisioning Service.
+1. Sign in to the [Azure portal](https://portal.azure.com), select the **All resources** button on the left-hand menu and open your Device Provisioning Service instance.
1. Open **Certificates** from the left-hand menu and then select **+ Add** at the top of the panel to add a new certificate.
1. Enter a friendly display name for your certificate. Browse to the location of the root CA certificate file `certs/azure-iot-test-only.root.ca.cert.pem`. Select **Upload**.
-1. Select the box next to **Set certificate status to verified on upload*.
+1. Select the box next to **Set certificate status to verified on upload**.
- :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/add-root-certificate.png" alt-text="Screenshot that shows adding adding the root C A certificate and the set certificate status to verified on upload box selected.":::
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/add-root-certificate.png" alt-text="Screenshot that shows adding the root C A certificate and the set certificate status to verified on upload box selected.":::
1. Select **Save**.
To add the signing certificates to the certificate store in Windows-based device
1. In a Git bash prompt, convert your signing certificates to `.pfx` as follows.
- root CA certificate:
+ Root CA certificate:
```bash
openssl pkcs12 -inkey ./private/azure-iot-test-only.root.ca.key.pem -in ./certs/azure-iot-test-only.root.ca.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/root.pfx
```
- intermediate CA certificate:
+ Intermediate CA certificate:
```bash
openssl pkcs12 -inkey ./private/azure-iot-test-only.intermediate.key.pem -in ./certs/azure-iot-test-only.intermediate.cert.pem -export -passin pass:1234 -passout pass:1234 -out ./certs/intermediate.pfx
```
-2. Right-click the Windows **Start** button. Then left-click **Run**. Enter *certmgr.msc* and click **Ok** to start certificate manager MMC snap-in.
+2. Right-click the Windows **Start** button. Then select **Run**. Enter *certmgr.msc* and select **Ok** to start certificate manager MMC snap-in.
-3. In certificate manager, under **Certificates - Current User**, click **Trusted Root Certification Authorities**. Then on the menu, click **Action** > **All Tasks** > **Import** to import `root.pfx`.
+3. In certificate manager, under **Certificates - Current User**, select **Trusted Root Certification Authorities**. Then on the menu, select **Action** > **All Tasks** > **Import** to import `root.pfx`.
* Make sure to search by **Personal Information Exchange (.pfx)**.
* Use `1234` as the password.
* Place the certificate in the **Trusted Root Certification Authorities** certificate store.
-4. In certificate manager, under **Certificates - Current User**, click **Intermediate Certification Authorities**. Then on the menu, click **Action** > **All Tasks** > **Import** to import `intermediate.pfx`.
+4. In certificate manager, under **Certificates - Current User**, select **Intermediate Certification Authorities**. Then on the menu, select **Action** > **All Tasks** > **Import** to import `intermediate.pfx`.
* Make sure to search by **Personal Information Exchange (.pfx)**.
* Use `1234` as the password.
Your signing certificates are now trusted on the Windows-based device and the fu
## Create an enrollment group
-1. From your DPS in Azure portal, select the **Manage enrollments** tab. then select the **Add enrollment group** button at the top.
+1. From your DPS instance in the Azure portal, select the **Manage enrollments** tab, then select the **Add enrollment group** button at the top.
1. In the **Add Enrollment Group** panel, enter the following information, then select **Save**.
- :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/custom-hsm-enrollment-group-x509.png" alt-text="Screenshot that shows adding an enrollment group in the portal.":::
   | Field | Value |
   | :-- | :-- |
   | **Group name** | For this tutorial, enter **custom-hsm-x509-devices**. The enrollment group name is a case-insensitive string (up to 128 characters long) of alphanumeric characters plus the special characters: `'-'`, `'.'`, `'_'`, `':'`. The last character must be alphanumeric or dash (`'-'`). |
   | **Attestation Type** | Select **Certificate** |
   | **IoT Edge device** | Select **False** |
   | **Certificate Type** | Select **Intermediate Certificate** |
- | **Primary certificate .pem or .cer file** | Navigate to the intermediate you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*). This intermediate certificate is signed by the root certificate that you already uploaded and verified. DPS trusts that root once it is verified. DPS can verify the intermediate provided with this enrollment group is truly signed by the trusted root. DPS will trust each intermediate truly signed by that root certificate, and therefore be able to verify and trust leaf certificates signed by the intermediate. |
+ | **Primary certificate .pem or .cer file** | Navigate to the intermediate certificate that you created earlier (*./certs/azure-iot-test-only.intermediate.cert.pem*). This intermediate certificate is signed by the root certificate that you already uploaded and verified. DPS trusts that root once it's verified. DPS can verify that the intermediate provided with this enrollment group is truly signed by the trusted root. DPS will trust each intermediate truly signed by that root certificate, and therefore be able to verify and trust leaf certificates signed by the intermediate. |
+
+ :::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/custom-hsm-enrollment-group-x509.png" alt-text="Screenshot that shows adding an enrollment group in the portal.":::
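The trust relationship the enrollment group relies on — a verified root signs the intermediate, which signs each device certificate — can be illustrated with generic X.509 tooling. The following sketch (illustrative only, not DPS code; all names are made up) builds a three-certificate chain with Go's `crypto/x509` and verifies the leaf the way a relying party would:

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

// newCert signs a template with the parent certificate and key, then parses the result.
func newCert(tmpl, parent *x509.Certificate, pub *ecdsa.PublicKey, signer *ecdsa.PrivateKey) *x509.Certificate {
	der, err := x509.CreateCertificate(rand.Reader, tmpl, parent, pub, signer)
	if err != nil {
		panic(err)
	}
	cert, err := x509.ParseCertificate(der)
	if err != nil {
		panic(err)
	}
	return cert
}

// buildAndVerifyChain creates a root -> intermediate -> device chain and
// verifies the device (leaf) certificate against the trusted root.
func buildAndVerifyChain() error {
	key := func() *ecdsa.PrivateKey {
		k, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		return k
	}
	rootKey, intKey, devKey := key(), key(), key()

	ca := func(serial int64, cn string) *x509.Certificate {
		return &x509.Certificate{
			SerialNumber:          big.NewInt(serial),
			Subject:               pkix.Name{CommonName: cn},
			NotBefore:             time.Now().Add(-time.Hour),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			BasicConstraintsValid: true,
			KeyUsage:              x509.KeyUsageCertSign,
		}
	}
	rootTmpl := ca(1, "Test Only Root CA")
	root := newCert(rootTmpl, rootTmpl, &rootKey.PublicKey, rootKey) // self-signed
	intermediate := newCert(ca(2, "Test Only Intermediate CA"), root, &intKey.PublicKey, rootKey)

	// Leaf certificate: in DPS, the subject CN is the registration ID.
	devTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(3),
		Subject:      pkix.Name{CommonName: "device-01"},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature,
	}
	device := newCert(devTmpl, intermediate, &devKey.PublicKey, intKey)

	roots := x509.NewCertPool()
	roots.AddCert(root)
	inters := x509.NewCertPool()
	inters.AddCert(intermediate)
	_, err := device.Verify(x509.VerifyOptions{
		Roots:         roots,
		Intermediates: inters,
		KeyUsages:     []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	})
	return err
}

func main() {
	fmt.Println("chain verifies:", buildAndVerifyChain() == nil)
}
```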
## Configure the provisioning device code

In this section, you update the sample code with your Device Provisioning Service instance information. If a device is authenticated, it will be assigned to an IoT hub linked to the Device Provisioning Service instance configured in this section.
-1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service and note the **_ID Scope_** value.
+1. In the Azure portal, select the **Overview** tab for your Device Provisioning Service instance and note the **ID Scope** value.
:::image type="content" source="./media/tutorial-custom-hsm-enrollment-group-x509/copy-id-scope.png" alt-text="Screenshot that shows the ID scope on the DPS overview pane.":::
In this section, you update the sample code with your Device Provisioning Servic
3. In Solution Explorer for Visual Studio, navigate to **Provision_Samples > prov_dev_client_sample > Source Files** and open *prov_dev_client_sample.c*.
-4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier.
+4. Find the `id_scope` constant, and replace the value with your **ID Scope** value that you copied earlier. For example:
```c
static const char* id_scope = "0ne00000A0A";
```
-5. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_X509` as shown below.
+5. Find the definition for the `main()` function in the same file. Make sure the `hsm_type` variable is set to `SECURE_DEVICE_TYPE_X509` and that all other `hsm_type` lines are commented out. For example:
```c
SECURE_DEVICE_TYPE hsm_type;
In this section, you update the sample code with your Device Provisioning Servic
The specifics of interacting with actual secure hardware-based storage vary depending on the device hardware. The certificate chains used by the simulated devices in this tutorial will be hardcoded in the custom HSM stub code. In a real-world scenario, the certificate chain would be stored in the actual HSM hardware to provide better security for sensitive information. Methods similar to the stub methods used in this sample would then be implemented to read the secrets from that hardware-based storage.
-While HSM hardware isn't required, it is recommended to protect sensitive information, like the certificate's private key. If an actual HSM was being called by the sample, the private key wouldn't be present in the source code. Having the key in the source code exposes the key to anyone that can view the code. This is only done in this article to assist with learning.
+While HSM hardware isn't required, it is recommended to protect sensitive information like the certificate's private key. If the sample called an actual HSM, the private key wouldn't be present in the source code. Having the key in the source code exposes the key to anyone who can view the code. This is done in this article only to assist with learning.
To update the custom HSM stub code to simulate the identity of the device with ID `device-01`, perform the following steps:
To update the custom HSM stub code to simulate the identity of the device with I
"--END CERTIFICATE--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `CERTIFICATE` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `CERTIFICATE` string constant value and writes it to the output.
```bash
sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' ./certs/device-01-full-chain.cert.pem
```
To update the custom HSM stub code to simulate the identity of the device with I
"--END RSA PRIVATE KEY--"; ```
- Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command will generate the syntax for the `PRIVATE_KEY` string constant value and write it to the output.
+ Updating this string value manually can be prone to error. To generate the proper syntax, you can copy and paste the following command into your **Git Bash prompt**, and press **ENTER**. This command generates the syntax for the `PRIVATE_KEY` string constant value and writes it to the output.
```bash
sed -e 's/^/"/;$ !s/$/""\\n"/;$ s/$/"/' ./private/device-01.key.pem
```
To update the custom HSM stub code to simulate the identity of the device with I
5. Save your changes.
-6. Right-click the **custom_hsm_-_example** project and select **Build**.
+6. Right-click the **custom_hsm_example** project and select **Build**.
> [!IMPORTANT]
> You must build the **custom_hsm_example** project before you build the rest of the solution in the next section.
To update the custom HSM stub code to simulate the identity of the device with I
1. On the Visual Studio menu, select **Debug** > **Start without debugging** to run the solution. When prompted to rebuild the project, select **Yes** to rebuild it before running.
- The following output is an example of simulated device `device-01` successfully booting up, and connecting to the provisioning service. The device was assigned to an IoT hub and registered:
+ The following output is an example of simulated device `device-01` successfully booting up and connecting to the provisioning service. The device was assigned to an IoT hub and registered:
```output
Provisioning API Version: 1.8.0
To update the custom HSM stub code to simulate the identity of the device with I
Examine the registration records of the enrollment group to see the registration details for your devices:
-1. In Azure portal, go to your Device Provisioning Service.
+1. In the Azure portal, go to your Device Provisioning Service instance.
1. In the **Settings** menu, select **Manage enrollments**.
When you're finished testing and exploring this device client sample, use the fo
1. Close the device client sample output window on your machine.
-1. From the left-hand menu in the Azure portal, select **All resources** and then select your Device Provisioning Service. Open **Manage Enrollments** for your service, and then select the **Enrollment Groups** tab. Select the check box next to the *Group Name* of the device group you created in this tutorial, and press the **Delete** button at the top of the pane.
+1. From the left-hand menu in the Azure portal, select **All resources** and then select your Device Provisioning Service instance. Open **Manage Enrollments** for your service, and then select the **Enrollment Groups** tab. Select the check box next to the *Group Name* of the device group you created in this tutorial, and select **Delete** at the top of the pane.
-1. Click **Certificates** in DPS. For each certificate you uploaded and verified in this tutorial, click the certificate and click the **Delete** button to remove it.
+1. Select **Certificates** in DPS. For each certificate you uploaded and verified in this tutorial, select the certificate and select **Delete** to remove it.
-1. From the left-hand menu in the Azure portal, select **All resources** and then select your IoT hub. Open **IoT devices** for your hub. Select the check box next to the *DEVICE ID* of the device that you registered in this tutorial. Click the **Delete** button at the top of the pane.
+1. From the left-hand menu in the Azure portal, select **All resources** and then select your IoT hub. Open **IoT devices** for your hub. Select the check box next to the *DEVICE ID* of the device that you registered in this tutorial. Select **Delete** at the top of the pane.
## Next steps
-In this tutorial, you provisioned an X.509 device using a custom HSM to your IoT hub. To learn how to provision IoT devices to multiple hubs continue to the next tutorial.
+In this tutorial, you provisioned an X.509 device to your IoT hub using a custom HSM sample. To learn how to provision IoT devices to multiple hubs, continue to the next tutorial.
> [!div class="nextstepaction"]
> [Tutorial: Provision devices across load-balanced IoT hubs](tutorial-provision-multiple-hubs.md)
iot-edge How To Configure Proxy Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-configure-proxy-support.md
This step takes place once on the IoT Edge device during initial device setup.
```bash
sudo iotedge config apply
```
+
+6. Verify that your proxy settings are propagated by checking the `Env` section of the `docker inspect edgeAgent` output. If they aren't present, recreate the container:
+
+ ```bash
+ sudo docker rm -f edgeAgent
+ ```
+
+7. The IoT Edge runtime should recreate `edgeAgent` within a minute. Once the `edgeAgent` container is running again, run `docker inspect edgeAgent` and verify that the proxy settings match the configuration file.
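If you script this check, the `Env` section can be parsed from the `docker inspect` JSON output. The following sketch filters the proxy-related variables; it's a hypothetical helper with made-up sample values, not part of IoT Edge:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

// proxyEnv returns the proxy-related variables from the Env section of
// `docker inspect` output (which is a JSON array of container objects).
func proxyEnv(inspectJSON []byte) []string {
	var containers []struct {
		Config struct{ Env []string }
	}
	if err := json.Unmarshal(inspectJSON, &containers); err != nil || len(containers) == 0 {
		return nil
	}
	var found []string
	for _, e := range containers[0].Config.Env {
		name := strings.ToLower(strings.SplitN(e, "=", 2)[0])
		if name == "https_proxy" || name == "http_proxy" || name == "no_proxy" {
			found = append(found, e)
		}
	}
	return found
}

func main() {
	// Abbreviated, hypothetical `docker inspect edgeAgent` output.
	sample := []byte(`[{"Config":{"Env":["https_proxy=http://proxy.example.com:3128","UpstreamProtocol=AmqpWs"]}}]`)
	fmt.Println(proxyEnv(sample))
}
```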
:::moniker-end
<!-- end iotedge-2020-11 -->
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| Debian 11 (Bullseye) | | ![Debian + ARM32v7](./media/support/green-check.png) | |
-| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
| Windows 10/11 Pro | ![Windows 10/11 Pro + AMD64](./media/support/green-check.png) | | |
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| Debian 11 (Bullseye) | | ![Debian + ARM32v7](./media/support/green-check.png) | |
-| Raspberry Pi OS Stretch | | ![Raspberry Pi OS Stretch + ARM32v7](./media/support/green-check.png) | |
| Red Hat Enterprise Linux 8 | ![Red Hat Enterprise Linux 8 + AMD64](./media/support/green-check.png) | | |
| Ubuntu Server 20.04 | ![Ubuntu Server 20.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 20.04 + ARM64](./media/support/green-check.png) |
| Ubuntu Server 18.04 | ![Ubuntu Server 18.04 + AMD64](./media/support/green-check.png) | | ![Ubuntu Server 18.04 + ARM64](./media/support/green-check.png) |
The systems listed in the following table are considered compatible with Azure I
| Operating System | AMD64 | ARM32v7 | ARM64 |
| - | -- | - | -- |
| [CentOS-7](https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7) | ![CentOS + AMD64](./media/support/green-check.png) | ![CentOS + ARM32v7](./media/support/green-check.png) | ![CentOS + ARM64](./media/support/green-check.png) |
-| [Debian 9](https://www.debian.org/releases/stretch/) | ![Debian 9 + AMD64](./media/support/green-check.png) | ![Debian 9 + ARM32v7](./media/support/green-check.png) | ![Debian 9 + ARM64](./media/support/green-check.png) |
| [Debian 10 <sup>1</sup>](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/support/green-check.png) | ![Debian 10 + ARM32v7](./media/support/green-check.png) | ![Debian 10 + ARM64](./media/support/green-check.png) |
| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) |
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) |
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/quick-create-go.md
ms.devlang: golang
In this quickstart, you'll learn to use the Azure SDK for Go to manage certificates in an Azure Key Vault.
-Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](../general/overview.md).
+Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](../general/overview.md).
-Follow this guide to learn how to use the [azcertificates](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/keyvault/azcertificates) package to manage your Azure Key Vault certificates using Go.
+Follow this guide to learn how to use the [azcertificates](https://aka.ms/azsdk/go/keyvault-certificates/docs) package to manage your Azure Key Vault certificates using Go.
## Prerequisites

- An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-- **Go installed**: Version 1.16 or [above](https://go.dev/dl/)
+- **Go installed**: Version 1.18 or [above](https://go.dev/dl/)
- [Azure CLI](/cli/azure/install-azure-cli) ## Set up your environment
Follow this guide to learn how to use the [azcertificates](https://pkg.go.dev/gi
1. Deploy a new key vault instance. ```azurecli
- az keyvault create --name <keyVaultName> --resource-group myResourceGroup
+ az keyvault create --name <keyVaultName> --resource-group myResourceGroup
``` Replace `<keyVaultName>` with a name that's unique across all of Azure. You typically use your personal or company name along with other numbers and identifiers.
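Key vault names must be globally unique. As a quick sketch (the `kv-quickstart` prefix and the `$RANDOM` suffix are illustrative, not part of the quickstart), you could generate a candidate name before running the command above:

```shell
# Build a candidate vault name: letters, digits, and hyphens, 3-24 characters (bash)
keyVaultName="kv-quickstart-$RANDOM"
echo "$keyVaultName"

# Then create the vault with that name (requires a logged-in Azure CLI session):
# az keyvault create --name "$keyVaultName" --resource-group myResourceGroup
```

Uniqueness still isn't guaranteed; if the name is taken, `az keyvault create` reports a conflict and you can rerun with a new suffix.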
package main
import ( "context" "fmt"
+ "log"
"time" "github.com/Azure/azure-sdk-for-go/sdk/azcore/to"
import (
"github.com/Azure/azure-sdk-for-go/sdk/keyvault/azcertificates" )
-var (
- ctx = context.Background()
-)
- func getClient() *azcertificates.Client {- keyVaultName := os.Getenv("KEY_VAULT_NAME") if keyVaultName == "" {
- panic("KEY_VAULT_NAME environment variable not set")
+ log.Fatal("KEY_VAULT_NAME environment variable not set")
} keyVaultUrl := fmt.Sprintf("https://%s.vault.azure.net/", keyVaultName) cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil {
- panic(err)
+ log.Fatal(err)
}
- client, err := azcertificates.NewClient(keyVaultUrl, cred, nil)
- if err != nil {
- panic(err)
- }
- return client
+ return azcertificates.NewClient(keyVaultUrl, cred, nil)
} func createCert(client *azcertificates.Client) {
- resp, err := client.BeginCreateCertificate(ctx, "myCertName", azcertificates.CertificatePolicy{
- IssuerParameters: &azcertificates.IssuerParameters{
- Name: to.StringPtr("Self"),
- },
- X509CertificateProperties: &azcertificates.X509CertificateProperties{
- Subject: to.StringPtr("CN=DefaultPolicy"),
+ params := azcertificates.CreateCertificateParameters{
+ CertificatePolicy: &azcertificates.CertificatePolicy{
+ IssuerParameters: &azcertificates.IssuerParameters{
+ Name: to.Ptr("Self"),
+ },
+ X509CertificateProperties: &azcertificates.X509CertificateProperties{
+ Subject: to.Ptr("CN=DefaultPolicy"),
+ },
},
- }, nil)
- if err != nil {
- panic(err)
}-
- pollerResp, err := resp.PollUntilDone(ctx, 1*time.Second)
+ resp, err := client.CreateCertificate(context.TODO(), "myCertName", params, nil)
if err != nil {
- panic(err)
+ log.Fatal(err)
}
- fmt.Printf("Created certificate with ID: %s\n", *pollerResp.ID)
+
+ fmt.Printf("Requested a new certificate. Operation status: %s\n", *resp.Status)
} func getCert(client *azcertificates.Client) {
- getResp, err := client.GetCertificate(ctx, "myCertName", nil)
+ // an empty string version gets the latest version of the certificate
+ version := ""
+ getResp, err := client.GetCertificate(context.TODO(), "myCertName", version, nil)
if err != nil {
- panic(err)
+ log.Fatal(err)
}
- fmt.Println("Enabled set to:", *getResp.Properties.Enabled)
+ fmt.Println("Enabled set to:", *getResp.Attributes.Enabled)
} func listCert(client *azcertificates.Client) {
- poller := client.ListCertificates(nil)
- for poller.NextPage(ctx) {
- for _, cert := range poller.PageResponse().Certificates {
+ pager := client.NewListCertificatesPager(nil)
+ for pager.More() {
+ page, err := pager.NextPage(context.Background())
+ if err != nil {
+ log.Fatal(err)
+ }
+ for _, cert := range page.Value {
fmt.Println(*cert.ID) } }
- if poller.Err() != nil {
- panic(poller.Err)
- }
} func updateCert(client *azcertificates.Client) { // disables the certificate, sets an expires date, and add a tag
- _, err := client.UpdateCertificateProperties(ctx, "myCertName", &azcertificates.UpdateCertificatePropertiesOptions{
- Version: "myNewVersion",
- CertificateAttributes: &azcertificates.CertificateProperties{
- Enabled: to.BoolPtr(false),
- Expires: to.TimePtr(time.Now().Add(72 * time.Hour)),
+ params := azcertificates.UpdateCertificateParameters{
+ CertificateAttributes: &azcertificates.CertificateAttributes{
+ Enabled: to.Ptr(false),
+ Expires: to.Ptr(time.Now().Add(72 * time.Hour)),
},
- Tags: map[string]string{"Owner": "SRE"},
- })
+ Tags: map[string]*string{"Owner": to.Ptr("SRE")},
+ }
+ // an empty string version updates the latest version of the certificate
+ version := ""
+ _, err := client.UpdateCertificate(context.TODO(), "myCertName", version, params, nil)
if err != nil {
- panic(err)
+ log.Fatal(err)
	} fmt.Println("Updated certificate properties: Enabled=false, Expires=72h, Tags=SRE") } func deleteCert(client *azcertificates.Client) {
- pollerResp, err := client.BeginDeleteCertificate(ctx, "myCertName", nil)
- if err != nil {
- panic(err)
- }
- finalResp, err := pollerResp.PollUntilDone(ctx, time.Second)
+ // DeleteCertificate returns when Key Vault has begun deleting the certificate. That can take several
+ // seconds to complete, so it may be necessary to wait before performing other operations on the
+ // deleted certificate.
+ resp, err := client.DeleteCertificate(context.TODO(), "myCertName", nil)
if err != nil {
- panic(err)
+ log.Fatal(err)
}
- fmt.Println("Deleted certificate with ID: ", *finalResp.ID)
+ fmt.Println("Deleted certificate with ID: ", *resp.ID)
} func main() {
go run main.go
## Code examples
-**Authenticate and create a client**
-
-```go
-cred, err := azidentity.NewDefaultAzureCredential(nil)
-if err != nil {
- panic(err)
-}
-
-client, err = azcertificates.NewClient("https://my-key-vault.vault.azure.net/", cred, nil)
-if err != nil {
- panic(err)
-}
-```
-
-**Create a certificate**
-
-```go
-resp, err := client.BeginCreateCertificate(context.TODO(), "myCert", azcertificates.CertificatePolicy{
- IssuerParameters: &azcertificates.IssuerParameters{
- Name: to.StringPtr("Self"),
- },
- X509CertificateProperties: &azcertificates.X509CertificateProperties{
- Subject: to.StringPtr("CN=DefaultPolicy"),
- },
-}, nil)
-if err != nil {
- panic(err)
-}
-
-pollerResp, err := resp.PollUntilDone(context.TODO(), 1*time.Second)
-if err != nil {
- panic(err)
-}
-fmt.Println(*pollerResp.ID)
-```
-
-**Get a certificate**
-
-```go
-getResp, err := client.GetCertificate(context.TODO(), "myCertName", nil)
-if err != nil {
- panic(err)
-}
-fmt.Println(*getResp.ID)
-
-//optionally you can get a specific version
-getResp, err = client.GetCertificate(context.TODO(), "myCertName", &azcertificates.GetCertificateOptions{Version: "myCertVersion"})
-if err != nil {
- panic(err)
-}
-```
-
-**List certificates**
-
-```go
-poller := client.ListCertificates(nil)
-for poller.NextPage(context.TODO()) {
- for _, cert := range poller.PageResponse().Certificates {
- fmt.Println(*cert.ID)
- }
-}
-if poller.Err() != nil {
- panic(err)
-}
-```
-
-**Update a certificate**
-
-```go
-_, err := client.UpdateCertificateProperties(context.TODO(), "myCertName", &azcertificates.UpdateCertificatePropertiesOptions{
- Version: "myNewVersion",
- CertificateAttributes: &azcertificates.CertificateProperties{
- Enabled: to.BoolPtr(false),
- Expires: to.TimePtr(time.Now().Add(72 * time.Hour)),
- },
- Tags: map[string]string{"Owner": "SRE"},
-})
-if err != nil {
- panic(err)
-}
-```
-
-**Delete a certificate**
-
-```go
-pollerResp, err := client.BeginDeleteCertificate(context.TODO(), "myCertName", nil)
-if err != nil {
- panic(err)
-}
-finalResp, err := pollerResp.PollUntilDone(context.TODO(), time.Second)
-if err != nil {
- panic(err)
-}
-
-fmt.Println("Deleted certificate with ID: ", *finalResp.ID)
-```
-
+See the [module documentation](https://aka.ms/azsdk/go/keyvault-certificates/docs) for more examples.
## Clean up resources
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/keys/quick-create-go.md
ms.devlang: golang
In this quickstart, you'll learn to use the Azure SDK for Go to create, retrieve, update, list, and delete Azure Key Vault keys.
-Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](../general/overview.md).
+Azure Key Vault is a cloud service that works as a secure secrets store. You can securely store keys, passwords, certificates, and other secrets. For more information on Key Vault, you may review the [Overview](../general/overview.md).
-Follow this guide to learn how to use the [azkeys](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/keyvault/azkeys) package to manage your Azure Key Vault keys using Go.
+Follow this guide to learn how to use the [azkeys](https://aka.ms/azsdk/go/keyvault-keys/docs) package to manage your Azure Key Vault keys using Go.
## Prerequisites - An Azure subscription - [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- **Go installed**: Version 1.16 or [above](https://go.dev/dl/)
+- **Go installed**: Version 1.18 or [above](https://go.dev/dl/)
- [Azure CLI](/cli/azure/install-azure-cli)
import (
"fmt" "log" "os"
- "time"
"github.com/Azure/azure-sdk-for-go/sdk/azcore/to" "github.com/Azure/azure-sdk-for-go/sdk/azidentity"
func main() {
} // create azkeys client
- client, err := azkeys.NewClient(keyVaultUrl, cred, nil)
- if err != nil {
- log.Fatalf("failed to connect to client: %v", err)
- }
+ client := azkeys.NewClient(keyVaultUrl, cred, nil)
+ // create RSA Key
- rsaResp, err := client.CreateRSAKey(context.TODO(), "new-rsa-key", &azkeys.CreateRSAKeyOptions{KeySize: to.Int32Ptr(2048)})
+ rsaKeyParams := azkeys.CreateKeyParameters{
+ Kty: to.Ptr(azkeys.JSONWebKeyTypeRSA),
+ KeySize: to.Ptr(int32(2048)),
+ }
+ rsaResp, err := client.CreateKey(context.TODO(), "new-rsa-key", rsaKeyParams, nil)
if err != nil { log.Fatalf("failed to create rsa key: %v", err) }
- fmt.Printf("Key ID: %s: Key Type: %s\n", *rsaResp.Key.ID, *rsaResp.Key.KeyType)
+ fmt.Printf("New RSA key ID: %s\n", *rsaResp.Key.KID)
// create EC Key
- ecResp, err := client.CreateECKey(context.TODO(), "new-ec-key", &azkeys.CreateECKeyOptions{CurveName: azkeys.JSONWebKeyCurveNameP256.ToPtr()})
+ ecKeyParams := azkeys.CreateKeyParameters{
+ Kty: to.Ptr(azkeys.JSONWebKeyTypeEC),
+ Curve: to.Ptr(azkeys.JSONWebKeyCurveNameP256),
+ }
+ ecResp, err := client.CreateKey(context.TODO(), "new-ec-key", ecKeyParams, nil)
if err != nil { log.Fatalf("failed to create ec key: %v", err) }
- fmt.Printf("Key ID: %s: Key Type: %s\n", *ecResp.Key.ID, *ecResp.Key.KeyType)
+ fmt.Printf("New EC key ID: %s\n", *ecResp.Key.KID)
// list all vault keys fmt.Println("List all vault keys:")
- pager := client.ListKeys(nil)
- for pager.NextPage(context.TODO()) {
- for _, key := range pager.PageResponse().Keys {
+ pager := client.NewListKeysPager(nil)
+ for pager.More() {
+ page, err := pager.NextPage(context.TODO())
+ if err != nil {
+ log.Fatal(err)
+ }
+ for _, key := range page.Value {
fmt.Println(*key.KID) } }
- if pager.Err() != nil {
- panic(pager.Err())
- }
-
- //update key properties to disable key
- updateResp, err := client.UpdateKeyProperties(context.TODO(), "new-rsa-key", &azkeys.UpdateKeyPropertiesOptions{
+ // update key properties to disable key
+ updateParams := azkeys.UpdateKeyParameters{
KeyAttributes: &azkeys.KeyAttributes{
- Attributes: azkeys.Attributes{
- Enabled: to.BoolPtr(false),
- },
+ Enabled: to.Ptr(false),
},
- })
- if err != nil {
- panic(err)
}
- fmt.Printf("Key %s Enabled attribute set to: %t\n", *updateResp.Key.ID, *updateResp.Attributes.Enabled)
-
- // delete rsa key
- delResp, err := client.BeginDeleteKey(context.TODO(), "new-rsa-key", nil)
+ // an empty string version updates the latest version of the key
+ version := ""
+ updateResp, err := client.UpdateKey(context.TODO(), "new-rsa-key", version, updateParams, nil)
if err != nil { panic(err) }
- pollResp, err := delResp.PollUntilDone(context.TODO(), 1*time.Second)
- if err != nil {
- panic(err)
+ fmt.Printf("Key %s Enabled attribute set to: %t\n", *updateResp.Key.KID, *updateResp.Attributes.Enabled)
+
+ // delete the created keys
+ for _, keyName := range []string{"new-rsa-key", "new-ec-key"} {
+ // DeleteKey returns when Key Vault has begun deleting the key. That can take several
+ // seconds to complete, so it may be necessary to wait before performing other operations
+ // on the deleted key.
+ delResp, err := client.DeleteKey(context.TODO(), keyName, nil)
+ if err != nil {
+ panic(err)
+ }
+ fmt.Printf("Successfully deleted key %s", *delResp.Key.KID)
}
- fmt.Printf("Successfully deleted key %s", *pollResp.Key.ID)
} ```
Successfully deleted key https://quickstart-kv.vault.azure.net/keys/new-rsa-key
``` > [!NOTE]
-> The output is for informational purposes only. Your returns values may vary based on your Azure subscription and Azure Key Vault.
+> The output is for informational purposes only. Your return values may vary based on your Azure subscription and Azure Key Vault.
## Code examples
-These code examples show how to create, retrieve, list, update key properties, and delete a key from Azure Key Vault.
-
-**Authenticate and create a client**
-
-```go
-cred, err := azidentity.NewDefaultAzureCredential(nil)
-if err != nil {
- log.Fatalf("failed to obtain a credential: %v", err)
-}
-
-client, err := azkeys.NewClient("https://keyVaultName.vault.azure.net/", cred, nil)
-if err != nil {
- log.Fatalf("failed to create a client: %v", err)
-}
-```
-
-If you used a different Key Vault name, replace keyVaultName with your vault's name.
-
-**Create a key**
-
-```go
-//RSA Key
-resp, err := client.CreateRSAKey(context.TODO(), "new-rsa-key", &azkeys.CreateRSAKeyOptions{KeySize: to.Int32Ptr(2048)})
-if err != nil {
-
-}
-fmt.Println(*resp.Key.ID)
-fmt.Println(*resp.Key.KeyType)
-
-//EC key
-resp, err := client.CreateECKey(context.TODO(), "new-ec-key", &azkeys.CreateECKeyOptions{CurveName: azkeys.JSONWebKeyCurveNameP256.ToPtr()})
-if err != nil {
- panic(err)
-}
-fmt.Println(*resp.Key.ID)
-fmt.Println(*resp.Key.KeyType)
-```
-
-**Get a key**
-
-```go
-resp, err := client.GetKey(context.TODO(), "new-rsa-key", nil)
-if err != nil {
- panic(err)
-}
-fmt.Println(*resp.Key.ID)
-```
-
-**List all keys**
-
-```go
-pager := client.ListKeys(nil)
-for pager.NextPage(context.TODO()) {
- for _, key := range pager.PageResponse().Keys {
- fmt.Println(*key.KID)
- }
-}
-
-if pager.Err() != nil {
- panic(pager.Err())
-}
-```
-
-**Update a key properties**
-
-```go
-resp, err := client.UpdateKeyProperties(context.TODO(), "new-rsa-key", &azkeys.UpdateKeyPropertiesOptions{
- KeyAttributes: &azkeys.KeyAttributes{
- Attributes: azkeys.Attributes{
- Enabled: to.BoolPtr(false),
- },
- },
-})
-if err != nil {
- panic(err)
-}
-fmt.Println(*resp.Attributes.Enabled)
-```
-
-**Delete a key**
-
-```go
-resp, err := client.BeginDeleteKey(context.TODO(), "new-rsa-key", nil)
-if err != nil {
- panic(err)
-}
-pollResp, err := resp.PollUntilDone(context.TODO(), 1*time.Second)
-if err != nil {
- panic(err)
-}
-fmt.Printf("Successfully deleted key %s", *pollResp.Key.ID)
-```
-
+See the [module documentation](https://aka.ms/azsdk/go/keyvault-keys/docs) for more examples.
## Clean up resources
key-vault Quick Create Go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-go.md
Title: 'Quickstart: Manage secrets by using the Azure Key Vault Go client library'
-description: Learn how to create, retrieve, and delete secrets from an Azure key vault by using the Go client library.
+description: Learn how to create, retrieve, and delete secrets from an Azure key vault by using the Go client library.
Last updated 12/29/2021
ms.devlang: golang
In this quickstart, you'll learn how to use the Azure SDK for Go to create, retrieve, list, and delete secrets from an Azure key vault.
-You can store a variety of [object types](../general/about-keys-secrets-certificates.md#object-types) in an Azure key vault. When you store secrets in a key vault, you avoid having to store them in your code, which helps improve the security of your applications.
+You can store a variety of [object types](../general/about-keys-secrets-certificates.md#object-types) in an Azure key vault. When you store secrets in a key vault, you avoid having to store them in your code, which helps improve the security of your applications.
-Get started with the [azsecrets](https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets) package and learn how to manage your secrets in an Azure key vault by using Go.
+Get started with the [azsecrets](https://aka.ms/azsdk/go/keyvault-secrets/docs) package and learn how to manage your secrets in an Azure key vault by using Go.
## Prerequisites - An Azure subscription. If you don't already have a subscription, you can [create one for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- [Go version 1.16 or later](https://go.dev/dl/), installed.
+- [Go version 1.18 or later](https://go.dev/dl/), installed.
- [The Azure CLI](/cli/azure/install-azure-cli), installed. ## Setup
go get -u github.com/Azure/azure-sdk-for-go/sdk/keyvault/azsecrets
go get -u github.com/Azure/azure-sdk-for-go/sdk/azidentity ```
-## Code examples
-
-In the following sections, you create a client, set a secret, retrieve a secret, and delete a secret.
-
-### Authenticate and create a client
-
-```go
-vaultURI := os.Getenv("AZURE_KEY_VAULT_URI")
-
-cred, err := azidentity.NewDefaultAzureCredential(nil)
-if err != nil {
- log.Fatalf("failed to obtain a credential: %v", err)
-}
-
-client, err := azsecrets.NewClient(vaultURI, cred, nil)
-if err != nil {
- log.Fatalf("failed to create a client: %v", err)
-}
-```
-
-### Create a secret
-
-```go
-resp, err := client.SetSecret(context.TODO(), "secretName", "secretValue", nil)
-if err != nil {
- log.Fatalf("failed to create a secret: %v", err)
-}
-
-fmt.Printf("Name: %s, Value: %s\n", *resp.ID, *resp.Value)
-```
-
-### Get a secret
-
-```go
-getResp, err := client.GetSecret(context.TODO(), "secretName", nil)
-if err != nil {
- log.Fatalf("failed to get the secret: %v", err)
-}
-
-fmt.Printf("secretValue: %s\n", *getResp.Value)
-```
-
-### List properties of secrets
-
-```go
-pager := client.ListPropertiesOfSecrets(nil)
-for pager.More() {
- page, err := pager.NextPage(context.TODO())
- if err != nil {
- panic(err)
- }
- for _, v := range page.Secrets {
- fmt.Printf("Secret Name: %s\tSecret Tags: %v\n", *v.ID, v.Tags)
- }
-}
-```
-
-### Delete a secret
-
-```go
-respDel, err := client.BeginDeleteSecret(context.TODO(), mySecretName, nil)
-_, err = respDel.PollUntilDone(context.TODO(), time.Second)
-if err != nil {
- log.Fatalf("failed to delete secret: %v", err)
-}
-```
- ## Sample code Create a file named *main.go*, and then paste the following code into it:
import (
) func main() {- mySecretName := "secretName01" mySecretValue := "secretValue" vaultURI := os.Getenv("AZURE_KEY_VAULT_URI")
- //Create a credential using the NewDefaultAzureCredential type.
+ // Create a credential using the NewDefaultAzureCredential type.
cred, err := azidentity.NewDefaultAzureCredential(nil) if err != nil { log.Fatalf("failed to obtain a credential: %v", err) }
- //Establish a connection to the Key Vault client
- client, err := azsecrets.NewClient(vaultURI, cred, nil)
- if err != nil {
- log.Fatalf("failed to connect to client: %v", err)
- }
+ // Establish a connection to the Key Vault client
+ client := azsecrets.NewClient(vaultURI, cred, nil)
- //Create a secret
- _, err = client.SetSecret(context.TODO(), mySecretName, mySecretValue, nil)
+ // Create a secret
+ params := azsecrets.SetSecretParameters{Value: &mySecretValue}
+ _, err = client.SetSecret(context.TODO(), mySecretName, params, nil)
if err != nil { log.Fatalf("failed to create a secret: %v", err) }
- //Get a secret
- resp, err := client.GetSecret(context.TODO(), mySecretName, nil)
+ // Get a secret. An empty string version gets the latest version of the secret.
+ version := ""
+ resp, err := client.GetSecret(context.TODO(), mySecretName, version, nil)
if err != nil { log.Fatalf("failed to get the secret: %v", err) } fmt.Printf("secretValue: %s\n", *resp.Value)
- //List secrets
- pager := client.ListSecrets(nil)
- for pager.NextPage(context.TODO()) {
- resp := pager.PageResponse()
- for _, secret := range resp.Secrets {
+ // List secrets
+ pager := client.NewListSecretsPager(nil)
+ for pager.More() {
+ page, err := pager.NextPage(context.TODO())
+ if err != nil {
+ log.Fatal(err)
+ }
+ for _, secret := range page.Value {
fmt.Printf("Secret ID: %s\n", *secret.ID) } }
- if pager.Err() != nil {
- log.Fatalf("failed to get list secrets: %v", err)
- }
-
- //Delete a secret
- respDel, err := client.BeginDeleteSecret(context.TODO(), mySecretName, nil)
- _, err = respDel.PollUntilDone(context.TODO(), time.Second)
+ // Delete a secret. DeleteSecret returns when Key Vault has begun deleting the secret.
+ // That can take several seconds to complete, so it may be necessary to wait before
+ // performing other operations on the deleted secret.
+ delResp, err := client.DeleteSecret(context.TODO(), mySecretName, nil)
if err != nil { log.Fatalf("failed to delete secret: %v", err) }
- fmt.Println(mySecretName + " has been deleted\n")
+ fmt.Println(delResp.ID.Name() + " has been deleted")
} ```
func main() {
quickstart-secret has been deleted ```
+## Code examples
+
+See the [module documentation](https://aka.ms/azsdk/go/keyvault-secrets/docs) for more examples.
+ ## Clean up resources Delete the resource group and all its remaining resources by running the following command:
lab-services Quick Create Lab Plan Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-python.md
+
+ Title: Azure Lab Services quickstart - Create a lab plan using Python
+description: In this quickstart, you learn how to create an Azure Lab Services lab plan using Python and the Azure Python SDK.
++ Last updated : 02/15/2022+++
+# Quickstart: Create a lab plan using Python and the Azure libraries (SDK) for Python
+
+In this article, you, as the admin, use Python and the Azure Python SDK to create a lab plan. Lab plans are used when creating labs for Azure Lab Services. You'll also add a role assignment so an educator can create labs based on the lab plan. For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).
+- [Set up a local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).
+- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/RogerBestMsft/azure-samples-python-management/blob/rbest_ALSSample/samples/labservices/requirements.txt)
+
+## Create a lab plan
+
+The following steps will show you how to create a lab plan. Any properties set in the lab plan will be used in labs created with this plan.
+
+```python
+# --
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for
+# license information.
+# --
+
+import os
+import time
+from datetime import timedelta
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.labservices import LabServicesClient
+from azure.mgmt.resource import ResourceManagementClient
+
+def main():
+
+ SUBSCRIPTION_ID = "<Subscription ID>"
+ TIME = str(time.time()).replace('.','')
+ GROUP_NAME = "BellowsCollege_rg"
+ LABPLAN = "BellowsCollege_labplan"
+ LAB = "BellowsCollege_lab"
+ LOCATION = 'southcentralus'
+
+ # Create clients
+    # For other authentication approaches, please see: https://pypi.org/project/azure-identity/
+ resource_client = ResourceManagementClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ labservices_client = LabServicesClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ # Create resource group
+ resource_client.resource_groups.create_or_update(
+ GROUP_NAME,
+ {"location": LOCATION}
+ )
+
+ # Create lab services lab plan
+ LABPLANBODY = {
+ "location" : LOCATION,
+ "properties" : {
+ "defaultConnectionProfile" : {
+ "webSshAccess" : "None",
+ "webRdpAccess" : "None",
+ "clientSshAccess" : "None",
+ "clientRdpAccess" : "Public"
+ },
+ "defaultAutoShutdownProfile" : {
+ "shutdownOnDisconnect" : "Disabled",
+ "shutdownWhenNotConnected" : "Disabled",
+ "shutdownOnIdle" : "None"
+ },
+ "allowedRegions" : [LOCATION],
+ "supportInfo" : {
+ "email" : "user@bellowscollege.com",
+ "phone" : "123-123-1234",
+ "instructions" : "Bellows College support."
+ }
+ }
+ }
+
+ #Create Lab Plan
+ poller = labservices_client.lab_plans.begin_create_or_update(
+ GROUP_NAME,
+ LABPLAN,
+ LABPLANBODY
+ )
+
+ # Poll for long running execution.
+ labplan_result = poller.result()
+ print(f"Created Lab Plan: {labplan_result.name}")
+
+ # Get LabServices Lab Plans by resource group
+ labservices_client.lab_plans.list_by_resource_group(
+ GROUP_NAME
+ )
+
+ #Get single LabServices Lab Plan
+ labservices_labplan = labservices_client.lab_plans.get(GROUP_NAME, LABPLAN)
+
+if __name__ == "__main__":
+ main()
+```
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete the lab and resource group with the following code:
+
+```python
+# --
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for
+# license information.
+# --
+
+from datetime import timedelta
+import time
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.labservices import LabServicesClient
+from azure.mgmt.resource import ResourceManagementClient
+
+
+def main():
+
+ SUBSCRIPTION_ID = "<Subscription ID>"
+ TIME = str(time.time()).replace('.','')
+    GROUP_NAME = "BellowsCollege_rg"
+    LABPLAN = "BellowsCollege_labplan"
+    LAB = "BellowsCollege_lab"
+ LOCATION = 'southcentralus'
+
+ # Create clients
+    # For other authentication approaches, please see: https://pypi.org/project/azure-identity/
+ resource_client = ResourceManagementClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ labservices_client = LabServicesClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ # Delete Lab
+ labservices_client.labs.begin_delete(
+ GROUP_NAME,
+ LAB
+ ).result()
+ print("Deleted lab.\n")
+
+ # Delete Group
+ resource_client.resource_groups.begin_delete(
+ GROUP_NAME
+ ).result()
++
+if __name__ == "__main__":
+ main()
+```
+## Next steps
+
+In this quickstart, you created a resource group and a lab plan. As an admin, you can learn more about the [Azure PowerShell module](/powershell/azure) and [Az.LabServices cmdlets](/powershell/module/az.labservices/).
+
+> [!div class="nextstepaction"]
+> [Quickstart: Create a lab using Python and the Azure Python SDK](quick-create-lab-python.md)
lab-services Quick Create Lab Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-python.md
+
+ Title: Azure Lab Services quickstart - Create a lab using Python
+description: In this quickstart, you learn how to create an Azure Lab Services lab using Python and the Azure Python libraries (SDK).
++ Last updated : 02/15/2022+++
+# Quickstart: Create a lab using Python and the Azure Python libraries (SDK)
+
+In this quickstart, you, as the educator, create a lab using Python and the Azure Python libraries (SDK). The lab will use the settings from a previously created lab plan. For a detailed overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
+
+## Prerequisites
+
+- Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.
+- [Set up a local Python dev environment for Azure](/azure/developer/python/configure-local-development-environment).
+- [The requirements.txt can be downloaded from the Azure Python samples](https://github.com/RogerBestMsft/azure-samples-python-management/blob/rbest_ALSSample/samples/labservices/requirements.txt)
+- Lab plan. To create a lab plan, see [Quickstart: Create a lab plan using Python and the Azure Python libraries (SDK)](quick-create-lab-plan-python.md).
+
+## Create a lab
+
+Before we can create a lab, we need the lab plan object. In the [previous quickstart](quick-create-lab-plan-python.md), we created a lab plan named `BellowsCollege_labplan` in a resource group named `BellowsCollege_rg`.
+
+```python
+# --
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for
+# license information.
+# --
+
+from datetime import timedelta
+import time
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.labservices import LabServicesClient
+from azure.mgmt.resource import ResourceManagementClient
+
+def main():
+
+ SUBSCRIPTION_ID = "<Subscription ID>"
+ TIME = str(time.time()).replace('.','')
+ GROUP_NAME = "BellowsCollege_rg"
+ LABPLAN = "BellowsCollege_labplan"
+ LAB = "BellowsCollege_lab"
+ LOCATION = 'southcentralus'
+
+ # Create clients
+    # For other authentication approaches, please see: https://pypi.org/project/azure-identity/
+ resource_client = ResourceManagementClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ labservices_client = LabServicesClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ #Get single LabServices Lab Plan
+ labservices_labplan = labservices_client.lab_plans.get(GROUP_NAME, LABPLAN)
+
+ print("Get lab plans")
+ print(labservices_labplan)
+
+ #Get image information
+ LABIMAGES = labservices_client.images.list_by_lab_plan(GROUP_NAME,LABPLAN)
+ image = (list(filter(lambda x: (x.name == "microsoftwindowsdesktop.windows-11.win11-21h2-pro"), LABIMAGES)))
+
+ #Get lab quota
+ USAGEQUOTA = timedelta(hours=10)
+
+ # Password
+ CUSTOMPASSWORD = "<custom password>"
+ # Create LabServices Lab
+ LABBODY = {
+ "name": LAB,
+ "location" : LOCATION,
+ "properties" : {
+ "networkProfile": {},
+ "connectionProfile" : {
+ "webSshAccess" : "None",
+ "webRdpAccess" : "None",
+ "clientSshAccess" : "None",
+ "clientRdpAccess" : "Public"
+ },
+ "AutoShutdownProfile" : {
+ "shutdownOnDisconnect" : "Disabled",
+ "shutdownWhenNotConnected" : "Disabled",
+ "shutdownOnIdle" : "None"
+ },
+ "virtualMachineProfile" : {
+ "createOption" : "TemplateVM",
+ "imageReference" : {
+ "offer": image[0].offer,
+ "publisher": image[0].publisher,
+ "sku": image[0].sku,
+ "version": image[0].version
+ },
+ "sku" : {
+ "name" : "Classic_Fsv2_2_4GB_128_S_SSD",
+ "capacity" : 2
+ },
+ "additionalCapabilities" : {
+ "installGpuDrivers" : "Disabled"
+ },
+ "usageQuota" : USAGEQUOTA,
+ "UseSharedPassword" : "Enabled",
+ "adminUser" : {
+ "username" : "testuser",
+ "password" : CUSTOMPASSWORD
+ }
+ },
+ "securityProfile" : {
+ "openAccess" : "Disabled"
+ },
+ "rosterProfile" : {},
+ "labPlanId" : labservices_labplan.id,
+ "title" : "lab-python",
+ "description" : "lab 99 description updated"
+ }
+ }
+
+ poller = labservices_client.labs.begin_create_or_update(
+ GROUP_NAME,
+ LAB,
+ LABBODY
+ )
+
+ lab_result = poller.result()
+ print(f"Created Lab {lab_result.name}")
+
+ # Get LabServices Labs
+ labservices_lab = labservices_client.labs.get(GROUP_NAME,LAB)
+ print("Get lab:\n{}".format(labservices_lab))
+
++
+if __name__ == "__main__":
+ main()
++
+```
+
+## Clean up resources
+
+If you're not going to continue to use this application, delete
+the group and lab with the following steps:
+
+```python
+# --
+# Copyright (c) Microsoft Corporation. All rights reserved.
+# Licensed under the MIT License. See License.txt in the project root for
+# license information.
+# --
+
+from datetime import timedelta
+import time
+from azure.identity import DefaultAzureCredential
+from azure.mgmt.labservices import LabServicesClient
+from azure.mgmt.resource import ResourceManagementClient
+
+
+def main():
+
+ SUBSCRIPTION_ID = "<Subscription ID>"
+ TIME = str(time.time()).replace('.','')
+ GROUP_NAME = "BellowsCollege_rg"
+ LABPLAN = "BellowsCollege_labplan"
+ LAB = "BellowsCollege_lab"
+ LOCATION = 'southcentralus'
+
+ # Create clients
+ # For other authentication approaches, please see: https://pypi.org/project/azure-identity/
+ resource_client = ResourceManagementClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ labservices_client = LabServicesClient(
+ credential=DefaultAzureCredential(),
+ subscription_id=SUBSCRIPTION_ID
+ )
+
+ # Delete Lab
+ labservices_client.labs.begin_delete(
+ GROUP_NAME,
+ LAB
+ ).result()
+ print("Deleted lab.\n")
+
+ # Delete Group
+ resource_client.resource_groups.begin_delete(
+ GROUP_NAME
+ ).result()
+
+if __name__ == "__main__":
+ main()
+```
+
+## Next steps
+
+As an admin, you can learn more about [Azure PowerShell module](/powershell/azure) and [Az.LabServices cmdlets](/powershell/module/az.labservices/).
load-testing How To Define Test Criteria https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-define-test-criteria.md
By defining test criteria, you can specify the performance expectations of your
## Load test pass/fail criteria
-This section discusses the syntax you use to define Azure Load Testing pass/fail criteria.
+This section discusses the syntax of Azure Load Testing pass/fail criteria. When a criterion evaluates to `true`, the load test gets the *failed* status.
-You use `Aggregate_function (client_metric) condition value` syntax. When a criterion evaluates to `true`, the load test gets the *failed* status.
+The structure of a pass/fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`.
+
+The following table describes the different components:
|Parameter |Description | |||
+|`Request` | *Optional.* Name of the sampler in the JMeter script to which the criterion applies. If you don't specify a request name, the criterion applies to the aggregate of all the requests in the script. |
|`Client metric` | *Required.* The client metric on which the criteria should be applied. | |`Aggregate function` | *Required.* The aggregate function to be applied on the client metric. | |`Condition` | *Required.* The comparison operator. |
-|`Threshold` | *Required.* The numeric value to compare with the client metric.<BR>The threshold evaluates against the aggregated value. |
+|`Threshold` | *Required.* The numeric value to compare with the client metric. |
-Load Testing supports the following combination of parameters:
+Azure Load Testing supports the following metrics:
|Metric |Aggregate function |Threshold |Condition | |||||
-|`response_time_ms` | `avg` (average) | Integer value, representing number of milliseconds (ms) | `>` (greater than) |
-|`error` | `percentage` | Numerical values in the range 0-100, representing a percentage | `>` (greater than) |
+|`response_time_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) |
+|`latency_ms` | `avg` (average)<BR> `min` (minimum)<BR> `max` (maximum)<BR> `pxx` (percentile), xx can be 50, 90, 95, 99 | Integer value, representing number of milliseconds (ms). | `>` (greater than)<BR> `<` (less than) |
+|`error` | `percentage` | Numerical value in the range 0-100, representing a percentage. | `>` (greater than) <BR> `<` (less than) |
+|`requests_per_sec` | `avg` (average) | Numerical value with up to two decimal places. | `>` (greater than) <BR> `<` (less than) |
+|`requests` | `count` | Integer value. | `>` (greater than) <BR> `<` (less than) |
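As an illustrative sketch (the request name `GetCustomerDetails` is hypothetical), these parameters combine in a test configuration like this:

```yaml
failureCriteria:
  - avg(response_time_ms) > 300
  - p95(response_time_ms) > 500
  - percentage(error) > 20
  - GetCustomerDetails: avg(latency_ms) > 200
```

The first three criteria apply to the aggregate of all requests; the last applies only to the named sampler.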
## Define test pass/fail criteria in the Azure portal
In this section, you learn how to define load test pass/fail criteria for contin
failureCriteria:     - avg(response_time_ms) > 300     - percentage(error) > 20
+ - GetCustomerDetails: avg(latency_ms) > 200
``` 1. Save the YAML configuration file.
load-testing Reference Test Config Yaml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/reference-test-config-yaml.md
A test configuration uses the following keys:
| `engineInstances` | integer | | *Required*. Number of parallel instances of the test engine to execute the provided test plan. You can update this property to increase the amount of load that the service can generate. | | `configurationFiles` | array | | List of relevant configuration files or other files that you reference in the Apache JMeter script. For example, a CSV data set file, images, or any other data file. These files will be uploaded to the Azure Load Testing resource alongside the test script. If the files are in a subfolder on your local machine, use file paths that are relative to the location of the test script. <BR><BR>Azure Load Testing currently doesn't support the use of file paths in the JMX file. When you reference an external file in the test script, make sure to only specify the file name. | | `description` | string | | Short description of the test run. |
-| `failureCriteria` | object | | Criteria that indicate failure of the test. Each criterion is in the form of:<BR>`[Aggregate_function] ([client_metric]) > [value]`<BR><BR>- `[Aggregate function] ([client_metric])` is either `avg(response_time_ms)` or `percentage(error).`<BR>- `value` is an integer number. |
+| `failureCriteria` | object | | Criteria that indicate when a test should fail. The structure of a pass/fail criterion is: `Request: Aggregate_function (client_metric) condition threshold`. For more information on the supported values, see [Load test pass/fail criteria](./how-to-define-test-criteria.md#load-test-passfail-criteria). |
| `properties` | object | | List of properties to configure the load test. | | `properties.userPropertyFile` | string | | File to use as an Apache JMeter [user properties file](https://jmeter.apache.org/usermanual/test_plan.html#properties). The file will be uploaded to the Azure Load Testing resource alongside the JMeter test script and other configuration files. If the file is in a subfolder on your local machine, use a path relative to the location of the test script. | | `splitAllCSVs` | boolean | False | Split the input CSV files evenly across all test engine instances. For more information, see [Read a CSV file in load tests](./how-to-read-csv-data.md#split-csv-input-data-across-test-engines). |
configurationFiles:
failureCriteria: - avg(response_time_ms) > 300 - percentage(error) > 50
+ - GetCustomerDetails: avg(latency_ms) > 200
splitAllCSVs: True env: - name: my-variable
logic-apps Create Managed Service Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-managed-service-identity.md
The following table lists the operations where you can use either the system-ass
| Operation type | Supported operations | |-|-| | Built-in | - Azure API Management <br>- Azure App Services <br>- Azure Functions <br>- HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. However, they don't support the user-assigned managed identity for authenticating the same connections. |
-| Managed connector | Single-authentication: <br>- Azure Automation <br>- Azure Event Grid <br>- Azure Key Vault <br>- Azure Resource Manager <br>- HTTP with Azure AD <p>Multi-authentication: <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- SQL Server |
+| Managed connector | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure Key Vault <br>- Azure Resource Manager <br>- Azure Service Bus <br>- HTTP with Azure AD <br>- SQL Server |
||| ### [Standard](#tab/standard)
The following table lists the operations where you can use both the system-assig
| Operation type | Supported operations | |-|-| | Built-in | - HTTP <br>- HTTP + Webhook <p>**Note**: HTTP operations can authenticate connections to Azure Storage accounts behind Azure firewalls with the system-assigned identity. |
-| Managed connector | Single-authentication: <br>- Azure Automation <br>- Azure Event Grid <br>- Azure Key Vault <br>- Azure Resource Manager <br>- HTTP with Azure AD <p>Multi-authentication: <br>- Azure Blob Storage <br>- Azure Event Hubs <br>- Azure Service Bus <br>- SQL Server |
+| Managed connector | - Azure Automation <br>- Azure Blob Storage <br>- Azure Event Grid <br>- Azure Event Hubs <br>- Azure Key Vault <br>- Azure Resource Manager <br>- Azure Service Bus <br>- HTTP with Azure AD <br>- SQL Server |
|||
machine-learning Concept Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow-models.md
name: mlflow-env
### Model's predict function
-All MLflow models contain a `predict` function. This function is the one that is called when a model is deployed using a no-code-deployment experience. What the `predict` function returns (classes, probabilities, a forecast, etc.) depend on the framework (i.e. flavor) used for training. Read the documentation of each flavor to know what they return.
+All MLflow models contain a `predict` function. **This function is the one that is called when a model is deployed using a no-code-deployment experience**. What the `predict` function returns (classes, probabilities, a forecast, etc.) depends on the framework (that is, the flavor) used for training. Read the documentation of each flavor to know what it returns.
In some cases, you may need to customize this function to change the way inference is executed. In those cases, you will need to [log models with a different behavior in the predict method](how-to-log-mlflow-models.md#logging-models-with-a-different-behavior-in-the-predict-method) or [log a custom model's flavor](how-to-log-mlflow-models.md#logging-custom-models).
+## Loading MLflow models back
+
+Models created as MLflow models can be loaded back directly from the run where they were logged, from the file system where they are saved or from the model registry where they are registered. MLflow provides a consistent way to load those models regardless of the location.
+
+There are two workflows available for loading models:
+
+* **Loading back the same object and types that were logged:** You can load models using the MLflow SDK and obtain an instance of the model with types belonging to the training library. For instance, an ONNX model returns a `ModelProto`, while a decision tree trained with scikit-learn returns a `DecisionTreeClassifier` object. Use `mlflow.<flavor>.load_model()` to do so.
+* **Loading back a model for running inference:** You can load models using the MLflow SDK and obtain a wrapper where MLflow guarantees there will be a `predict` function. It doesn't matter which flavor you are using; every MLflow model needs to implement this contract. Furthermore, MLflow guarantees that this function can be called using arguments of type `pandas.DataFrame`, `numpy.ndarray`, or `dict[str, numpy.ndarray]` (depending on the signature of the model). MLflow handles the type conversion to the input type the model actually expects. Use `mlflow.pyfunc.load_model()` to do so.
+ ## Start logging models We recommend you start taking advantage of MLflow models in Azure Machine Learning. There are different ways to start using the model concept with MLflow. Read [How to log MLflow models](how-to-log-mlflow-models.md) for a comprehensive guide.
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
import mlflow
from xgboost import XGBClassifier from sklearn.metrics import accuracy_score
-with mlflow.start_run():
- mlflow.autolog()
+mlflow.autolog()
- model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
- model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
- y_pred = model.predict(X_test)
+model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
- accuracy = accuracy_score(y_test, y_pred)
+y_pred = model.predict(X_test)
+accuracy = accuracy_score(y_test, y_pred)
``` > [!TIP]
from sklearn.metrics import accuracy_score
from mlflow.models import infer_signature from mlflow.utils.environment import _mlflow_conda_env
-with mlflow.start_run():
- mlflow.autolog(log_models=False)
-
- model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
- model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
- y_pred = model.predict(X_test)
-
- accuracy = accuracy_score(y_test, y_pred)
-
- # Signature
- signature = infer_signature(X_test, y_test)
-
- # Conda environment
- custom_env =_mlflow_conda_env(
- additional_conda_deps=None,
- additional_pip_deps=["xgboost==1.5.2"],
- additional_conda_channels=None,
- )
-
- # Sample
- input_example = X_train.sample(n=1)
-
- # Log the model manually
- mlflow.xgboost.log_model(model,
- artifact_path="classifier",
- conda_env=custom_env,
- signature=signature,
- input_example=input_example)
+mlflow.autolog(log_models=False)
+
+model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+y_pred = model.predict(X_test)
+
+accuracy = accuracy_score(y_test, y_pred)
+
+# Signature
+signature = infer_signature(X_test, y_test)
+
+# Conda environment
+custom_env =_mlflow_conda_env(
+ additional_conda_deps=None,
+ additional_pip_deps=["xgboost==1.5.2"],
+ additional_conda_channels=None,
+)
+
+# Sample
+input_example = X_train.sample(n=1)
+
+# Log the model manually
+mlflow.xgboost.log_model(model,
+ artifact_path="classifier",
+ conda_env=custom_env,
+ signature=signature,
+ input_example=input_example)
``` > [!NOTE]
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score from mlflow.models import infer_signature
-with mlflow.start_run():
- mlflow.xgboost.autolog(log_models=False)
+mlflow.xgboost.autolog(log_models=False)
- model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
- model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
- y_probs = model.predict_proba(X_test)
+model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+y_probs = model.predict_proba(X_test)
- accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
- mlflow.log_metric("accuracy", accuracy)
+accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
+mlflow.log_metric("accuracy", accuracy)
- signature = infer_signature(X_test, y_probs)
- mlflow.pyfunc.log_model("classifier",
- python_model=ModelWrapper(model),
- signature=signature)
+signature = infer_signature(X_test, y_probs)
+mlflow.pyfunc.log_model("classifier",
+ python_model=ModelWrapper(model),
+ signature=signature)
``` > [!TIP]
from sklearn.preprocessing import OrdinalEncoder
from sklearn.metrics import accuracy_score from mlflow.models import infer_signature
-with mlflow.start_run():
- mlflow.xgboost.autolog(log_models=False)
-
- encoder = OrdinalEncoder(handle_unknown='ignore')
- X_train['thal'] = enc.fit_transform(X_train['thal'])
- X_test['thal'] = enc.transform(X_test['thal'])
-
- model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
- model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
- y_probs = model.predict_proba(X_test)
-
- accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
- mlflow.log_metric("accuracy", accuracy)
-
- encoder_path = 'encoder.pkl'
- joblib.dump(encoder, encoder_path)
- model_path = "xgb.model"
- model.save_model(model_path)
-
- signature = infer_signature(X, y_probs)
- mlflow.pyfunc.log_model("classifier",
- python_model=ModelWrapper(),
- artifacts={
- 'encoder': encoder_path,
- 'model': model_path
- },
- signature=signature)
+mlflow.xgboost.autolog(log_models=False)
+
+encoder = OrdinalEncoder(handle_unknown='use_encoded_value', unknown_value=-1)
+X_train['thal'] = encoder.fit_transform(X_train[['thal']])
+X_test['thal'] = encoder.transform(X_test[['thal']])
+
+model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+y_probs = model.predict_proba(X_test)
+
+accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
+mlflow.log_metric("accuracy", accuracy)
+
+encoder_path = 'encoder.pkl'
+joblib.dump(encoder, encoder_path)
+model_path = "xgb.model"
+model.save_model(model_path)
+
+signature = infer_signature(X_test, y_probs)
+mlflow.pyfunc.log_model("classifier",
+ python_model=ModelWrapper(),
+ artifacts={
+ 'encoder': encoder_path,
+ 'model': model_path
+ },
+ signature=signature)
``` # [Using a model loader](#tab/loader)
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score from mlflow.models import infer_signature
-with mlflow.start_run():
- mlflow.xgboost.autolog(log_models=False)
+mlflow.xgboost.autolog(log_models=False)
- model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
- model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
- y_probs = model.predict_proba(X_test)
+model = XGBClassifier(use_label_encoder=False, eval_metric="logloss")
+model.fit(X_train, y_train, eval_set=[(X_test, y_test)], verbose=False)
+y_probs = model.predict_proba(X_test)
- accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
- mlflow.log_metric("accuracy", accuracy)
+accuracy = accuracy_score(y_test, y_probs.argmax(axis=1))
+mlflow.log_metric("accuracy", accuracy)
- model_path = "xgb.model"
- model.save_model(model_path)
+model_path = "xgb.model"
+model.save_model(model_path)
- signature = infer_signature(X_test, y_probs)
- mlflow.pyfunc.log_model("classifier",
- data_path=model_path,
- code_path=["loader_module.py"],
- loader_module="loader_module",
- signature=signature)
+signature = infer_signature(X_test, y_probs)
+mlflow.pyfunc.log_model("classifier",
+ data_path=model_path,
+ code_path=["loader_module.py"],
+ loader_module="loader_module",
+ signature=signature)
```
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
mlflow.register_model(f"file://{model_local_path}", "local-model-test")
> [!NOTE] > Notice how the model URI scheme `file:/` requires absolute paths.
-## Querying models
+## Querying model registries
### Querying all the models in the registry
If you need a specific version of the model, you can indicate so:
client.get_model_version(model_name, version=2) ```
+## Loading models from registry
+
+You can load models directly from the registry to restore the model objects that were logged. Use the functions `mlflow.<flavor>.load_model()` or `mlflow.pyfunc.load_model()`, indicating the URI of the model you want to load with the following syntax:
+
+* `models:/<model-name>/latest`, to load the latest version of the model.
+* `models:/<model-name>/<version-number>`, to load a specific version of the model.
+* `models:/<model-name>/<stage-name>`, to load a specific version in a given stage for a model. View [Model stages](#model-stages) for details.
+
+> [!TIP]
+> To learn about the difference between `mlflow.<flavor>.load_model()` and `mlflow.pyfunc.load_model()`, see [Loading MLflow models back](concept-mlflow-models.md#loading-mlflow-models-back).
+ ## Model stages MLflow supports model's stages to manage model's lifecycle. Model's version can transition from one stage to another. Stages are assigned to a model's version (instead of models) which means that a given model can have multiple versions on different stages.
machine-learning How To Track Experiments Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-track-experiments-mlflow.md
MLflow also allows you to both operations at once and download and load the mode
model = mlflow.xgboost.load_model(f"runs:/{last_run.info.run_id}/{artifact_path}") ```
+> [!TIP]
+> You can also load models from the registry using MLflow. See [Loading models from registry](how-to-manage-models-mlflow.md#loading-models-from-registry) for details.
+ ## Getting child (nested) runs MLflow supports the concept of child (nested) runs. They are useful when you need to spin off training routines requiring being tracked independently from the main training process. This is the typical case of hyper-parameter tuning for instance. You can query all the child runs of a specific run using the property tag `mlflow.parentRunId`, which contains the run ID of the parent run.
machine-learning How To Use Mlflow Cli Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-cli-runs.md
You can get the Azure ML MLflow tracking URI using the [Azure Machine Learning S
from azure.identity import DefaultAzureCredential import mlflow
- ml_client = MLClient.from_config(credential=DefaultAzureCredential()
+ ml_client = MLClient.from_config(credential=DefaultAzureCredential())
azureml_mlflow_uri = ml_client.workspaces.get(ml_client.workspace_name).mlflow_tracking_uri mlflow.set_tracking_uri(azureml_mlflow_uri) ```
machine-learning How To Use Sweep In Pipeline https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-sweep-in-pipeline.md
Assume you already have a command component defined in `train.yaml`. A two-step
:::code language="yaml" source="~/azureml-examples-main/cli/jobs/pipelines-with-components/pipeline_with_hyperparameter_sweep/pipeline.yml" highlight="7-48":::
-The `sweep_step` is the step for hyperparameter tuning. Its type needs to be `sweep`. And `trial` refers to the command component defined in `train.yaml`. From the `search sapce` field we can see three hyparmeters (`c_value`, `kernel`, and `coef`) are added to the search space. After you submit this pipeline job, Azure Machine Learning will run the trial component multiple times to sweep over hyperparameters based on the search space and terminate policy you defined in `sweep_step`. Check [sweep job YAML schema](reference-yaml-job-sweep.md) for full schema of sweep job.
+The `sweep_step` is the step for hyperparameter tuning. Its type needs to be `sweep`, and `trial` refers to the command component defined in `train.yaml`. From the `search space` field, we can see three hyperparameters (`c_value`, `kernel`, and `coef`) are added to the search space. After you submit this pipeline job, Azure Machine Learning will run the trial component multiple times to sweep over hyperparameters based on the search space and termination policy you defined in `sweep_step`. Check the [sweep job YAML schema](reference-yaml-job-sweep.md) for the full schema of a sweep job.
Below is the trial component definition (train.yml file).
managed-grafana How To Data Source Plugins Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md
Other data sources include:
- [TestData DB](https://grafana.com/docs/grafana/latest/datasources/testdata/) - [Zipkin](https://grafana.com/docs/grafana/latest/datasources/zipkin/)
-You can find all available Grafana data sources by going to your resource and selecting this page from the left menu: **Configuration** > **Data sources** > **Add a data source** . Search for the data source you need from the available list. For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
+You can find all available Grafana data sources by going to your resource and selecting **Configuration** > **Data sources** from the left menu. Search for the data source you need from the available list and select **Add data source**.
:::image type="content" source="media/managed-grafana-how-to-source-plugins.png" alt-text="Screenshot of the Add data source page.":::
+> [!NOTE]
+> Installing Grafana plugins listed on the page **Configuration** > **Plugins** isn't currently supported.
+
+For more information about data sources, go to [Data sources](https://grafana.com/docs/grafana/latest/datasources/) on the Grafana Labs website.
+ ## Default configuration for Azure Monitor The Azure Monitor data source is automatically added to all new Managed Grafana resources. To review or modify its configuration, follow these steps in your Managed Grafana endpoint:
The Azure Monitor data source is automatically added to all new Managed Grafana
:::image type="content" source="media/managed-grafana-how-to-source-configuration.png" alt-text="Screenshot of the Add data sources page.":::
-1. Azure Monitor should be listed as a built-in data source for your Managed Grafana instance. Select **Azure Monitor**.
+1. Azure Monitor is listed as a built-in data source for your Managed Grafana instance. Select **Azure Monitor**.
1. In **Settings**, authenticate through **Managed Identity** and select your subscription from the dropdown list or enter your **App Registration** details :::image type="content" source="media/managed-grafana-how-to-source-configuration-Azure-Monitor-settings.png" alt-text="Screenshot of the Azure Monitor page in data sources.":::
managed-grafana Troubleshoot Managed Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/troubleshoot-managed-grafana.md
Previously updated : 06/16/2022 Last updated : 07/06/2022 # Troubleshoot issues for Azure Managed Grafana
One or several Managed Grafana dashboard panels show no data.
Context: Grafana dashboards are set up to fetch new data periodically. If the dashboard is refreshed too often for the underlying query to load, the panel will be stuck without ever being able to load and display data.
-1. Check how frequently the dashboard is configured to refresh data?
+1. Check how frequently the dashboard is configured to refresh data.
1. In your dashboard, go to **Dashboard settings**. 1. In the general settings, lower the **Auto refresh** rate of the dashboard to be no faster than the time the query takes to load. 1. When a query takes too long to retrieve data. Grafana will automatically time out certain dependency calls that take longer than, for example, 30 seconds. Check that there are no unusual slow-downs on the query's end.
+## General issues with data sources
+
+The user can't connect to a data source, or a data source cannot fetch data.
+
+### Solution: review network settings and IP address
+
+To troubleshoot this issue:
+
+1. Check the network setting of the data source server. There should be no firewall blocking Grafana from accessing it.
+1. Check that the data source isn't trying to connect to a private IP address. Azure Managed Grafana doesn't currently support connections to private networks.
+ ## Azure Monitor can't fetch data Every Grafana instance comes pre-configured with an Azure Monitor data source. When trying to use a pre-provisioned dashboard, the user finds that the Azure Monitor data source can't fetch data.
marketplace Marketplace Apis Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/marketplace-apis-guide.md
Title: Align your business with our eCommerce platform and Azure Marketplace
-description: Align your business with our eCommerce platform (Azure Marketplace).
+ Title: Align your business with our e-commerce platform and Azure Marketplace.
+description: Align your business with our e-commerce platform (Azure Marketplace).
Last updated 06/21/2022
# Align your business with our e-commerce platform
-This article describes how the commercial marketplace User Interface (UI) and programmatic Application Programming Interfaces (APIs) combine to support your business processes. The links under the API point to the specific interfaces developers can use to integrate their CRM system with the commercial marketplace.
+This article describes how the commercial marketplace user interface (UI) and programmatic application programming interfaces (APIs) combine to support your business processes.
## Overview of activities
-The activities below are not sequential. The activity you use is dependent on your business needs and sales processes. This guide shows how to integrate different APIs to automate each activity.
+The following guide shows how to integrate different APIs to automate each activity. The links in the API column point to the specific interfaces that developers can use to integrate their customer relationship management (CRM) system with the commercial marketplace. The activity that you use depends on your business needs and sales processes.
-| <center>Activity | ISV sales activities | Corresponding Marketplace API | Corresponding Marketplace UI |
+| <center>Activity | ISV sales activities | Corresponding marketplace API | Corresponding marketplace UI |
| | | | |
-| <center>**1. Product Marketing**<br><img src="medi)</ul> | Create product messaging, positioning, promotion, pricing<br>Partner Center (PC) → Offer Creation |
-| <center>**2. Demand Generation**<br><img src="medi)<br>[Co-Sell Connector for SalesForce CRM](/partner-center/connector-salesforce)<br>[Co-Sell Connector for Dynamics 365 CRM](/partner-center/connector-dynamics) | Product Promotion<br>Lead nurturing<br>Eval, trial & PoC<br>Azure Marketplace and AppSource<br>PC Marketplace Insights<br>PC Co-Sell Opportunities |
-| <center>**3. Negotiation and Quote Creation**<br><img src="medi)<br>[Partner Center '7' API Family](/partner-center/) | T&Cs<br>Pricing<br>Discount approvals<br>Final quote<br>PC → Plans (public or private) |
-| <center>**4. Sale**<br><img src="medi)<br>[Reporting APIs](https://partneranalytics-api.azureedge.net/partneranalytics-api/Programmatic%20Access%20to%20Commercial%20Marketplace%20Analytics%20Data_v1.pdf) | Contract signing<br>Revenue Recognition<br>Invoicing<br>Billing<br>Azure portal / Admin Center<br>PC Marketplace Rewards<br>PC Payouts Reports<br>PC Marketplace Analytics<br>PC Co-Sell Closing |
-| <center>**5. Maintenance**<br><img src="medi)<br>[(EA Customer) Azure Consumption API](/rest/api/consumption/)<br>[(EA Customer) Azure Charges List API](/rest/api/consumption/charges/list) | Recurring billing<br>Overages<br>Product Support<br>PC Payouts Reports<br>PC Marketplace Analytics |
-| <center>**6. Contract End**<br><img src="medi)<br>AMA/VM's: auto-renew | Renew or<br>Terminate<br>PC Marketplace Analytics |
+| <center>**Product marketing**<br><img src="medi)</ul> | Product messaging, positioning, promotion, and pricing;<br>Partner Center: offer creation |
+| <center>**Demand generation**<br><img src="medi); <br>[co-sell Connector for Salesforce CRM](/partner-center/connector-salesforce); <br>[co-sell connector for Dynamics 365 CRM](/partner-center/connector-dynamics) | Product promotion,<br>lead nurturing,<br>evaluation, trial, and PoC;<br>Azure Marketplace and AppSource;<br>Partner Center Marketplace insights;<br>Partner Center co-sell opportunities |
+| <center>**Negotiation and quote creation**<br><img src="medi)<br>[Partner Center 7 API family](/partner-center/) | T&Cs,<br>pricing,<br>discount approvals,<br>final quote,<br>Partner Center: plans (public or private) |
+| <center>**Sale**<br><img src="medi),<br>[reporting APIs](https://partneranalytics-api.azureedge.net/partneranalytics-api/Programmatic%20Access%20to%20Commercial%20Marketplace%20Analytics%20Data_v1.pdf) | Contract signing,<br>revenue recognition,<br>invoicing,<br>billing,<br>Azure portal / admin center,<br>Partner Center Marketplace Rewards,<br>Partner Center payouts reports,<br>Partner Center marketplace analytics,<br>Partner Center co-sell closing |
+| <center>**Maintenance**<br><img src="medi), <br>[Enterprise Agreement (EA) Customer, Azure Consumption API](/rest/api/consumption/)<br>[(EA customer), Azure charges list API](/rest/api/consumption/charges/list) | Recurring billing,<br>overages,<br>product support,<br>Partner Center payouts reports,<br>Partner Center marketplace analytics |
+| <center>**Contract end**<br><img src="medi),<br>Azure Monitor agent/VMs: auto-renew | Renew or<br>terminate,<br>Partner Center marketplace analytics |
## Next steps -- Visit the links above for each API as needed.
+- Visit the API links in the table, as needed.
networking Check Usage Against Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/check-usage-against-limits.md
Title: Check Azure resource usage against limits | Microsoft Docs
description: Learn how to check your Azure resource usage against Azure subscription limits. documentationcenter: na--++ tags: azure-resource-manager
networking Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/cli-samples.md
Title: Azure CLI Samples - Networking
description: Learn about Azure CLI samples for networking, including samples for connectivity between Azure resources and samples for load balancing and traffic direction. documentationcenter: virtual-network--++ tags: ms.assetid:
networking Disaster Recovery Dns Traffic Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/disaster-recovery-dns-traffic-manager.md
Title: 'Disaster recovery using Azure DNS and Traffic Manager | Microsoft Docs'
description: Overview of the disaster recovery solutions using Azure DNS and Traffic Manager. documentationcenter: na--++ editor: tags: azure-resource-manager
networking Architecture Guides https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/architecture-guides.md
Title: Azure Networking architecture documentation description: Learn about the reference architecture documentation available for Azure networking services. -+ Last updated 03/30/2021-+ # Azure Networking architecture documentation
networking Networking Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/networking-overview.md
Title: Azure networking services overview description: Learn about networking services in Azure, including connectivity, application protection, application delivery, and network monitoring services. -+ Last updated 02/03/2022-+
networking Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/fundamentals/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure networking
description: Sample Azure Resource Graph queries for Azure networking showing use of resource types and tables to access Azure networking related resources and properties. Last updated 07/07/2022 --++
networking Microsoft Global Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/microsoft-global-network.md
Title: 'Microsoft global network - Azure'
description: Learn how Microsoft builds and operates one of the largest backbone networks in the world, and why it is central to delivering a great cloud experience. documentationcenter: --++ ms.devlang: na Last updated 01/05/2020-+
networking Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md
Title: Built-in policy definitions for Azure networking services
description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Last updated 07/06/2022 --++
networking Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/powershell-samples.md
Title: Azure PowerShell Samples - Networking
description: Learn about Azure PowerShell samples for networking, including a sample for creating a virtual network for multi-tier applications. documentationcenter: virtual-network--++ tags: ms.assetid:
Last updated 05/24/2017-+ # Azure PowerShell Samples for networking
networking Virtual Network Cli Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-cli-sample-multi-tier-application.md
Title: Azure CLI script sample - Create a network for multi-tier applications
description: Azure CLI script sample - Create a virtual network for multi-tier applications. documentationcenter: virtual-network--++ ms.devlang: azurecli Last updated 07/07/2017-+
networking Virtual Network Cli Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-cli-sample-peer-two-virtual-networks.md
Title: Azure CLI Script Sample - Peer two virtual networks | Microsoft Docs
description: Use an Azure CLI script sample to create and connect two virtual networks in the same region through the Azure network. documentationcenter: virtual-network--++ ms.devlang: azurecli Last updated 07/07/2017-+
networking Virtual Network Cli Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-cli-sample-route-traffic-through-nva.md
Title: Azure CLI script sample - Route traffic through a network virtual applian
description: Azure CLI script sample - Route traffic through a firewall network virtual appliance. documentationcenter: virtual-network--++ ms.devlang: azurecli Last updated 07/07/2017-+
networking Virtual Network Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-filter-network-traffic.md
Title: Azure CLI script sample - Filter VM network traffic | Microsoft Docs
description: Use an Azure CLI script to filter inbound and outbound virtual machine (VM) network traffic with front-end and back-end subnets. documentationcenter: virtual-network-+ ms.devlang: azurecli Last updated 07/07/2017-+
networking Virtual Network Powershell Filter Network Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-filter-network-traffic.md
Title: Azure PowerShell script sample - Filter VM network traffic | Microsoft Do
description: Azure PowerShell script sample - Filter inbound and outbound VM network traffic. documentationcenter: virtual-network--++ ms.devlang: powershell Last updated 05/16/2017-+
networking Virtual Network Powershell Sample Multi Tier Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-sample-multi-tier-application.md
Title: Azure PowerShell script sample - Create a network for multi-tier applicat
description: Azure PowerShell script sample - Create a virtual network for multi-tier applications. documentationcenter: virtual-network--++ ms.devlang: powershell Last updated 05/16/2017-+
networking Virtual Network Powershell Sample Peer Two Virtual Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-sample-peer-two-virtual-networks.md
Title: Azure PowerShell Script Sample - Peer two virtual networks | Microsoft Do
description: Create and connect two virtual networks in the same region. Use the Azure script for two peer virtual networks to connect the networks through Azure. documentationcenter: virtual-network--++ ms.devlang: powershell Last updated 05/16/2017-+
networking Virtual Network Powershell Sample Route Traffic Through Nva https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/scripts/virtual-network-powershell-sample-route-traffic-through-nva.md
Title: Azure PowerShell script sample - Route traffic through a network virtual
description: Azure PowerShell script sample - Route traffic through a firewall network virtual appliance. documentationcenter: virtual-network--++ ms.devlang: powershell Last updated 05/16/2017-+
networking Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure networking services
description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 07/06/2022 --++
orbital Concepts Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact-profile.md
+
+ Title: Ground station contact profile - Azure Orbital GSaaS
+description: Learn more about the contact profile object, including how to create, modify, and delete the profile.
++++ Last updated : 06/21/2022+
+#Customer intent: As a satellite operator or user, I want to understand how to use the contact profile so that I can take passes using the GSaaS service.
++
+# Ground station contact profile
+
+The contact profile object stores pass requirements such as links and endpoint details for each link. Use this object with the spacecraft object at time of scheduling to view and schedule available passes.
+
+You can create many contact profiles to represent different types of passes depending on your mission operations. For example, you can create a contact profile for a command and control pass or a contact profile for a downlink only pass.
+
+These objects are mutable and don't undergo an authorization process like the spacecraft objects do. One contact profile can be used with many spacecraft objects.
+
+See [how to configure a contact profile](contact-profile.md) for the full list of parameters.
+
+## Prerequisites
+
+- A subnet created in the virtual network and resource group that you want to use. See [Prepare network for Orbital GSaaS integration](prepare-network.md).
+
+## Creating a contact profile
+
+Follow the steps in [how to create a contact profile](contact-profile.md).
+
+## Adjusting pass parameters
+
+Specify a minimum pass time to ensure that scheduled passes last at least that long. Specify a minimum elevation to ensure that passes occur above a certain elevation.
+
+The service uses these two parameters during contact scheduling. Avoid changing them on a pass-by-pass basis; create multiple contact profiles if you need that flexibility.
+
+Autotracking is currently disabled, and autotracking options aren't applied.
+
+## Understanding links and channels
+
+A whole band, unique in direction and polarization, is called a link. Channels, which are children of links, specify center frequency, bandwidth, and endpoints. Typically there's only one channel per link, but some applications require multiple channels per link. Refer to the Ground Station manual for a full list of supported bands and antenna capabilities.
+
+You can specify an EIRP and G/T requirement for each link. EIRP applies to uplinks and G/T applies to downlinks. You can give a name to each link and channel to keep track of these properties.
+
+Look at the example below to see how to specify an RHCP channel and an LHCP channel if your mission requires dual-polarization on downlink.
+
+```json
+{
+ "location": "eastus2",
+ "tags": null,
+ "id": "/subscriptions/c1be1141-a7c9-4aac-9608-3c2e2f1152c3/resourceGroups/contoso-Rgp/providers/Microsoft.Orbital/contactProfiles/CONTOSO-CP",
+ "name": "CONTOSO-CP",
+ "type": "Microsoft.Orbital/contactProfiles",
+ "properties": {
+ "provisioningState": "Succeeded",
+ "minimumViableContactDuration": "PT1M",
+ "minimumElevationDegrees": 5,
+ "autoTrackingConfiguration": "disabled",
+ "eventHubUri": "/subscriptions/c1be1141-a7c9-4aac-9608-3c2e2f1152c3/resourceGroups/contoso-Rgp/providers/Microsoft.EventHub/namespaces/contosoHub/eventhubs/contosoHub",
+ "networkConfiguration": {
+ "subnetId": "/subscriptions/c1be1141-a7c9-4aac-9608-3c2e2f1152c3/resourceGroups/contoso-Rgp/providers/Microsoft.Network/virtualNetworks/contoso-vnet/subnets/orbital-delegated-subnet"
+ },
+ "links": [
+ {
+ "name": "contoso-downlink-rhcp",
+ "polarization": "RHCP",
+ "direction": "downlink",
+ "gainOverTemperature": 25,
+ "eirpdBW": 0,
+ "channels": [
+ {
+ "name": "contoso-downlink-channel-rhcp",
+ "centerFrequencyMHz": 8160,
+ "bandwidthMHz": 15,
+ "endPoint": {
+ "ipAddress": "10.1.0.5",
+ "endPointName": "ContosoTest_Downlink_RHCP",
+ "port": "51103",
+ "protocol": "UDP"
+ },
+ "modulationConfiguration": null,
+ "demodulationConfiguration": null,
+ "encodingConfiguration": null,
+ "decodingConfiguration": null
+ }
+ ]
+      },
+      {
+ "name": "contoso-downlink-lhcp",
+ "polarization": "LHCP",
+ "direction": "downlink",
+ "gainOverTemperature": 25,
+ "eirpdBW": 0,
+ "channels": [
+ {
+ "name": "contoso-downlink-channel-lhcp",
+ "centerFrequencyMHz": 8160,
+ "bandwidthMHz": 15,
+ "endPoint": {
+ "ipAddress": "10.1.0.5",
+ "endPointName": "ContosoTest_Downlink_LHCP",
+ "port": "51104",
+ "protocol": "UDP"
+ },
+ "modulationConfiguration": null,
+ "demodulationConfiguration": null,
+ "encodingConfiguration": null,
+ "decodingConfiguration": null
+ }
+ ]
+ }
+ ]
+ }
+}
+```
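When you hand-edit a profile document like the one above, a quick structural check helps catch missing commas or brackets before you submit it. A minimal sketch, assuming the JSON is saved locally as `contact-profile.json` (a hypothetical file name):

```console
# Validate the JSON structure of a hand-edited contact profile file.
# Prints "valid JSON" only if the file parses cleanly.
python3 -m json.tool contact-profile.json > /dev/null && echo "valid JSON"
```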
++
+## Applying modems or bring your own
+
+We recommend taking advantage of the Orbital GSaaS software modem functionality if possible. This modem is managed by the service and is inserted between your endpoint and the incoming or outgoing virtual RF stream for each channel. A library of modems will be available in the marketplace for you to use. If no available modem works with your application, use the virtual RF delivery feature to bring your own modem.
+
+There are four parameters related to modem configuration. The following table shows how to configure them.
+
+| Parameter | Options |
+||--|
+| modulationConfiguration | 1. Null for virtual RF<br />2. JSON escaped modem config for software modem |
+| demodulationConfiguration | 1. Null for virtual RF<br />2. JSON escaped modem config for software modem |
+| encodingConfiguration | Null (not used) |
+| decodingConfiguration | Null (not used) |
+
+Use the same modem config file in uplink and downlink channels for full-duplex communications in the same band.
+
+The modem config should be a JSON-escaped raw save file from a software modem. See the marketplace for modem options.
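As a sketch of the escaping step, a raw modem save file can be converted into a single JSON string with a Python one-liner (`modem.cfg` is a hypothetical file name):

```console
# JSON-escape the raw modem save file so the result can be pasted into
# modulationConfiguration or demodulationConfiguration.
python3 -c 'import json, sys; print(json.dumps(sys.stdin.read()))' < modem.cfg
```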
+
+## Modifying or deleting a contact profile
+
+You can modify or delete the contact profile through the Azure portal or the API.
+
+## Configuring a contact profile for third-party ground stations
+
+When you onboard a third-party network, you'll receive a token that identifies your profile. Use this token in the contact profile object to link the profile to the third-party network.
+
+## Next steps
+
+- [Quickstart: Schedule a contact](schedule-contact.md)
+- [How to: Update the Spacecraft TLE](update-tle.md)
+
orbital Concepts Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/concepts-contact.md
+
+ Title: Ground station contact - Azure Orbital GSaaS
+description: Learn more about the contact object and how to schedule a contact.
++++ Last updated : 06/21/2022+
+#Customer intent: As a satellite operator or user, I want to understand what the contact object is so I can manage my mission operations.
++
+# Ground station contact
+
+A contact occurs when the spacecraft is over a specified ground station. You can find available passes on the system and schedule them for use through Azure Orbital GSaaS. A contact and ground station pass mean the same thing.
+
+When you schedule a contact, a contact object is created under your spacecraft object in your resource group. The contact is associated only with that spacecraft and can't be transferred to another spacecraft, resource group, or region.
+
+## Contact object
+
+The contact object contains the start time and end time of the pass and other parameters of interest related to pass operations. The full list is below.
+
+| Parameter | Description |
+||--|
+| Reservation Start Time | Start time of pass in UTC. |
+| Reservation End Time | End time of pass in UTC. |
+| Maximum Elevation Degrees | The maximum elevation the spacecraft will be in the sky relative to horizon in degrees, used to gauge the quality of the pass. |
+| TX Start Time | Start time of permissible transmission window in UTC. This start time will be equal to or come after Reservation Start Time. |
+| TX End Time | End time of permissible transmission window in UTC. This end time will be equal to or come before Reservation End Time. |
+| RX Start Time | Start time of permissible reception window in UTC. This start time will be equal to or come after Reservation Start Time. |
+| RX End Time | End time of permissible reception window in UTC. This end time will be equal to or come before Reservation End Time. |
+| Start Azimuth | Starting azimuth position of the spacecraft measured clockwise from North in degrees. |
+| End Azimuth | End azimuth position of the spacecraft measured clockwise from North in degrees. |
+| Start Elevation | Starting elevation position of the spacecraft measured from the horizon up in degrees. |
+| End Elevation | End elevation position of the spacecraft measured from the horizon up in degrees. |
+
+The RX and TX start/end times may differ depending on the individual station masks. Billing meters are engaged between the Reservation Start Time and Reservation End Time.
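Because billing meters run between Reservation Start Time and Reservation End Time, it can be useful to compute the reserved duration from those two timestamps. A minimal sketch with GNU `date` (the timestamps shown are made-up examples):

```console
# Compute the reserved pass duration in seconds from the UTC timestamps.
start="2022-07-12T10:05:00Z"
end="2022-07-12T10:15:30Z"
duration=$(( $(date -u -d "$end" +%s) - $(date -u -d "$start" +%s) ))
echo "reserved duration: ${duration} seconds"
```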
+
+## Creating a contact
+
+In order to create a contact, you must meet the following prerequisites:
+
+* An authorized spacecraft object
+* A contact profile with links that match the spacecraft object above
+
+Contacts are created on a per-pass, per-ground-station basis. If you already know the pass timings for your spacecraft and your selected ground station, you can directly schedule the pass with those times. The service creates the contact object if the window is available and fails if it isn't.
+
+If you don't know the pass timings, or which sites are available, then you can use the portal or API to get those details. Query the available passes and use the results to schedule your passes accordingly.
+
+| Method | List available contacts | Schedule contacts | Notes |
+|-|-|-|-|
+|Portal| Yes | Yes | Custom pass timings aren't possible. You must use the results of the query. |
+|API | Yes | Yes| Custom pass timings possible. |
+
+See [how-to schedule a contact](schedule-contact.md) for the portal method. You can also use the API to create a contact. See the API docs (link to API docs) for this method.
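For the API route, one way to sketch the call is with `az rest`; the resource path, the `2022-03-01` API version, and the body fields below are assumptions to verify against the API docs before use:

```console
# Hypothetical sketch of creating a contact under a spacecraft via az rest.
# Placeholders in angle brackets must be replaced with your own values.
az rest --method put \
  --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Orbital/spacecrafts/<spacecraft>/contacts/<contact>?api-version=2022-03-01" \
  --body '{
    "properties": {
      "reservationStartTime": "2022-07-12T10:05:00Z",
      "reservationEndTime": "2022-07-12T10:15:30Z",
      "groundStationName": "<ground-station>",
      "contactProfile": { "id": "<contact-profile-resource-id>" }
    }
  }'
```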
+
+## Next steps
+
+- [Quickstart: Schedule a contact](schedule-contact.md)
+- [How to: Update the Spacecraft TLE](update-tle.md)
orbital Contact Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/contact-profile.md
Title: 'Configure a contact profile on Azure Orbital Earth Observation service' description: 'Quickstart: Configure a contact profile'-+ - Previously updated : 11/16/2021+ Last updated : 06/01/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
Configure a contact profile with Azure Orbital to save and reuse contact configu
## Sign in to Azure
-Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
+Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Create a contact profile resource
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
| Subscription | Select your subscription | | Resource group | Select your resource group | | Name | Enter contact profile name. Specify the antenna provider and mission information here. Like *Microsoft_Aqua_Uplink+Downlink_1* |
- | Region | Select **West US 2** |
+ | Region | Select a region |
| Minimum viable contact duration | Define the minimum duration of the contact as a prerequisite to show you available time slots to communicate with your spacecraft. If an available time window is less than this time, it won't show in the list of available options. Provide minimum contact duration in ISO 8601 format. Like *PT1M* | | Minimum elevation | Define minimum elevation of the contact, after acquisition of signal (AOS), as a prerequisite to show you available time slots to communicate with your spacecraft. Using higher value can reduce the duration of the contact. Provide minimum viable elevation in decimal degrees. | | Auto track configuration | Select the frequency band to be used for autotracking during the contact. X band, S band, or Disabled. |
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
| **Field** | **Value** | | | |
+ | Direction | Select the link direction |
| Gain/Temperature (Downlink only) | Enter the gain to noise temperature in db/K | | EIRP in dBW (Uplink only) | Enter the effective isotropic radiated power in dBW | | Center Frequency | Enter the center frequency in MHz |
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
| IP Address | Specify the IP Address for data retrieval/delivery | | Port | Specify the Port for data retrieval/delivery | | Protocol | Select TCP or UDP protocol for data retrieval/delivery |
+ | Demodulation Configuration (Downlink only) | If applicable, paste your modem demodulation configuration |
+ | Decoding Configuration (Downlink only)| If applicable, paste your decoding configuration |
+ | Modulation Configuration (Uplink only) | If applicable, paste your modem modulation configuration |
+ | Encoding Configuration (Uplink only)| If applicable, paste your encoding configuration |
:::image type="content" source="media/orbital-eos-contact-link.png" alt-text="Contact Profile Links Page" lightbox="media/orbital-eos-contact-link.png":::
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Next steps
+- [How-to Receive real-time telemetry](receive-real-time-telemetry.md)
- [Quickstart: Schedule a contact](schedule-contact.md)-- [Tutorial: Cancel a contact](delete-contact.md)
+- [Tutorial: Cancel a contact](delete-contact.md)
orbital Delete Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/delete-contact.md
Title: 'Cancel a scheduled contact on Azure Orbital Earth Observation service'
-description: 'Cancel a scheduled contact'
+ Title: Cancel a scheduled contact on Azure Orbital Earth Observation service
+description: Learn how to cancel a scheduled contact.
- Previously updated : 11/16/2021+ Last updated : 06/13/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
-# Cancel a scheduled contact
+# Tutorial: Cancel a scheduled contact
To cancel a scheduled contact, the contact entry must be deleted on the **Contacts** page.
To cancel a scheduled contact, the contact entry must be deleted on the **Contac
## Sign in to Azure
-Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
+Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Delete a scheduled contact entry
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
:::image type="content" source="media/orbital-eos-contact-config-view.png" alt-text="Delete a scheduled contact" lightbox="media/orbital-eos-contact-config-view.png"::: 6. The scheduled contact will be canceled once the contact entry is deleted.+ ## Next steps - [Quickstart: Schedule a contact](schedule-contact.md)-- [Tutorial: Update the spacecraft TLE](update-tle.md)
+- [Tutorial: Update the spacecraft TLE](update-tle.md)
orbital Downlink Aqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/downlink-aqua.md
+
+ Title: Schedule a contact with NASA's AQUA public satellite using Azure Orbital Earth Observation Service
+description: How to schedule a contact with NASA's AQUA public satellite using Azure Orbital Earth Observation Service
++++ Last updated : 07/12/2022+
+# Customer intent: As a satellite operator, I want to ingest data from NASA's AQUA public satellite into Azure.
++
+# Tutorial: Downlink data from NASA's AQUA public satellite
+
+You can communicate with satellites directly from Azure using Azure Orbital's ground station service. Once downlinked, this data can be processed and analyzed in Azure. In this guide you'll learn how to:
+
+> [!div class="checklist"]
+> * Create & authorize a spacecraft for AQUA
+> * Prepare a virtual machine (VM) to receive the downlinked AQUA data
+> * Configure a contact profile for an AQUA downlink mission
+> * Schedule a contact with AQUA using Azure Orbital and save the downlinked data
++
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- Complete the onboarding process for the preview. [Onboard to the Azure Orbital Preview](orbital-preview.md).
+
+## Sign in to Azure
+
+Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
+
+> [!NOTE]
+> These steps must be followed as is or you won't be able to find the resources. Please use the specific link above to sign in directly to the Azure Orbital Preview page.
+
+## Create & authorize a spacecraft for AQUA
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. In the **Spacecrafts** page, select **Create**.
+3. Obtain an up-to-date Two-Line Element (TLE) for AQUA by checking Celestrak at https://celestrak.com/NORAD/elements/active.txt
+ > [!NOTE]
+ > You will want to periodically update this TLE value to ensure that it is up-to-date prior to scheduling a contact. A TLE that is more than one or two weeks old may result in an unsuccessful downlink.
+4. In **Create spacecraft resource**, enter or select this information in the Basics tab:
+
+ | **Field** | **Value** |
+ | | |
+ | Subscription | Select your subscription |
+ | Resource Group | Select your resource group |
+ | Name | **AQUA** |
+ | Region | Select **West US 2** |
+ | NORAD ID | **27424** |
+ | TLE title line | **AQUA** |
+ | TLE line 1 | Enter TLE line 1 from Celestrak |
+ | TLE line 2 | Enter TLE line 2 from Celestrak |
+
+5. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+6. In the **Links** page, enter or select this information:
+
+ | **Field** | **Value** |
+ | | |
+ | Direction | Select **Downlink** |
+ | Center Frequency | Enter **8160** |
+ | Bandwidth | Enter **15** |
+ | Polarization | Select **RHCP** |
+
+7. Select the **Review + create** tab, or select the **Review + create** button.
+8. Select **Create**
+
+9. Access the [Azure Orbital Spacecraft Authorization Form](https://forms.office.com/r/QbUef0Cmjr)
+10. Provide the following information:
+
+ - Spacecraft name: **AQUA**
+ - Region where spacecraft resource was created: **West US 2**
+ - Company name and email
+ - Azure Subscription ID
+
+11. Submit the form
+12. Await a 'Spacecraft resource authorized' email from Azure Orbital
+ > [!NOTE]
+    > You can confirm that your spacecraft resource for AQUA is authorized by checking that the **Authorization status** shows **Allowed** in the spacecraft's overview page.
++
+## Prepare a virtual machine (VM) to receive the downlinked AQUA data
+1. [Create a virtual network](../virtual-network/quick-create-portal.md) to host your data endpoint virtual machine (VM)
+2. [Create a virtual machine (VM)](../virtual-network/quick-create-portal.md#create-virtual-machines) within the virtual network above. Ensure that this VM has the following specifications:
+- Operating system: Linux (Ubuntu 18.04 or higher)
+- Size: at least 32 GiB of RAM
+- Ensure that the VM has at least one standard public IP
+3. Create a tmpfs on the virtual machine. The downlinked data is written here to avoid slow writes to disk:
+```console
+sudo mount -t tmpfs -o size=28G tmpfs /media/aqua
+```
+4. Ensure that socat is installed on the machine:
+```console
+sudo apt install socat
+```
+5. Edit the [Network Security Group](../virtual-network/network-security-groups-overview.md) for the subnet that your virtual machine is using to allow inbound connections from the following IPs over TCP port 56001:
+- 20.47.120.4
+- 20.47.120.38
+- 20.72.252.246
+- 20.94.235.188
+- 20.69.186.50
+- 20.47.120.177
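If you prefer to script the NSG change, a hedged sketch with the Azure CLI follows; the resource group, NSG name, and rule name are placeholders:

```console
# Allow inbound TCP 56001 from the listed source IPs (names are placeholders).
az network nsg rule create \
  --resource-group <rg> \
  --nsg-name <nsg-name> \
  --name AllowOrbitalTcp56001 \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol Tcp \
  --destination-port-ranges 56001 \
  --source-address-prefixes 20.47.120.4 20.47.120.38 20.72.252.246 20.94.235.188 20.69.186.50 20.47.120.177
```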
+
+## Configure a contact profile for an AQUA downlink mission
+1. In the Azure portal search box, enter **Contact profile**. Select **Contact profile** in the search results.
+2. In the **Contact profile** page, select **Create**.
+3. In **Create contact profile resource**, enter or select this information in the **Basics** tab:
+
+ | **Field** | **Value** |
+ | | |
+ | Subscription | Select your subscription |
+ | Resource group | Select your resource group |
+ | Name | Enter **AQUA_Downlink** |
+ | Region | Select **West US 2** |
+ | Minimum viable contact duration | **PT1M** |
+ | Minimum elevation | **5.0** |
+ | Auto track configuration | **Disabled** |
+ | Event Hubs Namespace | Select an Event Hubs Namespace to which you'll send telemetry data of your contacts. Select a Subscription before you can select an Event Hubs Namespace. |
+ | Event Hubs Instance | Select an Event Hubs Instance that belongs to the previously selected Namespace. *This field will only appear if an Event Hubs Namespace is selected first*. |
++
+4. Select the **Links** tab, or select the **Next: Links** button at the bottom of the page.
+5. In the **Links** page, select **Add new Link**
+6. In the **Add Link** page, enter, or select this information:
+
+ | **Field** | **Value** |
+ | | |
+ | Direction | **Downlink** |
+ | Gain/Temperature in db/K | **0** |
+ | Center Frequency | **8160.0** |
+ | Bandwidth MHz | **15.0** |
+ | Polarization | **RHCP** |
+ | Endpoint name | Enter the name of the virtual machine (VM) you created above |
+ | IP Address | Enter the Public IP address of the virtual machine you created above (VM) |
+ | Port | **56001** |
+ | Protocol | **TCP** |
+ | Demodulation Configuration | Leave this field **blank** or request a demodulation configuration from the [Azure Orbital team](mailto:msazureorbital@microsoft.com) to use a software modem. Include your Subscription ID, Spacecraft resource ID, and Contact Profile resource ID in your email request.|
+ | Decoding Configuration | Leave this field **blank** |
++
+7. Select the **Submit** button
+8. Select the **Review + create** tab or select the **Review + create** button
+9. Select the **Create** button
+
+## Schedule a contact with AQUA using Azure Orbital and save the downlinked data
+1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
+2. In the **Spacecrafts** page, select **AQUA**.
+3. Select **Schedule contact** on the top bar of the spacecraft's overview.
+4. In the **Schedule contact** page, specify this information from the top of the page:
+
+ | **Field** | **Value** |
+ | | |
+ | Contact profile | Select **AQUA_Downlink** |
+ | Ground station | Select **Quincy** |
+ | Start time | Identify a start time for the contact availability window |
+ | End time | Identify an end time for the contact availability window |
+
+5. Select **Search** to view available contact times.
+6. Select one or more contact windows and select **Schedule**.
+7. View the scheduled contact by selecting the **AQUA** spacecraft and navigating to **Contacts**.
+8. Shortly before the contact begins executing, start listening on port 56001 and output the received data to a file:
+```console
+socat -u tcp-listen:56001,fork create:/media/aqua/out.bin
+```
+9. Once your contact has executed, copy the output file `/media/aqua/out.bin` out of the tmpfs and into your home directory to avoid it being overwritten when another contact is executed.
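For example, the copy can be done with a plain `cp` (the destination path is up to you):

```console
# Copy the downlinked file out of the tmpfs into the home directory
# so the next contact doesn't overwrite it.
cp /media/aqua/out.bin ~/out.bin
```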
+
+ > [!NOTE]
+    > For a 10-minute contact with AQUA while it's transmitting with 15 MHz of bandwidth, you should expect to receive on the order of 450 MB of data.
+
+## Next steps
+
+- [Quickstart: Configure a contact profile](contact-profile.md)
+- [Quickstart: Schedule a contact](schedule-contact.md)
orbital License Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/license-spacecraft.md
+
+ Title: License your spacecraft - Azure Orbital
+description: Learn how to license your spacecraft with Orbital.
++++ Last updated : 07/12/2022+++
+# License your spacecraft
+
+This page provides an overview of how to register or license your spacecraft with Azure Orbital.
+
+## Prerequisites
+
+To initiate the spacecraft licensing process, you'll need:
+
+- A spacecraft object that corresponds to the spacecraft in orbit or slated for launch. The links in this object must match all current and planned filings.
+- A list of the ground stations that you wish to use
+
+## Step 1 - Initiate the request
+
+The process starts by initiating the licensing request via the Azure portal.
+
+1. Navigate to the spacecraft object and select **New Support Request** under the **Support + troubleshooting** category on the left.
+1. Complete the following fields:
+ 1. Summary: Provide a relevant ticket title.
+ 1. Issue type: Technical.
+ 1. Subscription: Choose your current subscription.
+ 1. Service: My services
+ 1. Service Type: Azure Orbital
+ 1. Problem type: Spacecraft Management and Setup
+ 1. Problem subtype: Spacecraft Registration
+1. Select Next to advance to the Solutions tab
+1. Select Next to advance to the Details tab
+1. Enter the desired ground stations in the Description field
+1. Enable advanced diagnostic information
+1. Select Next to advance to the Review + create tab
+1. Select Create
+
+## Step 2 - Provide more details
+
+When the request is generated, our regulatory team will investigate the request and determine if more detail is required. If so, a customer support representative will reach out to you with a regulatory intake form. You'll need to input information regarding relevant filings, call signs, orbital parameters, link details, antenna details, point of contacts, etc.
+
+Fill out all relevant fields in this form, as it helps speed up the process. When you're done entering information, email this form back to the customer support representative.
+
+## Step 3 - Await feedback from our regulatory team
+
+Based on the details provided in the steps above, our regulatory team will make an assessment on time and cost to onboard your spacecraft to all requested ground stations. This step will take a few weeks to execute.
+
+Once the determination is made, we'll confirm the cost with you and ask you to authorize before proceeding.
+
+## Step 4 - Orbital requests the relevant licensing
+
+Upon authorization, you'll be billed and our regulatory team will seek the relevant licenses to enable your spacecraft with the desired ground stations. This step will take 2 to 6 months to execute.
+
+## Step 5 - Spacecraft is authorized
+
+Once the licenses are in place, the spacecraft object will be updated by Orbital to represent the licenses held at the specified ground stations. Refer to (to add link to spacecraft concept) to understand how the authorizations are applied.
+
+## FAQ
+
+Q. Are third party ground stations such as KSAT included in this process?
+A. No, the process on this page applies to Microsoft sites only. For more information, see (to add link to third party page).
+
+## Next steps
orbital Partner Network Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/partner-network-integration.md
+
+ Title: Integrate partner network ground stations into your Azure Orbital Ground Station as a Service solution
+description: Leverage partner network ground station locations through Azure Orbital.
++++ Last updated : 07/06/2022+++
+# Integrate partner network ground stations into your Azure Orbital Ground Station as a Service solution
+
+This article describes how to integrate partner network ground stations.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+- An active contract with the partner network(s) you wish to integrate with Azure Orbital.
+- KSAT Lite
+- [Viasat RTE](https://azuremarketplace.microsoft.com/marketplace/apps/viasatinc1628707641775.viasat-real-time-earth?tab=overview)
+
+## Request integration resource information
+
+1. Email the Azure Orbital Ground Station as a Service (GSaaS) team at **azorbitalpm@microsoft.com** to initiate partner network integration by providing the details below:
+ - Azure Subscription ID
+ - List of partner networks you've contracted with
+ - List of ground station locations included in partner contracts
+2. The Azure Orbital GSaaS team will reply to your message either with a request for additional information or with the Contact Profile resource parameters that will enable your partner network integration.
+3. Create a contact profile resource with the parameters provided by the Azure Orbital GSaaS team.
+4. Await integration confirmation prior to scheduling Contacts with the newly integrated partner network(s).
+
+> [!NOTE]
+> It is important that the contact profile resource parameters match those provided by the Azure Orbital GSaaS team.
+
+## Next steps
+
+- [Configure a contact profile](./contact-profile.md)
+- [Learn more about the contact profile object](./concepts-contact-profile.md)
+- [Overview of the Azure Space Partner Community](./space-partner-program-overview.md)
orbital Prepare Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/prepare-network.md
+
+ Title: Prepare network to send and receive data - Azure Orbital
+description: Learn how to deliver and receive data from Azure Orbital.
++++ Last updated : 07/12/2022+++
+# Prepare the network for Azure Orbital GSaaS integration
+
+The Orbital GSaaS platform interfaces with your resources using VNET injection, which is used in both uplink and downlink directions. This page describes how to ensure your subnet and Orbital GSaaS objects are configured correctly.
+
+Ensure the objects comply with the recommendations in this article. Note that these steps don't have to be followed in order.
+
+## Prepare subnet for VNET injection
+
+Prerequisites:
+- An entire subnet that can be dedicated to Orbital GSaaS in your virtual network in your resource group.
+
+Steps:
+1. Delegate the subnet to the service named Microsoft.Orbital/orbitalGateways, following the instructions in [Add or remove a subnet delegation in an Azure virtual network](/azure/virtual-network/manage-subnet-delegation).
+
+> [!NOTE]
+> The subnet address range must be a /24 or larger (for example, 10.0.0.0/23).
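You can sanity-check the address range with Python's standard `ipaddress` module before delegating the subnet (a sketch using the example range from the note):

```python
import ipaddress

# The delegated subnet must be a /24 or larger (prefix length <= 24).
subnet = ipaddress.ip_network("10.0.0.0/23")

assert subnet.prefixlen <= 24, "subnet too small to delegate to Orbital"
print(f"{subnet} provides {subnet.num_addresses} addresses")
```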
+
+## Setting up the contact profile
+
+Prerequisites:
+- The subnet/vnet is in the same region as the contact profile
+
+Make sure the contact profile properties are set as follows:
+
+1. subnetId (under networkConfiguration): The full ID of the delegated subnet, which can be found inside the VNET's JSON view
+1. For each link
+ 1. ipAddress: Enter an IP here for TCP/UDP server mode. Leave blank for TCP/UDP client mode. See the section below for a detailed explanation of configuring this property.
+ 1. port: Must be within the 49152-65535 range and unique across all links in the contact profile.
+
+> [!NOTE]
+> You can have multiple links/channels in a contact profile, and you can have multiple IPs. But the combination of port/protocol needs to be unique. You can't have two identical ports, even if you have two different destination IPs.
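Those constraints are easy to check before you submit the contact profile. A minimal sketch (the link values below are hypothetical, not from a real profile):

```python
def validate_links(links):
    """Check that every link's port is in 49152-65535 and that each
    port/protocol combination is unique across the contact profile."""
    seen = set()
    for link in links:
        port, proto = link["port"], link["protocol"]
        if not 49152 <= port <= 65535:
            raise ValueError(f"port {port} is outside 49152-65535")
        if (port, proto) in seen:
            raise ValueError(f"duplicate port/protocol {port}/{proto}")
        seen.add((port, proto))

# Hypothetical links for illustration
validate_links([
    {"port": 50000, "protocol": "TCP"},
    {"port": 50001, "protocol": "TCP"},
])
print("links OK")
```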
+
+## Scheduling the contact
+
+The platform pre-reserves IPs in the subnet when the contact is scheduled. These IPs represent the platform-side endpoints for each link and are guaranteed to be distinct across concurrent contacts that use the same subnet. If the service runs out of IPs or can't allocate an IP, scheduling the contact fails and an error is returned.
+
+When you create a contact, you can find these IPs by viewing the contact properties. Select JSON view in the portal or use the GET contact API call to view the contact properties. The parameters of interest are below:
+
+| Parameter | Usage |
+||-|
+| antennaConfiguration.destinationIP | Connect to this IP when you configure the link as tcp/udp client. |
+| antennaConfiguration.sourceIps | Data will come from this IP when you configure the link as tcp/udp server. |
+
+You can use this information to set up network policies or to distinguish between simultaneous contacts to the same endpoint.
+
+> [!NOTE]
+> - The source and destination IPs are always taken from the subnet address range
+> - Only one destination IP is present. Any link in client mode should connect to this IP and the links are differentiated based on port.
+> - Many source IPs can be present. Links in server mode will connect to your specified IP address in the contact profile. The flows will originate from the source IPs present in this field and target the port as per the link details in the contact profile. There is no fixed assignment of link to source IP, so make sure to allow all source IPs in any networking setup or firewalls.
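For example, a small script can pull those parameters out of the contact's JSON view to drive a firewall allowlist (a sketch; the property names follow the table above, and the sample values are hypothetical):

```python
import json

# Hypothetical contact properties as shown in the portal's JSON view
contact = json.loads("""
{
  "antennaConfiguration": {
    "destinationIP": "10.0.1.10",
    "sourceIps": ["10.0.1.11", "10.0.1.12"]
  }
}
""")

config = contact["antennaConfiguration"]
client_target = config["destinationIP"]       # client-mode links connect here
server_allowlist = set(config["sourceIps"])   # server-mode flows come from these

print(client_target, sorted(server_allowlist))
```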
+
+## Client/Server, TCP/UDP, and link direction
+
+Here's how to set up the link flows based on link direction and TCP or UDP preference.
+
+### Uplink
+
+| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
+|--|-|--|-|--|
+| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Blank | Routable IP from delegated subnet |
+| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 |
+| **Output** | | | | |
+| Contact Object destinationIP | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
+| Contact Object sourceIP | Not applicable | Link will come from one of these IPs | Not applicable | Link will come from one of these IPs |
+
+### Downlink
+
+| Setting | TCP Client | TCP Server | UDP Client | UDP Server |
+|--|-|--|-|--|
+| Contact Profile Link ipAddress | Blank | Routable IP from delegated subnet | Blank | Routable IP from delegated subnet |
+| Contact Profile Link port | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 | Unique port in 49152-65535 |
+| **Output** | | | | |
+| Contact Object destinationIP | Connect to this IP | Not applicable | Connect to this IP | Not applicable |
+| Contact Object sourceIP | Not applicable | Link will come from one of these IPs | Not applicable | Link will come from one of these IPs |
+
+## Next steps
+
+- [Quickstart: Register Spacecraft](register-spacecraft.md)
+- [Quickstart: Schedule a contact](schedule-contact.md)
orbital Receive Real Time Telemetry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/receive-real-time-telemetry.md
+
+ Title: Receive real-time telemetry - Azure Orbital
+description: Learn how to receive real-time telemetry during contacts.
++++ Last updated : 07/12/2022+++
+# Receive real-time telemetry
+
+An Azure Orbital ground station emits telemetry events that can be used to analyze ground station operation during a contact. You can configure your contact profile to send telemetry events to Azure Event Hubs. The steps in this article describe how to set up an event hub and receive these events.
+
+## Configure Event Hubs
+
+1. In your subscription, go to Resource Provider settings and register Microsoft.Orbital as a provider.
+1. Create an Azure Event Hubs namespace and event hub in your subscription.
+1. From the left menu, select Access Control (IAM). Under Grant Access to this Resource, select Add Role Assignment
+1. Select Azure Event Hubs Data Sender.
+1. Assign access to 'User, group, or service principal'
+1. Select '+ Select members'
+1. Search for 'Azure Orbital Resource Provider' and choose Select
+1. Select Review + Assign. This action will grant Azure Orbital the rights to send telemetry into your event hub.
+1. To confirm the newly added role assignment, go back to the Access Control (IAM) page and select View access to this resource.
+Congratulations! Orbital can now communicate with your event hub.
+
+## Enable telemetry for a contact profile in the Azure portal
+
+Ensure the contact profile is configured as follows:
+
+1. Choose a namespace using the Event Hubs Namespace dropdown.
+1. Choose an instance using the Event Hubs Instance dropdown that appears after namespace selection.
+
+## Schedule a contact
+
+Schedule a contact using the Contact Profile that you previously configured for Telemetry.
+
+Once the contact begins, you should begin seeing data in your Event Hubs soon after.
+
+## Verifying telemetry data
+
+You can verify both the presence and content of incoming telemetry data in several ways.
+
+### Portal: Event Hubs metrics
+
+To verify that events are being received in your Event Hubs, you can check the graphs present on the Event Hubs namespace Overview page. This view shows data across all Event Hubs instances within a namespace. You can navigate to the Overview page of a specific instance to see the graphs for that instance.
+
+### Verify content of telemetry data
+
+You can enable the Event Hubs Capture feature, which automatically delivers the telemetry data to an Azure Blob storage account of your choosing.
+Follow the [instructions to enable Capture](/azure/event-hubs/event-hubs-capture-enable-through-portal). Once enabled, you can check your container and view or download the data.
+
+## Event Hubs consumer
+
+Event Hubs documentation provides guidance on how to write simple consumer apps to receive events from your Event Hubs:
+- [Python](/azure/event-hubs/event-hubs-python-get-started-send)
+- [.NET](/azure/event-hubs/event-hubs-dotnet-standard-getstarted-send)
+- [Java](/azure/event-hubs/event-hubs-java-get-started-send)
+- [JavaScript](/azure/event-hubs/event-hubs-node-get-started-send)
+
+## Understanding telemetry points
+
+The ground station provides telemetry that conforms to the following Avro schema:
+
+```json
+{
+ "namespace": "EventSchema",
+ "name": "TelemetryEventSchema",
+ "type": "record",
+ "fields": [
+ {
+ "name": "version",
+ "type": [ "null", "string" ]
+ },
+ {
+ "name": "contactId",
+ "type": [ "null", "string" ]
+ },
+ {
+ "name": "contactPlatformIdentifier",
+ "type": [ "null", "string" ]
+ },
+ {
+ "name": "gpsTime",
+ "type": [ "null", "double" ]
+ },
+ {
+ "name": "utcTime",
+ "type": "string"
+ },
+ {
+ "name": "azimuthDecimalDegrees",
+ "type": [ "null", "double" ]
+ },
+ {
+ "name": "elevationDecimalDegrees",
+ "type": [ "null", "double" ]
+ },
+ {
+ "name": "antennaType",
+ "type": {
+ "name": "antennaTypeEnum",
+ "type": "enum",
+ "symbols": [
+ "Microsoft",
+ "KSAT"
+ ]
+ }
+ },
+ {
+ "name": "links",
+ "type": [
+ "null",
+ {
+ "type": "array",
+ "items": {
+ "name": "antennaLink",
+ "type": "record",
+ "fields": [
+ {
+ "name": "direction",
+ "type": {
+ "name": "directionEnum",
+ "type": "enum",
+ "symbols": [
+ "Uplink",
+ "Downlink"
+ ]
+ }
+ },
+ {
+ "name": "polarization",
+ "type": {
+ "name": "polarizationEnum",
+ "type": "enum",
+ "symbols": [
+ "RHCP",
+ "LHCP",
+ "linearVertical",
+ "linearHorizontal"
+ ]
+ }
+ },
+ {
+ "name": "inputRfPowerDbm",
+ "type": [ "null", "double" ]
+ },
+ {
+ "name": "uplinkEnabled",
+ "type": [ "null", "boolean" ]
+ },
+ {
+ "name": "channels",
+ "type": [
+ "null",
+ {
+ "type": "array",
+ "items": {
+ "name": "antennaLinkChannel",
+ "type": "record",
+ "fields": [
+ {
+ "name": "endpointName",
+ "type": "string"
+ },
+ {
+ "name": "inputEbN0InDb",
+ "type": [ "null", "double" ]
+ },
+ {
+ "name": "modemLockStatus",
+ "type": [
+ "null",
+ {
+ "name": "modemLockStatusEnum",
+ "type": "enum",
+ "symbols": [
+ "Unlocked",
+ "Locked"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ]
+ }
+ }
+ ]
+ }
+ ]
+}
+
+```
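Because the schema is plain JSON, you can inspect it with the standard library — for example, to see which top-level fields are optional (nullable) versus required. A sketch using an abbreviated copy of the schema above:

```python
import json

# Abbreviated copy of the telemetry schema above (top-level fields only)
schema = json.loads("""
{
  "namespace": "EventSchema",
  "name": "TelemetryEventSchema",
  "type": "record",
  "fields": [
    {"name": "version", "type": ["null", "string"]},
    {"name": "contactId", "type": ["null", "string"]},
    {"name": "utcTime", "type": "string"},
    {"name": "azimuthDecimalDegrees", "type": ["null", "double"]},
    {"name": "elevationDecimalDegrees", "type": ["null", "double"]}
  ]
}
""")

for field in schema["fields"]:
    # Avro union types containing "null" mark the field as optional
    optional = isinstance(field["type"], list) and "null" in field["type"]
    print(f'{field["name"]}: {"optional" if optional else "required"}')
```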
+
+## Next steps
+
+- [Event Hubs using Python Getting Started](/azure/event-hubs/event-hubs-python-get-started-send)
+- [Azure Event Hubs client library for Python code samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventhub/azure-eventhub/samples/async_samples)
+
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Title: 'Register Spacecraft on Azure Orbital Earth Observation service'
+ Title: Register Spacecraft on Azure Orbital Earth Observation service
description: 'Quickstart: Register Spacecraft' - Previously updated : 11/16/2021+ Last updated : 06/03/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
To contact a satellite, it must be registered as a spacecraft resource with the
## Prerequisites - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).-- Complete the onboarding process for the preview. [Onboard to the Azure Orbital Preview](orbital-preview.md) ## Sign in to Azure
-Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
+Sign in to the [Azure portal](https://aka.ms/orbital/portal).
## Create spacecraft resource
-> [!NOTE]
-> These steps must be followed as is or you won't be able to find the resources. Please use the specific link above to sign in directly to the Azure Orbital Preview page.
-
-1. In the Azure portal search box, enter **Spacecrafts*. Select **Spacecrafts** in the search results.
-2. In the **Spacecrafts** page, select Create.
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the **Spacecraft** page, select Create.
3. In **Create spacecraft resource**, enter or select this information in the Basics tab: | **Field** | **Value** |
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
6. Select the **Review + create** tab, or select the **Review + create** button. 7. Select **Create**
-## Authorize the new spacecraft resource
-
-1. Access the [Azure Orbital Spacecraft Authorization Form](https://forms.office.com/r/QbUef0Cmjr)
-2. Provide the following information:
-
- - Spacecraft name
- - Region where spacecraft resource was created
- - Company name and email
- - Azure Subscription ID
-
-3. Submit the form
-4. Await a 'Spacecraft resource authorized' email from Azure Orbital
+## Request authorization of the new spacecraft resource
+
+1. Navigate to the newly created spacecraft resource's overview page.
+1. Select **New support request** in the Support + troubleshooting section of the left-hand blade.
+1. In the **New support request** page, enter or select this information in the Basics tab:
+
+| **Field** | **Value** |
+| | |
+| Summary | Request Authorization for [Spacecraft Name] |
+| Issue type | Select **Technical** |
+| Subscription | Select the subscription in which the spacecraft resource was created |
+| Service | Select **My services** |
+| Service type | Search for and select **Azure Orbital** |
+| Problem type | Select **Spacecraft Management and Setup** |
+| Problem subtype | Select **Spacecraft Registration** |
+
+1. Select the Details tab at the top of the page
+1. In the Details tab, enter this information in the Problem details section:
+
+| **Field** | **Value** |
+| | |
+| When did the problem start? | Select the current date & time |
+| Description | List your spacecraft's frequency bands and desired ground stations |
+| File upload | Upload any pertinent licensing material, if applicable |
+
+1. Complete the **Advanced diagnostic information** and **Support method** sections of the **Details** tab.
+1. Select the **Review + create** tab, or select the **Review + create** button.
+1. Select **Create**.
## Confirm spacecraft is authorized
-1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results
-2. In the **Spacecrafts** page, select the newly registered spacecraft
-3. In the new spacecraft's overview page, check the **Authorization status** shows **Allowed**
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+1. In the **Spacecraft** page, select the newly registered spacecraft.
+1. In the new spacecraft's overview page, check the **Authorization status** shows **Allowed**.
## Next steps
orbital Satellite Imagery With Orbital Ground Station https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/satellite-imagery-with-orbital-ground-station.md
Optional setup for capturing the ground station telemetry are included in the [A
## Step 1: Prerequisites
-You must first follow the steps listed in [Tutorial: Downlink data from NASA's AQUA public satellite](howto-downlink-aqua.md).
+You must first follow the steps listed in [Tutorial: Downlink data from NASA's AQUA public satellite](downlink-aqua.md).
> [!NOTE]
-> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](howto-downlink-aqua.md#prepare-a-virtual-machine-vm-to-receive-the-downlinked-aqua-data), use the following values:
+> In the section [Prepare a virtual machine (VM) to receive the downlinked AQUA data](downlink-aqua.md#prepare-a-virtual-machine-vm-to-receive-the-downlinked-aqua-data), use the following values:
> > - **Name:** receiver-vm > - **Operating System:** Linux (CentOS Linux 7 or higher)
Other helpful resources: 
For an end-to-end implementation that involves extracting, loading, transforming, and analyzing spaceborne data by using geospatial libraries and AI models with Azure Synapse Analytics, see: -- [Spaceborne data analysis with Azure Synapse Analytics](https://docs.microsoft.com/azure/architecture/industries/aerospace/geospatial-processing-analytics)
+- [Spaceborne data analysis with Azure Synapse Analytics](/azure/architecture/industries/aerospace/geospatial-processing-analytics)
orbital Schedule Contact https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/schedule-contact.md
Title: 'How to schedule a contact on Azure Orbital Earth Observation service'
-description: 'How to schedule a contact'
+ Title: How to schedule a contact on Azure Orbital Earth Observation service
+description: Learn how to schedule a contact.
- Previously updated : 11/16/2021+ Last updated : 07/12/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
Schedule a contact with the selected satellite for data retrieval/delivery on Az
## Sign in to Azure
-Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
+Sign in to the [Azure portal - Orbital](https://aka.ms/orbital/portal).
## Select an available contact
-1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
-2. In the **Spacecrafts** page, select the spacecraft for the contact.
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the **Spacecraft** page, select the spacecraft for the contact.
3. Select **Schedule contact** on the top bar of the spacecraft's overview. :::image type="content" source="media/orbital-eos-schedule.png" alt-text="Schedule a contact at spacecraft resource page" lightbox="media/orbital-eos-schedule.png":::
orbital Spacecraft Object https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/spacecraft-object.md
+
+ Title: Spacecraft object - Azure Orbital
+description: Learn about how you can represent your spacecraft details in Azure Orbital.
++++ Last updated : 07/07/2022+
+#Customer intent: As a satellite operator or user, I want to understand what the spacecraft object does so I can manage my mission.
++
+# Spacecraft object
+
+Learn about how you can represent your spacecraft details in Azure Orbital GSaaS.
+
+## Spacecraft details
+
+The spacecraft object captures three types of information:
+
+- **Links** - RF details on center frequency, direction, and bandwidth for each link.
+- **Ephemeris** - The latest TLE.
+- **Licensing** - Authorizations held on a per link per ground station basis.
+
+## Links
+
+Make sure to capture each link that you wish to use with Azure Orbital when you create the spacecraft object. The following details are required:
+
+| **Field** | **Values** |
+||--|
+| Direction | Uplink or Downlink |
+| Center Frequency | Center frequency in MHz |
+| Bandwidth | Bandwidth in MHz |
+| Polarization | RHCP, LHCP, or Linear Vertical |
+
+Dual polarization schemes are represented by two individual links with their respective LHCP and RHCP polarizations.
+
+## Ephemeris
+
+The spacecraft ephemeris is captured in Azure Orbital using a Two-Line Element set (TLE).
+
+A TLE is associated with the spacecraft to determine contact opportunities at scheduling time. During contact execution, the TLE is also used to determine the path the antenna must follow as the spacecraft passes over the ground station.
+
+As TLEs are prone to expiration, the user must keep the TLE up-to-date using the [TLE update](update-tle.md) procedure.
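Since stale TLEs are a common cause of poor pass prediction, it can help to check the epoch before scheduling. A sketch that parses the epoch from TLE line 1 (columns 19-32 hold a two-digit year plus fractional day of year; the TLE line shown is a made-up example, not real ephemeris):

```python
from datetime import datetime, timedelta, timezone

def tle_epoch(line1):
    """Parse the epoch from TLE line 1 (cols 19-32: YYDDD.DDDDDDDD)."""
    year = int(line1[18:20])
    year += 2000 if year < 57 else 1900  # standard TLE year convention
    day_of_year = float(line1[20:32])
    return datetime(year, 1, 1, tzinfo=timezone.utc) + timedelta(days=day_of_year - 1)

# Made-up TLE line 1 for illustration only
line1 = "1 99999U 22001A   22193.50000000  .00000000  00000-0  00000-0 0  9990"
epoch = tle_epoch(line1)
print("TLE epoch:", epoch.isoformat())
```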
+
+## Licensing
+
+In order to uphold regulatory requirements across the world, the spacecraft object contains authorizations at a per-link, per-ground-station level that permit usage of the Azure Orbital ground station sites.
+
+The platform will deny scheduling or execution of contacts if the spacecraft object links aren't authorized. The platform will also deny a contact if the contact profile contains links that aren't included in the spacecraft object's authorized links.
+
+For more information, see the Licensing (add link to: concepts-licensing.md when article is created) documentation.
+
+## Create spacecraft resource
+
+For more information on how to create a spacecraft resource, see the details listed in the [register a spacecraft](register-spacecraft.md) article.
+
+## Managing spacecraft objects
+
+Spacecraft objects can be created and deleted via the Portal and the Azure Orbital APIs. Once the object is created, the modifications allowed depend on the authorization status.
+
+When the spacecraft is unauthorized, the spacecraft object can be modified freely. The API is the best way to make changes, as the Portal only lets you make TLE updates.
+
+When the spacecraft is authorized, TLE updates are the only modifications possible. Other fields, such as links, become immutable. TLE updates are possible via the Portal and the Orbital SDK.
+
+## Delete spacecraft resource
+
+You can delete the spacecraft object via the Portal or through the API. See [How-to: Delete Contact](delete-contact.md)
+
+## Next steps
+
+- [Register a spacecraft](register-spacecraft.md)
orbital Update Tle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/update-tle.md
Title: 'Update the spacecraft TLE on Azure Orbital Earth Observation service'
-description: 'Update the spacecraft TLE'
+ Title: Update the spacecraft TLE on Azure Orbital Earth Observation service
+description: Update the TLE of an existing spacecraft resource.
- Previously updated : 11/16/2021+ Last updated : 06/03/2022 # Customer intent: As a satellite operator, I want to ingest data from my satellite into Azure.
-# Update the spacecraft TLE
+# Tutorial: Update the spacecraft TLE
Update the TLE of an existing spacecraft resource.
Update the TLE of an existing spacecraft resource.
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - A registered spacecraft. Learn more on how to [register spacecraft](register-spacecraft.md).
-## Sign in to Azure
-
-Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
- ## Update the spacecraft TLE
-1. In the Azure portal search box, enter **Spacecrafts**. Select **Spacecrafts** in the search results.
-2. In the **Spacecrafts** page, select the name of the spacecraft for which to update the ephemeris.
+1. In the Azure portal search box, enter **Spacecraft**. Select **Spacecraft** in the search results.
+2. In the **Spacecraft** page, select the name of the spacecraft for which to update the ephemeris.
3. Select **Ephemeris** on the left menu bar of the spacecraft's overview. 4. In Ephemeris, enter this information in each of the required fields: | **Field** | **Value** | | | | | TLE title line | Spacecraft updated TLE Title Line |
- | TLE Line 1 | Spacecraft updated TLE Line 1 |
- | TLE Line 2 | Spacecraft updated TLE Line 2 |
+ | TLE Line 1 | Updated TLE Line 1 |
+ | TLE Line 2 | Updated TLE Line 2 |
:::image type="content" source="media/orbital-eos-ephemeris.png" alt-text="Spacecraft TLE update" lightbox="media/orbital-eos-ephemeris.png":::
Sign in to the [Azure portal - Orbital Preview](https://aka.ms/orbital/portal).
## Next steps - [Tutorial: Schedule a contact](schedule-contact.md)-- [Tutorial: Cancel a scheduled contact](delete-contact.md)
+- [Tutorial: Cancel a scheduled contact](delete-contact.md)
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-firewall-rules.md
For example, if your application connects with a Java Database Connectivity (JDB
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL
+> [!NOTE]
+> To access Azure Database for PostgreSQL from your local computer, ensure that the firewall on your network and local computer allow outgoing communication on TCP port 5432.
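One quick way to verify that outbound TCP 5432 traffic is allowed from your machine is a short socket probe (a sketch; the server name is a placeholder for your own):

```python
import socket

def can_reach(host, port=5432, timeout=5):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder server name for illustration
print(can_reach("<your-server>.postgres.database.azure.com"))
```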
+ ## Connect from Azure We recommend that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service app, or use a public IP address that's tied to a virtual machine.
postgresql Concepts Firewall Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-firewall-rules.md
If the source IP address of the request is within one of the ranges specified in
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: > org.postgresql.util.PSQLException: FATAL: no pg\_hba.conf entry for host "123.45.67.890", user "adminuser", database "postgresql", SSL
+> [!NOTE]
+> To access Azure Database for PostgreSQL from your local computer, ensure that the firewall on your network and local computer allow outgoing communication on TCP port 5432.
+ ## Connecting from Azure It is recommended that you find the outgoing IP address of any application or service and explicitly allow access to those individual IP addresses or ranges. For example, you can find the outgoing IP address of an Azure App Service or use a public IP tied to a virtual machine or other resource (see below for info on connecting with a virtual machine's private IP over service endpoints).
postgresql Concepts Supported Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/concepts-supported-versions.md
Azure Database for PostgreSQL currently supports the following major versions:
## PostgreSQL version 11
-The current minor release is 11.12. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/static/release-11-12.html) to learn more about improvements and fixes in this minor release.
+The current minor release is 11.16. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/11/release-11-16.html) to learn more about improvements and fixes in this minor release.
## PostgreSQL version 10
-The current minor release is 10.17. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/static/release-10-17.html) to learn more about improvements and fixes in this minor release.
+The current minor release is 10.21. Refer to the [PostgreSQL documentation](https://www.postgresql.org/docs/10/release-10-21.html) to learn more about improvements and fixes in this minor release.
## PostgreSQL version 9.6 (retired)
search Cognitive Search Debug Session https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/cognitive-search-debug-session.md
Previously updated : 06/15/2022 Last updated : 07/12/2022 # Debug Sessions in Azure Cognitive Search
Debug Sessions is a visual editor that works with an existing skillset in the Az
## How a debug session works
-When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document that will be used to test the skillset. All session state will be saved to a blob container in an Azure Storage account that you provide. You can reuse the same container for all subsequent debug sessions you create. A helpful container name might be "cognitive-search-debug-sessions".
+When you start a session, the search service creates a copy of the skillset, indexer, and a data source containing a single document that will be used to test the skillset. All session state will be saved to a new blob container created by the Azure Cognitive Search service in an Azure Storage account that you provide. The name of the generated container has a prefix of "ms-az-cognitive-search-debugsession".
A cached copy of the enriched document and skillset is loaded into the visual editor so that you can inspect the content and metadata of the enriched document, with the ability to check each document node and edit any aspect of the skillset definition. Any changes made within the session are cached. Those changes will not affect the published skillset unless you commit them. Committing changes will overwrite the production skillset.
search Index Similarity And Scoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/index-similarity-and-scoring.md
Both BM25 and Classic are TF-IDF-like retrieval functions that use the term freq
BM25 offers advanced customization options, such as allowing the user to decide how the relevance score scales with the term frequency of matched terms. For more information, see [Configure the scoring algorithm](index-ranking-similarity.md).

> [!NOTE]
-> If you're using a search service that was created before July 2020, the scoring algorithm is most likely the previous default, `ClassicSimilarity`, which you an upgrade on a per-index basis. See [Enable BM25 scoring on older services](index-ranking-similarity.md#enable-bm25-scoring-on-older-services) for details.
+> If you're using a search service that was created before July 2020, the scoring algorithm is most likely the previous default, `ClassicSimilarity`, which you can upgrade on a per-index basis. See [Enable BM25 scoring on older services](index-ranking-similarity.md#enable-bm25-scoring-on-older-services) for details.
The following video segment fast-forwards to an explanation of the generally available ranking algorithms used in Azure Cognitive Search. You can watch the full video for more background.
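As an illustrative sketch of the BM25 family of functions discussed above (the `k1` and `b` defaults here are common textbook values, not necessarily the ones Azure Cognitive Search uses internally):

```python
import math

def bm25_term_score(tf, df, n_docs, doc_len, avg_doc_len, k1=1.2, b=0.75):
    """Score one query term for one document with a standard BM25 formula."""
    # Inverse document frequency: rarer terms contribute more.
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    # Term-frequency saturation: k1 caps the benefit of repeating a term;
    # b controls how strongly longer-than-average documents are penalized.
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm
```

The "decide how the relevance score scales with term frequency" customization corresponds to tuning `k1` (saturation) and `b` (length normalization).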
You can consume these data points in [custom scoring solutions](https://github.c
+ [Scoring Profiles](index-add-scoring-profiles.md)
+ [REST API Reference](/rest/api/searchservice/)
+ [Search Documents API](/rest/api/searchservice/search-documents)
-+ [Azure Cognitive Search .NET SDK](/dotnet/api/overview/azure/search)
++ [Azure Cognitive Search .NET SDK](/dotnet/api/overview/azure/search)
service-connector Concept Region Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/concept-region-support.md
When you create a service connection with Service Connector, the conceptual conn
If your compute service instance is located in one of the regions that Service Connector supports below, you can use Service Connector to create and manage service connections.

- Australia East
+- Canada Central
+- East Asia
- East US
- East US 2 EUAP
+- Germany West Central
- Japan East
+- Korea Central
- North Europe
- UK South
- West Central US
storage Anonymous Read Access Client https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-client.md
Title: Access public containers and blobs anonymously with .NET
description: Use the Azure Storage client library for .NET to access public containers and blobs anonymously. -+ Last updated 02/16/2022-+ ms.devlang: csharp
storage Anonymous Read Access Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-configure.md
Title: Configure anonymous public read access for containers and blobs
description: Learn how to allow or disallow anonymous access to blob data for the storage account. Set the container public access setting to make containers and blobs available for anonymous access. -+ Last updated 03/01/2022-+
storage Anonymous Read Access Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/anonymous-read-access-prevent.md
Title: Prevent anonymous public read access to containers and blobs
description: Learn how to analyze anonymous requests against a storage account and how to prevent anonymous access for the entire storage account or for an individual container. -+ Last updated 12/09/2020-+
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/assign-azure-role-data-access.md
Title: Assign an Azure role for access to blob data
description: Learn how to assign permissions for blob data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD. -+ Last updated 04/19/2022-+
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-access-azure-active-directory.md
Title: Authorize access to blobs using Active Directory
description: Authorize access to Azure blobs using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account. -+ Last updated 10/14/2021-+
storage Authorize Data Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-cli.md
Title: Choose how to authorize access to blob data with Azure CLI
description: Specify how to authorize data operations against blob data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token. -+ Last updated 07/12/2021-+
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-portal.md
Title: Choose how to authorize access to blob data in the Azure portal
description: When you access blob data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key. -+ Last updated 12/10/2021-+
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-data-operations-powershell.md
Title: Run PowerShell commands with Azure AD credentials to access blob data
description: PowerShell supports signing in with Azure AD credentials to run commands on blob data in Azure Storage. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal. -+ Last updated 05/12/2022-+
storage Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/authorize-managed-identity.md
Title: Authorize access to blob data with a managed identity
description: Use managed identities for Azure resources to authorize blob data access from applications running in Azure VMs, function apps, and others. -+ Last updated 10/11/2021-+ ms.devlang: csharp
storage Client Side Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/client-side-encryption.md
Previously updated : 07/11/2022 Last updated : 07/12/2022
For a sample project that shows how to migrate data from client-side encryption
For a sample project that shows how to migrate data from client-side encryption v1 to v2 and how to encrypt data with client-side encryption v2 in Python, see [Client Side Encryption Migration from V1 to V2](https://github.com/wastore/azure-storage-samples-for-python/tree/master/ClientSideEncryptionToServerSideEncryptionMigrationSamples/ClientSideEncryptionV1ToV2).

## Client-side encryption and performance

Keep in mind that encrypting your storage data results in additional performance overhead. When you use client-side encryption in your application, the client library must securely generate the CEK and IV, encrypt the content itself, communicate with your chosen keystore for key-enveloping, and format and upload additional metadata. This overhead varies depending on the quantity of data being encrypted. We recommend that customers always test their applications for performance during development.
storage Data Lake Storage Access Control Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control-model.md
Title: Access control model for Azure Data Lake Storage Gen2 description: Learn how to configure container, directory, and file-level access in accounts that have a hierarchical namespace.-+
storage Data Lake Storage Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-access-control.md
Title: Access control lists in Azure Data Lake Storage Gen2 description: Understand how POSIX-like ACLs access control lists work in Azure Data Lake Storage Gen2.-+
storage Data Lake Storage Acl Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-azure-portal.md
Title: Use the Azure portal to manage ACLs in Azure Data Lake Storage Gen2 description: Use the Azure portal to manage access control lists (ACLs) in storage accounts that has hierarchical namespace (HNS) enabled.-+
storage Data Lake Storage Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-acl-cli.md
Title: Use Azure CLI to manage ACLs in Azure Data Lake Storage Gen2 description: Use the Azure CLI to manage access control lists (ACL) in storage accounts that have a hierarchical namespace. -+ Last updated 02/17/2021-+
storage Data Lake Storage Directory File Acl Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-cli.md
Title: Use Azure CLI to manage data (Azure Data Lake Storage Gen2) description: Use the Azure CLI to manage directories and files in storage accounts that have a hierarchical namespace. -+ Last updated 02/17/2021-+
storage Data Lake Storage Directory File Acl Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-directory-file-acl-dotnet.md
Title: Use .NET to manage data in Azure Data Lake Storage Gen2 description: Use the Azure Storage client library for .NET to manage directories and files in storage accounts that has hierarchical namespace enabled.-+ Last updated 02/17/2021-+
storage Data Lake Storage Explorer Acl https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-lake-storage-explorer-acl.md
Title: 'Storage Explorer: Set ACLs in Azure Data Lake Storage Gen2' description: Use the Azure Storage Explorer to manage access control lists (ACLs) in storage accounts that has hierarchical namespace (HNS) enabled.-+
storage Data Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/data-protection-overview.md
Customer-managed failover is not currently supported for storage accounts with a
## Next steps

- [Azure Storage redundancy](../common/storage-redundancy.md)
-- [Disaster recovery and storage account failover](../common/storage-disaster-recovery-guidance.md)
+- [Disaster recovery and storage account failover](../common/storage-disaster-recovery-guidance.md)
storage Scalability Targets Premium Block Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-block-blobs.md
Title: Scalability targets for premium block blob storage accounts
description: Learn about premium-performance block blob storage accounts. Block blob storage accounts are optimized for applications that use smaller, kilobyte-range objects. -+ Last updated 12/18/2019-+
storage Scalability Targets Premium Page Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets-premium-page-blobs.md
Title: Scalability targets for premium page blob storage accounts
description: A premium performance page blob storage account is optimized for read/write operations. This type of storage account backs an unmanaged disk for an Azure virtual machine. -+ Last updated 09/24/2021-+
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/scalability-targets.md
Title: Scalability and performance targets for Blob storage
description: Learn about scalability and performance targets for Blob storage. -+ Last updated 04/01/2021-+
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
This article describes limitations and known issues of SFTP support for Azure Bl
> > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+> [!IMPORTANT]
+> Because you must enable hierarchical namespace for your account to use SFTP, all of the issues that are described in the [Known issues with Azure Data Lake Storage Gen2](data-lake-storage-known-issues.md) article also apply to your account.
+ ## Known unsupported clients
+
+ The following clients are known to be incompatible with SFTP for Azure Blob Storage (preview). See [Supported algorithms](secure-file-transfer-protocol-support.md#supported-algorithms) for more information.
The following clients are known to be incompatible with SFTP for Azure Blob Stor
- paramiko 1.16.0
- SSH.NET 2016.1.0
-> [!NOTE]
-> The unsupported client list above is not exhaustive and may change over time.
+The unsupported client list above is not exhaustive and may change over time.
## Unsupported operations
The following clients are known to be incompatible with SFTP for Azure Blob Stor
| Links |<li>`symlink` - creating symbolic links<li>`ln` - creating hard links<li>Reading links not supported |
| Capacity Information | `df` - usage info for filesystem |
| Extensions | Unsupported extensions include but aren't limited to: fsync@openssh.com, limits@openssh.com, lsetstat@openssh.com, statvfs@openssh.com |
-| SSH Commands | SFTP is the only supported subsystem. Shell requests after the completion of the key exchange will fail. |
-| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) aren't allowed from other protocols on blobs that are created by using SFTP. Full overwrites are allowed.|
+| SSH Commands | SFTP is the only supported subsystem. Shell requests after the completion of key exchange will fail. |
+| Multi-protocol writes | Random writes and appends (`PutBlock`,`PutBlockList`, `GetBlockList`, `AppendBlock`, `AppendFile`) aren't allowed from other protocols (NFS, Blob REST, Data Lake Storage Gen2 REST) on blobs that are created by using SFTP. Full overwrites are allowed.|
## Authentication and authorization
-
+
- _Local users_ is the only form of identity management that is currently supported for the SFTP endpoint.
- Azure Active Directory (Azure AD) isn't supported for the SFTP endpoint.
- POSIX-like access control lists (ACLs) aren't supported for the SFTP endpoint.
- > [!NOTE]
- > After your data is ingested into Azure Storage, you can use the full breadth of Azure storage security settings. While authorization mechanisms such as role-based access control (RBAC) and access control lists aren't supported as a means to authorize a connecting SFTP client, they can be used to authorize access via Azure tools (such Azure portal, Azure CLI, Azure PowerShell commands, and AzCopy) as well as Azure SDKS, and Azure REST APIs.
+To learn more, see [SFTP permission model](secure-file-transfer-protocol-support.md#sftp-permission-model) and [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md).
-- Account and container level operations aren't supported for the SFTP endpoint.
-
## Networking

- To access the storage account using SFTP, your network must allow traffic on port 22.
-- Static IP addresses are not supported for storage accounts.
+- Static IP addresses aren't supported for storage accounts. This is not an SFTP-specific limitation.
- Internet routing is not supported. Use Microsoft network routing.
- There's a 2-minute timeout for idle or inactive connections. OpenSSH will appear to stop responding and then disconnect. Some clients reconnect automatically.
-## Security
-
-- Host keys are published [here](secure-file-transfer-protocol-host-keys.md). During the public preview, host keys may rotate frequently.
-
-- RSA keys must be minimum 2048 bits in length.
-
-- User supplied passwords are not supported. Passwords are generated by Azure and are minimum 88 characters in length.
-
-## Integrations
-
-- Change feed notifications aren't supported.
-
-## Performance
-
-For performance issues and considerations, see [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md).
- ## Other
+- For performance issues and considerations, see [SSH File Transfer Protocol (SFTP) performance considerations in Azure Blob storage](secure-file-transfer-protocol-performance.md).
+
- Special containers such as $logs, $blobchangefeed, $root, $web aren't accessible via the SFTP endpoint.
- Symbolic links aren't supported.
For performance issues and considerations, see [SSH File Transfer Protocol (SFTP
- FTPS and FTP aren't supported.
-
-- TLS and SSL are not related to SFTP.
+- TLS and SSL aren't related to SFTP.
## Troubleshooting
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
Blob storage now supports the SSH File Transfer Protocol (SFTP). This support pr
Azure allows secure data transfer to Blob Storage accounts using Azure Blob service REST API, Azure SDKs, and tools such as AzCopy. However, legacy workloads often use traditional file transfer protocols such as SFTP. You could update custom applications to use the REST API and Azure SDKs, but only by making significant code changes.
-Prior to the release of this feature, if you wanted to use SFTP to transfer data to Azure Blob Storage you would have to either purchase a third party product or orchestrate your own solution. You would have to create a virtual machine (VM) in Azure to host an SFTP server, and then figure out a way to move data into the storage account.
+Prior to the release of this feature, if you wanted to use SFTP to transfer data to Azure Blob Storage, you would have to either purchase a third-party product or orchestrate your own solution. For custom solutions, you would have to create virtual machines (VMs) in Azure to host an SFTP server, and then update, patch, manage, scale, and maintain a complex architecture.
-Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single setting. Then you can set up local user identities for authentication to transfer data securely without the need to do any more work.
+Now, with SFTP support for Azure Blob Storage, you can enable an SFTP endpoint for Blob Storage accounts with a single click. Then you can set up local user identities for authentication to connect to your storage account with SFTP via port 22.
This article describes SFTP support for Azure Blob Storage. To learn how to enable SFTP for your storage account, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP) (preview)](secure-file-transfer-protocol-support-how-to.md). ## SFTP and the hierarchical namespace
-SFTP support requires blobs to be organized into on a hierarchical namespace. The ability to use a hierarchical namespace was introduced by Azure Data Lake Storage Gen2. It organizes objects (files) into a hierarchy of directories and subdirectories in the same way that the file system on your computer is organized. The hierarchical namespace scales linearly and doesn't degrade data capacity or performance.
+SFTP support requires hierarchical namespace to be enabled. Hierarchical namespace organizes objects (files) into a hierarchy of directories and subdirectories in the same way that the file system on your computer is organized. The hierarchical namespace scales linearly and doesn't degrade data capacity or performance.
-Different protocols extend from the hierarchical namespace. The SFTP is one of these available protocols.
+Different protocols are supported by the hierarchical namespace. SFTP is one of these available protocols.
> [!div class="mx-imgBorder"] > ![hierarchical namespace](./media/secure-file-transfer-protocol-support/hierarchical-namespace-and-sftp-support.png)
To set up access permissions, you'll create a local user, and choose authenticat
> [!CAUTION]
> Local users do not interoperate with other Azure Storage permission models such as RBAC (role-based access control), ABAC (attribute-based access control), and ACLs (access control lists).
>
-> For example, user A has an Azure AD identity with only read permission for file _foo.txt_ and a local user identity with delete permission for container _con1_ in which _foo.txt_ is stored. In this case, User A could login in via SFTP using their local user identity and delete _foo.txt_.
+> For example, Jeff has read-only permission (which can be controlled via RBAC, ABAC, or ACLs) through their Azure AD identity for file _foo.txt_ stored in container _con1_. If Jeff accesses the storage account via NFS (when not mounted as root/superuser), Blob REST, or Data Lake Storage Gen2 REST, these permissions will be enforced. However, if Jeff also has a local user identity with delete permission for data in container _con1_, they can delete _foo.txt_ via SFTP by using the local user identity.
-For SFTP enabled storage accounts, you can use the full breadth of Azure Blob Storage security settings, to authenticate and authorize users accessing Blob Storage via Azure portal, Azure CLI, Azure PowerShell commands, AzCopy, as well as Azure SDKS, and Azure REST APIs. To learn more, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md)
+For SFTP-enabled storage accounts, you can use the full breadth of Azure Blob Storage security settings to authenticate and authorize users accessing Blob Storage via the Azure portal, Azure CLI, Azure PowerShell commands, AzCopy, as well as Azure SDKs and Azure REST APIs. To learn more, see [Access control model in Azure Data Lake Storage Gen2](data-lake-storage-access-control-model.md).
## Authentication methods
You can authenticate local users connecting via SFTP by using a password or a Se
#### Passwords
-Passwords are generated for you. If you choose password authentication, then your password will be provided after you finish configuring a local user. Make sure to copy that password and save it in a location where you can find it later. You won't be able to retrieve that password from Azure again. If you lose the password, you'll have to generate a new one. For security reasons, you can't set the password yourself.
+For security reasons, you cannot set a custom password; instead, Azure generates one for you. If you choose password authentication, your password will be provided after you finish configuring a local user. Make sure to copy that password and save it in a location where you can find it later. You won't be able to retrieve that password from Azure again. If you lose the password, you'll have to generate a new one.
#### SSH key pairs
In the current release, you can specify only container-level permissions. Direct
| Permission | Symbol | Description |
|---|---|---|
-| Read | r | <li>Read file contents</li> |
-| Write | w | <li>Upload file</li><li>Create directory</li><li>Upload directories</li> |
-| List | l | <li>List contents within container</li><li>List contents within directories</li> |
-| Delete | d | <li>Delete files/directories</li> |
-| Create | c | <li>Upload file if file doesn't exist</li><li>Create directory if it doesn't exist</li> |
+| Read | r | <li>Read file content</li> |
+| Write | w | <li>Upload file</li><li>Create directory</li><li>Upload directory</li> |
+| List | l | <li>List content within container</li><li>List content within directory</li> |
+| Delete | d | <li>Delete file/directory</li> |
+| Create | c | <li>Upload file if file doesn't exist</li><li>Create directory if directory doesn't exist</li> |
-> [!IMPORTANT]
-> When performing write operations on blobs in sub directories, Read permission is required to open the directory and access blob properties.
+When performing write operations on blobs in subdirectories, Read permission is required to open the directory and access blob properties.
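As an illustrative sketch only (this helper and its names are hypothetical, not part of any Azure SDK), the symbol column in the table above maps to permission names like so:

```python
# Mapping taken from the container-permission table above.
PERMISSIONS = {"r": "Read", "w": "Write", "l": "List", "d": "Delete", "c": "Create"}

def expand_permissions(symbols: str) -> list[str]:
    """Translate a symbol string such as 'rwl' into permission names."""
    return [PERMISSIONS[s] for s in symbols]
```

For example, a local user granted `rwl` on a container has Read, Write, and List, but cannot delete files or directories.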
## Home directory
put logfile.txt
You can use many different SFTP clients to securely connect and then transfer files. Connecting clients must use the algorithms specified in the table below.
-| Host key | Key exchange | Ciphers/encryption | Integrity/MAC | Public key |
+| Host key <sup>1</sup> | Key exchange | Ciphers/encryption | Integrity/MAC | Public key |
|-|--|--|||
-| rsa-sha2-256 <sup>1</sup> | ecdh-sha2-nistp384 | aes128-gcm@openssh.com | hmac-sha2-256 | ssh-rsa <sup>1</sup> |
-| rsa-sha2-512 <sup>1</sup> | ecdh-sha2-nistp256 | aes256-gcm@openssh.com | hmac-sha2-512 | ecdsa-sha2-nistp256 |
+| rsa-sha2-256 <sup>2</sup> | ecdh-sha2-nistp384 | aes128-gcm@openssh.com | hmac-sha2-256 | ssh-rsa <sup>2</sup> |
+| rsa-sha2-512 <sup>2</sup> | ecdh-sha2-nistp256 | aes256-gcm@openssh.com | hmac-sha2-512 | ecdsa-sha2-nistp256 |
| ecdsa-sha2-nistp256 | diffie-hellman-group14-sha256 | aes128-cbc | hmac-sha2-256-etm@openssh.com | ecdsa-sha2-nistp384 |
| ecdsa-sha2-nistp384 | diffie-hellman-group16-sha512 | aes192-cbc | hmac-sha2-512-etm@openssh.com | |
| | diffie-hellman-group-exchange-sha256 | aes256-cbc | | |
You can use many different SFTP clients to securely connect and then transfer fi
| | | aes192-ctr | | |
| | | aes256-ctr | | |
-<sup>1</sup> Requires minimum key length of 2048 bits.
+<sup>1</sup> Host keys are published [here](secure-file-transfer-protocol-host-keys.md).
+<sup>2</sup> RSA keys must be minimum 2048 bits in length.
SFTP support for Azure Blob Storage currently limits its cryptographic algorithm support based on security considerations. We strongly recommend that customers utilize [Microsoft Security Development Lifecycle (SDL) approved algorithms](/security/sdl/cryptographic-recommendations) to securely access their data.
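For example, an OpenSSH client can be pinned to a subset of the approved algorithms from the table above through standard `ssh_config` options (a sketch only; the host name is a hypothetical storage account endpoint):

```
# ~/.ssh/config -- restrict negotiation to algorithms from the table above
Host myaccount.blob.core.windows.net
    Port 22
    HostKeyAlgorithms rsa-sha2-512,rsa-sha2-256
    KexAlgorithms ecdh-sha2-nistp384,ecdh-sha2-nistp256
    Ciphers aes256-gcm@openssh.com,aes128-gcm@openssh.com
    MACs hmac-sha2-512,hmac-sha2-256
```

Pinning on the client side makes a failed negotiation surface immediately as an algorithm mismatch rather than a vague connection error.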
-> [!IMPORTANT]
-> At this time, we do not plan on supporting the following: `ssh-dss`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`, `hmac-sha1`, `hmac-sha1-96`.
+At this time, in accordance with the Microsoft Security SDL, we do not plan on supporting the following: `ssh-dss`, `diffie-hellman-group14-sha1`, `diffie-hellman-group1-sha1`, `hmac-sha1`, `hmac-sha1-96`. Algorithm support is subject to change in the future.
-Algorithm support is subject to change in the future.
+## Connecting with SFTP
+
+To get started, enable SFTP support, create a local user, and assign permissions for that local user. Then, you can use any SFTP client to securely connect and then transfer files. For step-by-step guidance, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md).
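As a sketch, assuming the connection target takes the form `account.localuser@account.blob.core.windows.net` (the account and user names below are hypothetical, and this format is an assumption not stated in this excerpt):

```python
def sftp_target(account: str, user: str) -> str:
    """Build an SFTP connection target for an Azure Blob Storage local user.

    Assumes the '<account>.<user>@<account>.blob.core.windows.net' form.
    """
    return f"{account}.{user}@{account}.blob.core.windows.net"

# e.g. pass the result to an SFTP client:
#   sftp myaccount.myuser@myaccount.blob.core.windows.net
```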
### Known supported clients
The following clients have compatible algorithm support with SFTP for Azure Blob
- Workday
- XFB.Gateway
-> [!NOTE]
-> The supported client list above is not exhaustive and may change over time.
-
-## Connecting with SFTP
-
-To get started, enable SFTP support, create a local user, and assign permissions for that local user. Then, you can use any SFTP client to securely connect and then transfer files. For step-by-step guidance, see [Connect to Azure Blob Storage by using the SSH File Transfer Protocol (SFTP)](secure-file-transfer-protocol-support-how-to.md).
+The supported client list above is not exhaustive and may change over time.
## Limitations and known issues
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/security-recommendations.md
Title: Security recommendations for Blob storage
description: Learn about security recommendations for Blob storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model. -+ Last updated 05/12/2022-+
storage Snapshots Manage Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/snapshots-manage-dotnet.md
await blockBlob.DeleteIfExistsAsync(DeleteSnapshotsOption.IncludeSnapshots, null
- [Blob snapshots](snapshots-overview.md)
- [Blob versions](versioning-overview.md)
-- [Soft delete for blobs](./soft-delete-blob-overview.md)
+- [Soft delete for blobs](./soft-delete-blob-overview.md)
storage Storage Blob Block Blob Premium https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-block-blob-premium.md
Title: Premium block blob storage accounts description: Achieve lower and consistent latencies for Azure Storage workloads that require fast and consistent response times.-+ -+ Last updated 10/14/2021
storage Storage Blob Encryption Status https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-encryption-status.md
Title: Check the encryption status of a blob - Azure Storage description: Learn how to use Azure portal, PowerShell, or Azure CLI to check whether a given blob is encrypted. If a blob is not encrypted, learn how to use AzCopy to force encryption by downloading and re-uploading the blob. -+ Last updated 11/26/2019-+
storage Storage Blob Reserved Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-reserved-capacity.md
Title: Optimize costs for Blob storage with reserved capacity
description: Learn about purchasing Azure Storage reserved capacity to save costs on block blob and Azure Data Lake Storage Gen2 resources. -+ Last updated 05/17/2021-+
storage Storage Blob User Delegation Sas Create Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-cli.md
Title: Use Azure CLI to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using Azure CLI. -+ Last updated 12/18/2019-+
storage Storage Blob User Delegation Sas Create Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-user-delegation-sas-create-powershell.md
Title: Use PowerShell to create a user delegation SAS for a container or blob
description: Learn how to create a user delegation SAS with Azure Active Directory credentials by using PowerShell. -+ Last updated 12/18/2019-+
storage Storage Blobs Latency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blobs-latency.md
Title: Latency in Blob storage - Azure Storage description: Understand and measure latency for Blob storage operations, and learn how to design your Blob storage applications for low latency. -+ Last updated 09/05/2019-+
storage Storage How To Mount Container Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-how-to-mount-container-linux.md
Title: How to mount Azure Blob storage as a file system on Linux | Microsoft Docs
description: Learn how to mount an Azure Blob storage container with blobfuse, a virtual file system driver on Linux.
Last updated 04/28/2022
storage Storage Samples Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-cli.md
Title: Azure CLI samples for Blob storage | Microsoft Docs
description: See links to Azure CLI samples for working with Azure Blob Storage, such as creating a storage account, deleting containers with a specific prefix, and more.
Last updated 06/13/2017
storage Storage Samples Blobs Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-samples-blobs-powershell.md
Title: Azure PowerShell samples for Azure Blob storage | Microsoft Docs
description: See links to Azure PowerShell script samples for working with Azure Blob storage, such as creating a storage account, migrating blobs across accounts, and more.
Last updated 11/07/2017
The following table includes links to PowerShell script samples that create and
|**Blob storage**||
| [Calculate the total size of a Blob storage container](../scripts/storage-blobs-container-calculate-size-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Calculates the total size of all the blobs in a container. |
| [Calculate the size of a Blob storage container for billing purposes](../scripts/storage-blobs-container-calculate-billing-size-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Calculates the size of a container in Blob storage for the purpose of estimating billing costs. |
| [Delete containers with a specific prefix](../scripts/storage-blobs-container-delete-by-prefix-powershell.md?toc=%2fpowershell%2fmodule%2ftoc.json) | Deletes containers starting with a specified string. |
storage Authorization Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorization-resource-provider.md
Title: Use the Azure Storage resource provider to access management resources
description: The Azure Storage resource provider is a service that provides access to management resources for Azure Storage. You can use the Azure Storage resource provider to create, update, manage, and delete resources such as storage accounts, private endpoints, and account access keys.
Last updated 12/12/2019
For more information about Azure deployment models, see [Resource Manager and cl
- [Azure Resource Manager overview](../../azure-resource-manager/management/overview.md)
- [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
- [Scalability targets for the Azure Storage resource provider](scalability-targets-resource-provider.md)
storage Authorize Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/authorize-data-access.md
Title: Authorize operations for data access
description: Learn about the different ways to authorize access to data in Azure Storage. Azure Storage supports authorization with Azure Active Directory, Shared Key authorization, or shared access signatures (SAS), and also supports anonymous access to blobs.
Last updated 04/14/2022
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Title: Configure Microsoft Defender for Storage
description: Configure Microsoft Defender for Storage to detect anomalies in account activity and be notified of potentially harmful attempts to access your account.
Last updated 05/31/2022
storage Configure Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/configure-network-routing-preference.md
Title: Configure network routing preference
description: Configure network routing preference for your Azure storage account to specify how network traffic is routed to your account from clients over the Internet.
Last updated 03/17/2021
storage Last Sync Time Get https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/last-sync-time-get.md
Title: Check the Last Sync Time property for a storage account
description: Learn how to check the Last Sync Time property for a geo-replicated storage account. The Last Sync Time property indicates the last time at which all writes from the primary region were successfully written to the secondary region.
Last updated 05/28/2020
storage Lock Account Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/lock-account-resource.md
Title: Apply an Azure Resource Manager lock to a storage account
description: Learn how to apply an Azure Resource Manager lock to a storage account.
Last updated 03/09/2021
storage Network Routing Preference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/network-routing-preference.md
Title: Network routing preference
description: Network routing preference enables you to specify how network traffic is routed to your account from clients over the internet.
Last updated 02/11/2021
For pricing and billing details, see the **Pricing** section in [What is routing
- [What is routing preference?](../../virtual-network/ip-services/routing-preference-overview.md)
- [Configure network routing preference](configure-network-routing-preference.md)
- [Configure Azure Storage firewalls and virtual networks](storage-network-security.md)
- [Security recommendations for Blob storage](../blobs/security-recommendations.md)
storage Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md
Title: Built-in policy definitions for Azure Storage
description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources.
Last updated 07/06/2022
storage Redundancy Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/redundancy-migration.md
Title: Change how a storage account is replicated
description: Learn how to change how data in an existing storage account is replicated.
Last updated 06/14/2022
storage Resource Graph Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/resource-graph-samples.md
Title: Azure Resource Graph sample queries for Azure Storage
description: Sample Azure Resource Graph queries for Azure Storage showing use of resource types and tables to access Azure Storage related resources and properties.
Last updated 07/07/2022
storage Sas Expiration Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/sas-expiration-policy.md
Title: Create an expiration policy for shared access signatures
description: Create a policy on the storage account that defines the length of time that a shared access signature (SAS) should be valid. Learn how to monitor policy violations to remediate security risks.
Last updated 04/18/2022
storage Scalability Targets Resource Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-resource-provider.md
Title: Scalability for the Azure Storage resource provider
description: Scalability and performance targets for operations against the Azure Storage resource provider. The resource provider implements Azure Resource Manager for Azure Storage.
Last updated 12/18/2019
The service-level agreement (SLA) for Azure Storage accounts is available at [SL
## See also
- [Scalability and performance targets for standard storage accounts](scalability-targets-standard-account.md)
- [Azure subscription limits and quotas](../../azure-resource-manager/management/azure-subscription-service-limits.md)
storage Scalability Targets Standard Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/scalability-targets-standard-account.md
Title: Scalability and performance targets for standard storage accounts
description: Learn about scalability and performance targets for standard storage accounts.
Last updated 05/25/2022
storage Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Storage
description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
Last updated 07/06/2022
storage Shared Key Authorization Prevent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/shared-key-authorization-prevent.md
Title: Prevent authorization with Shared Key
description: To require clients to use Azure AD to authorize requests, you can disallow requests to the storage account that are authorized with Shared Key.
Last updated 04/01/2022
ms.devlang: azurecli
storage Storage Account Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-create.md
Title: Create a storage account
description: Learn to create a storage account to store blobs, files, queues, and tables. An Azure storage account provides a unique namespace in Microsoft Azure for reading and writing your data.
Last updated 05/26/2022
storage Storage Account Get Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-get-info.md
Title: Get storage account configuration information
description: Use the Azure portal, PowerShell, or Azure CLI to retrieve storage account configuration properties, including the Azure Resource Manager resource ID, account location, account type, or replication SKU.
Last updated 05/26/2022
storage Storage Account Keys Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-keys-manage.md
Title: Manage account access keys
description: Learn how to view, manage, and rotate your storage account access keys.
Last updated 04/14/2022
storage Storage Account Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-move.md
Title: Move an Azure Storage account to another region | Microsoft Docs
description: Shows you how to move an Azure Storage account to another region.
Last updated 06/15/2022
storage Storage Account Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-overview.md
Title: Storage account overview
description: Learn about the different types of storage accounts in Azure Storage. Review account naming, performance tiers, access tiers, redundancy, encryption, endpoints, and more.
Last updated 06/28/2022
storage Storage Account Recover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-recover.md
Title: Recover a deleted storage account
description: Learn how to recover a deleted storage account within the Azure portal.
Last updated 06/23/2022
storage Storage Account Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-account-upgrade.md
Title: Upgrade to a general-purpose v2 storage account
description: Upgrade to general-purpose v2 storage accounts using the Azure portal, PowerShell, or the Azure CLI. Specify an access tier for blob data.
Last updated 04/29/2021
storage Storage Auth Abac Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-attributes.md
Title: Actions and attributes for Azure role assignment conditions in Azure Stor
description: Supported actions and attributes for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC) in Azure Storage.
Last updated 05/24/2022
storage Storage Auth Abac Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-cli.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs us
description: Add a role assignment condition to restrict access to blobs using Azure CLI and Azure attribute-based access control (Azure ABAC).
Last updated 11/16/2021
storage Storage Auth Abac Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-examples.md
Title: Example Azure role assignment conditions (preview) - Azure RBAC
description: Example Azure role assignment conditions for Azure attribute-based access control (Azure ABAC).
Last updated 05/24/2022
storage Storage Auth Abac Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-portal.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs us
description: Add a role assignment condition to restrict access to blobs using the Azure portal and Azure attribute-based access control (Azure ABAC).
Last updated 11/16/2021
storage Storage Auth Abac Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-powershell.md
Title: "Tutorial: Add a role assignment condition to restrict access to blobs us
description: Add a role assignment condition to restrict access to blobs using Azure PowerShell and Azure attribute-based access control (Azure ABAC).
Last updated 11/16/2021
storage Storage Auth Abac Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac-security.md
Title: Security considerations for Azure role assignment conditions in Azure Sto
description: Security considerations for Azure role assignment conditions and Azure attribute-based access control (Azure ABAC).
Last updated 05/06/2021
storage Storage Auth Abac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-auth-abac.md
Title: Authorize access to blobs using Azure role assignment conditions (preview
description: Authorize access to Azure blobs using Azure role assignment conditions and Azure attribute-based access control (Azure ABAC). Define conditions on role assignments using Storage attributes.
Last updated 05/16/2022
storage Storage Choose Data Transfer Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-choose-data-transfer-solution.md
Title: Choose an Azure solution for data transfer | Microsoft Docs
description: Learn how to choose an Azure solution for data transfer based on data sizes and available network bandwidth in your environment.
Last updated 09/25/2020
# Choose an Azure solution for data transfer
storage Storage Configure Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-configure-connection-string.md
Title: Configure a connection string
description: Configure a connection string for an Azure storage account. A connection string contains the information needed to authorize access to a storage account from your application at runtime using Shared Key authorization.
Last updated 05/26/2022
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Title: Disaster recovery and storage account failover
description: Azure Storage supports account failover for geo-redundant storage accounts. With account failover, you can initiate the failover process for your storage account if the primary endpoint becomes unavailable.
Last updated 03/01/2022
storage Storage Initiate Account Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-initiate-account-failover.md
Title: Initiate a storage account failover
description: Learn how to initiate an account failover in the event that the primary endpoint for your storage account becomes unavailable. The failover updates the secondary region to become the primary region for your storage account.
Last updated 05/07/2021
storage Storage Network Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-network-security.md
Title: Configure Azure Storage firewalls and virtual networks | Microsoft Docs
description: Configure layered network security for your storage account using Azure Storage firewalls and Azure Virtual Network.
Last updated 03/31/2022
storage Storage Powershell Independent Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md
Title: Use PowerShell to manage data in Azure independent clouds
description: Managing Storage in the China Cloud, Government Cloud, and German Cloud Using Azure PowerShell.
Last updated 12/04/2019
storage Storage Private Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-private-endpoints.md
Title: Use private endpoints
description: Overview of private endpoints for secure access to storage accounts from virtual networks.
Last updated 03/16/2021
storage Storage Redundancy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-redundancy.md
Title: Data redundancy
description: Understand data redundancy in Azure Storage. Data in your Microsoft Azure Storage account is replicated for durability and high availability.
Last updated 05/24/2022
storage Storage Require Secure Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-require-secure-transfer.md
Title: Require secure transfer to ensure secure connections
description: Learn how to require secure transfer for requests to Azure Storage. When you require secure transfer for a storage account, any requests originating from an insecure connection are rejected.
Last updated 06/01/2021
storage Storage Sas Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-sas-overview.md
Title: Grant limited access to data with shared access signatures (SAS)
description: Learn about using shared access signatures (SAS) to delegate access to Azure Storage resources, including blobs, queues, tables, and files.
Last updated 12/28/2021
storage Transport Layer Security Configure Client Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-client-version.md
Title: Configure Transport Layer Security (TLS) for a client application
description: Configure a client application to communicate with Azure Storage using a minimum version of Transport Layer Security (TLS).
Last updated 07/08/2020
storage Transport Layer Security Configure Minimum Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/transport-layer-security-configure-minimum-version.md
Title: Enforce a minimum required version of Transport Layer Security (TLS) for
description: Configure a storage account to require a minimum version of Transport Layer Security (TLS) for clients making requests against Azure Storage.
Last updated 07/07/2021
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/assign-azure-role-data-access.md
Title: Assign an Azure role for access to queue data
description: Learn how to assign permissions for queue data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD.
Last updated 07/13/2021
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-access-azure-active-directory.md
Title: Authorize access to queues using Active Directory
description: Authorize access to Azure queues using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account.
Last updated 07/13/2021
Azure CLI and PowerShell support signing in with Azure AD credentials. After you
## Next steps
- [Authorize access to data in Azure Storage](../common/authorize-data-access.md)
- [Assign an Azure role for access to queue data](assign-azure-role-data-access.md)
storage Authorize Data Operations Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-cli.md
Title: Choose how to authorize access to queue data with Azure CLI
description: Specify how to authorize data operations against queue data with the Azure CLI. You can authorize data operations using Azure AD credentials, with the account access key, or with a shared access signature (SAS) token.
Last updated 02/10/2021
storage Authorize Data Operations Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-portal.md
Title: Choose how to authorize access to queue data in the Azure portal
description: When you access queue data using the Azure portal, the portal makes requests to Azure Storage under the covers. These requests to Azure Storage can be authenticated and authorized using either your Azure AD account or the storage account access key.
Last updated 12/13/2021
storage Authorize Data Operations Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-data-operations-powershell.md
Title: Run PowerShell commands with Azure AD credentials to access queue data
description: PowerShell supports signing in with Azure AD credentials to run commands on Azure Queue Storage data. An access token is provided for the session and used to authorize calling operations. Permissions depend on the Azure role assigned to the Azure AD security principal.
Last updated 02/10/2021
storage Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/authorize-managed-identity.md
Title: Authorize access to queue data with a managed identity
description: Use managed identities for Azure resources to authorize queue data access from applications running in Azure VMs, function apps, and others.
Last updated 10/11/2021
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/scalability-targets.md
Title: Scalability and performance targets for Queue Storage
description: Learn about scalability and performance targets for Queue Storage.
Last updated 12/18/2019
storage Security Recommendations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/security-recommendations.md
Title: Security recommendations for Queue Storage
description: Learn about security recommendations for Queue Storage. Implementing this guidance will help you fulfill your security obligations as described in our shared responsibility model.
Last updated 05/12/2022
storage Storage Powershell How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-powershell-how-to-use-queues.md
Title: How to use Azure Queue Storage from PowerShell - Azure Storage
description: Perform operations on Azure Queue Storage via PowerShell. With Azure Queue Storage, you can store large numbers of messages that are accessible by HTTP/HTTPS.
Last updated 05/15/2019
storage Assign Azure Role Data Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/assign-azure-role-data-access.md
Title: Assign an Azure role for access to table data
description: Learn how to assign permissions for table data to an Azure Active Directory security principal with Azure role-based access control (Azure RBAC). Azure Storage supports built-in and Azure custom roles for authentication and authorization via Azure AD.
Last updated 03/03/2022
storage Authorize Access Azure Active Directory https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/authorize-access-azure-active-directory.md
Title: Authorize access to tables using Active Directory
description: Authorize access to Azure tables using Azure Active Directory (Azure AD). Assign Azure roles for access rights. Access data with an Azure AD account.
Last updated 07/13/2021
storage Authorize Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/authorize-managed-identity.md
Title: Authorize access to table data with a managed identity
description: Use managed identities for Azure resources to authorize table data access from applications running in Azure VMs, function apps, and others.
Last updated 04/15/2022
ms.devlang: csharp
storage Scalability Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/tables/scalability-targets.md
Title: Scalability and performance targets for Table storage
description: Learn about scalability and performance targets for Table storage.
Last updated 03/09/2020
synapse-analytics Apache Spark Cdm Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/data-sources/apache-spark-cdm-connector.md
Title: Azure Synapse Spark Common Data Model (CDM) connector
description: Learn how to use the Azure Synapse Spark CDM connector to read and write CDM entities in a CDM folder on ADLS.
Last updated 03/10/2022
# Common Data Model (CDM) Connector for Azure Synapse Spark
The following features aren't yet supported:
You can now look at the other Apache Spark connectors:

* [Apache Spark Kusto connector](apache-spark-kusto-connector.md)
* [Apache Spark SQL connector](apache-spark-sql-connector.md)
traffic-manager Cli Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/cli-samples.md
Title: Azure CLI Samples for Traffic Manager | Microsoft Docs
description: Learn about an Azure CLI script you can use to direct traffic across multiple regions for high application availability.
documentationcenter: virtual-network
Last updated 10/23/2018
traffic-manager Configure Multivalue Routing Method Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/configure-multivalue-routing-method-template.md
Title: Configure the Multivalue routing method - Azure Resource Manager template (ARM template)
description: Learn how to configure the Multivalue routing method with nested endpoints and the min-child feature.
Last updated 04/28/2022
traffic-manager How To Add Endpoint Existing Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/how-to-add-endpoint-existing-profile-template.md
Title: Add an external endpoint to an existing profile - Azure Template
description: Learn how to add an external endpoint to an existing Azure Traffic Manager profile using an Azure Template.
Last updated 12/13/2021
traffic-manager Powershell Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/powershell-samples.md
Title: Azure PowerShell samples for Traffic Manager | Microsoft Docs
description: With this sample, use Azure PowerShell to deploy and configure Azure Traffic Manager.
documentationcenter: traffic-manager
Last updated 10/23/2018
# Azure PowerShell samples for Traffic Manager
traffic-manager Quickstart Create Traffic Manager Profile Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-bicep.md
Title: 'Quickstart: Create an Azure Traffic Manager profile - Bicep'
description: This quickstart article describes how to create an Azure Traffic Manager profile by using Bicep.
Last updated 06/20/2022
traffic-manager Quickstart Create Traffic Manager Profile Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-cli.md
Title: 'Quickstart: Create a profile for HA of applications - Azure CLI - Azure Traffic Manager'
description: This quickstart article describes how to create a Traffic Manager profile to build a highly available web application by using Azure CLI.
Last updated 04/19/2021
#Customer intent: As an IT admin, I want to direct user traffic to ensure high availability of web applications.
traffic-manager Quickstart Create Traffic Manager Profile Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-powershell.md
Title: 'Quickstart: Create a profile for high availability of applications - Azure PowerShell - Azure Traffic Manager'
description: This quickstart article describes how to create a Traffic Manager profile to build a highly available web application.
Last updated 04/19/2021
traffic-manager Quickstart Create Traffic Manager Profile Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile-template.md
Title: 'Quickstart: Create a Traffic Manager by using Azure Resource Manager template (ARM template)'
description: This quickstart article describes how to create an Azure Traffic Manager profile by using Azure Resource Manager template (ARM template).
Last updated 09/01/2020
traffic-manager Quickstart Create Traffic Manager Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/quickstart-create-traffic-manager-profile.md
Title: 'Quickstart: Create a profile for HA of applications - Azure portal - Azure Traffic Manager'
description: This quickstart article describes how to create a Traffic Manager profile to build a highly available web application using the Azure portal.
Last updated 04/19/2021
traffic-manager Traffic Manager Cli Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-cli-websites-high-availability.md
Title: Route traffic for HA of applications - Azure CLI - Traffic Manager
description: Azure CLI script sample - Route traffic for high availability of applications
documentationcenter: traffic-manager
tags: azure-infrastructure
Last updated 02/28/2022
# Route traffic for high availability of applications using Azure CLI
traffic-manager Traffic Manager Powershell Websites High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/scripts/traffic-manager-powershell-websites-high-availability.md
Title: Route traffic for HA of applications - Azure PowerShell - Traffic Manager
description: Azure PowerShell script sample - Route traffic for high availability of applications
documentationcenter: traffic-manager
tags: azure-infrastructure
Last updated 04/26/2018
traffic-manager Traffic Manager Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-FAQs.md
Title: Azure Traffic Manager - FAQs
description: This article provides answers to frequently asked questions about Traffic Manager. Last updated 01/31/2022
traffic-manager Traffic Manager Configure Geographic Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-geographic-routing-method.md
Title: 'Tutorial: Configure geographic traffic routing with Azure Traffic Manager' description: This tutorial explains how to configure the geographic traffic routing method using Azure Traffic Manager. Last updated 10/15/2020 # Tutorial: Configure the geographic traffic routing method using Traffic Manager
traffic-manager Traffic Manager Configure Multivalue Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-multivalue-routing-method.md
Title: Configure multivalue traffic routing - Azure Traffic Manager
description: This article explains how to configure Traffic Manager to route traffic to A/AAAA endpoints. Last updated 09/10/2018 # Configure MultiValue routing method in Traffic Manager
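The MultiValue method answers a single DNS query with several healthy A/AAAA endpoint addresses, so a client can retry another address if the first one fails. A minimal Python sketch of that selection idea (illustrative only, not Traffic Manager's implementation; the IPs are made up):

```python
# Sketch of MultiValue-style selection: return up to `max_return`
# healthy endpoint IPs in a single DNS-style answer.
def multivalue_answer(endpoints, max_return):
    """endpoints: list of (ip, healthy) tuples, in profile order."""
    healthy = [ip for ip, ok in endpoints if ok]
    return healthy[:max_return]

eps = [("203.0.113.1", True), ("203.0.113.2", False), ("203.0.113.3", True)]
print(multivalue_answer(eps, 2))  # ['203.0.113.1', '203.0.113.3']
```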
traffic-manager Traffic Manager Configure Performance Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-performance-routing-method.md
description: This article explains how to configure Traffic Manager to route traffic using the performance routing method. Last updated 03/20/2017 # Configure the performance traffic routing method
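Performance routing sends the client to the healthy endpoint with the lowest measured network latency. A minimal Python sketch of the idea (illustrative only; the latency table and endpoint names are made up):

```python
# Sketch of performance-style routing: answer with the healthy endpoint
# that has the lowest measured latency from the client's network.
def pick_performance(latency_ms, healthy):
    """latency_ms: {endpoint: latency from the client's region, in ms}."""
    candidates = {e: ms for e, ms in latency_ms.items() if e in healthy}
    return min(candidates, key=candidates.get)

table = {"westus": 40, "eastus": 95, "westeurope": 150}
print(pick_performance(table, healthy={"eastus", "westeurope"}))  # eastus
```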
traffic-manager Traffic Manager Configure Priority Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-priority-routing-method.md
Title: 'Tutorial: Configure priority traffic routing with Azure Traffic Manager'
description: This tutorial explains how to configure the priority traffic routing method in Traffic Manager. Last updated 10/16/2020 # Tutorial: Configure priority traffic routing method in Traffic Manager
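Priority routing always answers with the highest-priority healthy endpoint and falls back down the list when an endpoint fails. A minimal Python sketch of that failover logic (illustrative only, not Traffic Manager's implementation):

```python
# Sketch of priority routing: pick the lowest-numbered healthy endpoint.
def pick_priority(endpoints):
    """endpoints: list of (name, priority, healthy); lower number = preferred."""
    healthy = [(prio, name) for name, prio, ok in endpoints if ok]
    return min(healthy)[1] if healthy else None

eps = [("primary", 1, False), ("secondary", 2, True), ("dr", 3, True)]
print(pick_priority(eps))  # secondary
```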
traffic-manager Traffic Manager Configure Subnet Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-subnet-routing-method.md
Title: Configure subnet traffic routing - Azure Traffic Manager
description: This article explains how to configure Traffic Manager to route traffic from specific subnets. Last updated 09/17/2018 # Direct traffic to specific endpoints based on user subnet using Traffic Manager
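Subnet routing maps the caller's source IP against a set of address ranges and directs matching traffic to a chosen endpoint, with unmatched traffic going to a fallback. A minimal Python sketch using the standard `ipaddress` module (illustrative only; the ranges and endpoint names are made up):

```python
import ipaddress

# Sketch of subnet routing: match the client IP against CIDR ranges;
# anything unmatched goes to a fallback endpoint.
def route_by_subnet(client_ip, subnet_map, fallback):
    """subnet_map: list of (cidr, endpoint) pairs, checked in order."""
    ip = ipaddress.ip_address(client_ip)
    for cidr, endpoint in subnet_map:
        if ip in ipaddress.ip_network(cidr):
            return endpoint
    return fallback

ranges = [("10.1.0.0/16", "internal-test"), ("203.0.113.0/24", "partner")]
print(route_by_subnet("10.1.2.3", ranges, "public"))      # internal-test
print(route_by_subnet("198.51.100.7", ranges, "public"))  # public
```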
traffic-manager Traffic Manager Configure Weighted Routing Method https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-configure-weighted-routing-method.md
Title: 'Tutorial: Configure weighted round-robin traffic routing with Azure Traffic Manager' description: This tutorial explains how to load balance traffic using a round-robin method in Traffic Manager. Last updated 10/19/2020 # Tutorial: Configure the weighted traffic routing method in Traffic Manager
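With weighted routing, endpoints are chosen in proportion to their assigned weights. A minimal Python sketch of weight-proportional selection (illustrative only, not Traffic Manager's implementation; the helper takes an explicit roll so the logic is deterministic, and in use you would pass `random.randrange(total_weight)`):

```python
import random

# Sketch of weighted routing: walk the cumulative weights until the
# roll falls inside an endpoint's slice.
def pick_weighted(endpoints, roll):
    """endpoints: list of (name, weight); roll in [0, sum of weights)."""
    upto = 0
    for name, weight in endpoints:
        upto += weight
        if roll < upto:
            return name
    raise ValueError("roll out of range")

eps = [("eastus", 1), ("westus", 3)]
total = sum(w for _, w in eps)
# eastus is returned ~25% of the time, westus ~75%.
print(pick_weighted(eps, random.randrange(total)))
```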
traffic-manager Traffic Manager Create Rum Visual Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-create-rum-visual-studio.md
Title: Real User Measurements with Visual Studio Mobile Center - Azure Traffic Manager
description: Set up your mobile application developed using Visual Studio Mobile Center to send Real User Measurements to Traffic Manager
Last updated 03/16/2018
traffic-manager Traffic Manager Create Rum Web Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-create-rum-web-pages.md
Title: Real User Measurements with web pages - Azure Traffic Manager
description: In this article, learn how to set up your web pages to send Real User Measurements to Azure Traffic Manager. Last updated 04/06/2021 # How to send Real User Measurements to Azure Traffic Manager using web pages
traffic-manager Traffic Manager Diagnostic Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-diagnostic-logs.md
Title: Enable resource logging in Azure Traffic Manager description: Learn how to enable resource logging for your Traffic Manager profile and access the log files that are created as a result. Last updated 01/25/2019 # Enable resource logging in Azure Traffic Manager
traffic-manager Traffic Manager Endpoint Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-endpoint-types.md
Title: Traffic Manager Endpoint Types | Microsoft Docs
description: This article explains different types of endpoints that can be used with Azure Traffic Manager. Last updated 01/21/2021 # Traffic Manager endpoints
traffic-manager Traffic Manager Geographic Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-geographic-regions.md
Title: Country/Region hierarchy used by geographic routing - Azure Traffic Manager
description: This article lists the Country/Region hierarchy used by the Azure Traffic Manager geographic routing type. Last updated 03/22/2017 # Country/Region hierarchy used by Azure Traffic Manager for geographic traffic routing method
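Geographic routing resolves the client's location against a hierarchy of region codes (for example state/province, then country/region, then a continent-level grouping, then World), and the most specific level with a mapped endpoint wins. A minimal Python sketch of most-specific-first matching (illustrative only; the codes and endpoint names are made up):

```python
# Sketch of geographic routing: walk the location chain from most
# specific to least specific and return the first mapped endpoint.
def pick_geographic(mapping, location_chain):
    """mapping: {geo code: endpoint}; location_chain: most-specific-first."""
    for code in location_chain:
        if code in mapping:
            return mapping[code]
    return None

mapping = {"US": "us-endpoint", "GEO-EU": "eu-endpoint", "WORLD": "default"}
print(pick_geographic(mapping, ["US-WA", "US", "GEO-NA", "WORLD"]))  # us-endpoint
print(pick_geographic(mapping, ["FR", "GEO-EU", "WORLD"]))           # eu-endpoint
```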
traffic-manager Traffic Manager How It Works https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-how-it-works.md
Title: How Azure Traffic Manager works | Microsoft Docs
description: This article will help you understand how Traffic Manager routes traffic for high performance and availability of your web applications. Last updated 03/05/2019 # How Traffic Manager Works
traffic-manager Traffic Manager Load Balancing Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-load-balancing-azure.md
Title: Using load-balancing services in Azure | Microsoft Docs
description: 'This tutorial shows you how to create a scenario by using the Azure load-balancing portfolio: Traffic Manager, Application Gateway, and Load Balancer.' Last updated 10/27/2016 # Using load-balancing services in Azure
traffic-manager Traffic Manager Manage Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-manage-endpoints.md
Title: Manage endpoints in Azure Traffic Manager | Microsoft Docs
description: This article will help you add, remove, enable, and disable endpoints from Azure Traffic Manager. Last updated 05/08/2017 # Add, disable, enable, or delete endpoints
traffic-manager Traffic Manager Manage Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-manage-profiles.md
Title: Manage Azure Traffic Manager profiles | Microsoft Docs
description: This article helps you create, disable, enable, and delete an Azure Traffic Manager profile. Last updated 05/10/2017 # Manage an Azure Traffic Manager profile
traffic-manager Traffic Manager Metrics Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-metrics-alerts.md
Title: Metrics and Alerts in Azure Traffic Manager description: In this article, learn the metrics and alerts available for Traffic Manager in Azure. Last updated 06/11/2018 # Traffic Manager metrics and alerts
traffic-manager Traffic Manager Monitoring https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-monitoring.md
Title: Azure Traffic Manager endpoint monitoring description: This article can help you understand how Traffic Manager uses endpoint monitoring and automatic endpoint failover to help Azure customers deploy high-availability applications. Last updated 11/02/2021 # Traffic Manager endpoint monitoring
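Endpoint monitoring probes each endpoint on an interval and marks it Degraded once consecutive probe failures exceed the tolerated count, which removes it from DNS answers until a probe succeeds again. A minimal Python sketch of that threshold logic (illustrative only, not Traffic Manager's implementation):

```python
# Sketch of endpoint monitoring: an endpoint is Degraded when the number
# of consecutive failed health probes exceeds the tolerated count, and
# a single successful probe resets the failure streak.
def endpoint_status(probe_results, tolerated_failures):
    """probe_results: chronological booleans (True = probe succeeded)."""
    consecutive = 0
    for ok in probe_results:
        consecutive = 0 if ok else consecutive + 1
    return "Degraded" if consecutive > tolerated_failures else "Online"

print(endpoint_status([True, False, False, True], tolerated_failures=2))  # Online
print(endpoint_status([True, False, False, False, False], 2))             # Degraded
```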
traffic-manager Traffic Manager Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-overview.md
Title: Azure Traffic Manager | Microsoft Docs description: This article provides an overview of Azure Traffic Manager. Find out if it's the right choice for load-balancing user traffic for your application. Last updated 01/19/2021 # Customer intent: As an IT admin, I want to learn about Traffic Manager and what I can use it for.
traffic-manager Traffic Manager Performance Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-performance-considerations.md
Title: Performance considerations for Azure Traffic Manager | Microsoft Docs
description: Understand performance on Traffic Manager and how to test performance of your website when using Traffic Manager. Last updated 03/16/2017 # Performance considerations for Traffic Manager
traffic-manager Traffic Manager Point Internet Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-point-internet-domain.md
Title: Point an Internet domain to Traffic Manager - Azure Traffic Manager description: This article will help you point your company domain name to a Traffic Manager domain name. Last updated 10/11/2016 # Point a company Internet domain to an Azure Traffic Manager domain
traffic-manager Traffic Manager Powershell Arm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-powershell-arm.md
Title: Using PowerShell to manage Traffic Manager in Azure
description: With this learning path, get started using Azure PowerShell for Traffic Manager. Last updated 03/16/2017
traffic-manager Traffic Manager Routing Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-routing-methods.md
Title: Azure Traffic Manager - traffic routing methods description: This article helps you understand the different traffic routing methods used by Traffic Manager. Last updated 01/21/2021 # Traffic Manager routing methods
traffic-manager Traffic Manager Rum Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-rum-overview.md
Title: Real User Measurements in Azure Traffic Manager
description: In this introduction, learn how Azure Traffic Manager Real User Measurements work. Last updated 03/16/2018
traffic-manager Traffic Manager Subnet Override Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-cli.md
Title: Azure Traffic Manager subnet override using Azure CLI | Microsoft Docs
description: This article will help you understand how Traffic Manager subnet override can be used to override the routing method of a Traffic Manager profile and direct traffic to an endpoint based on the end-user IP address, using predefined IP range-to-endpoint mappings. Last updated 09/18/2019 # Traffic Manager subnet override using Azure CLI
traffic-manager Traffic Manager Subnet Override Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-subnet-override-powershell.md
Title: Azure Traffic Manager subnet override using Azure PowerShell | Microsoft Docs
description: This article will help you understand how Traffic Manager subnet override is used to override the routing method of a Traffic Manager profile and direct traffic to an endpoint based on the end-user IP address, using predefined IP range-to-endpoint mappings with Azure PowerShell. Last updated 09/18/2019 # Traffic Manager subnet override using Azure PowerShell
traffic-manager Traffic Manager Testing Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-testing-settings.md
Title: Verify Azure Traffic Manager settings description: In this article, learn how to verify your Traffic Manager settings and test the traffic routing method. Last updated 03/16/2017 # Verify Traffic Manager settings
traffic-manager Traffic Manager Troubleshooting Degraded https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/traffic-manager-troubleshooting-degraded.md
Title: Troubleshooting degraded status on Azure Traffic Manager
description: How to troubleshoot Traffic Manager profiles when they show a degraded status. Last updated 05/03/2017 # Troubleshooting degraded state on Azure Traffic Manager
traffic-manager Tutorial Traffic Manager Improve Website Response https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/tutorial-traffic-manager-improve-website-response.md
Title: Tutorial - Improve website response with Azure Traffic Manager description: This tutorial article describes how to create a Traffic Manager profile to build a highly responsive website. # Customer intent: As an IT Admin, I want to route traffic so I can improve website response by choosing the endpoint with lowest latency. Last updated 10/19/2020 # Tutorial: Improve website response using Traffic Manager
traffic-manager Tutorial Traffic Manager Subnet Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/traffic-manager/tutorial-traffic-manager-subnet-routing.md
Title: Tutorial - Configure subnet traffic routing with Azure Traffic Manager
description: This tutorial explains how to configure Traffic Manager to route traffic from user subnets to specific endpoints. Last updated 03/08/2021 # Tutorial: Direct traffic to specific endpoints based on user subnet using Traffic Manager
traffic-manager